Compressible generalized hybrid Monte Carlo.
Fang, Youhan; Sanz-Serna, J M; Skeel, Robert D
2014-05-01
One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics. PMID:24811626
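The basic HMC step that this framework generalizes (momentum refreshment, approximate integration of the Hamiltonian dynamics, then a Metropolis test on the energy error) can be sketched as follows. This is a minimal illustration, not the paper's method: the standard-Gaussian target, step size, and trajectory length are assumptions chosen for demonstration.

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, eps=0.1, n_leapfrog=20, rng=None):
    """One hybrid Monte Carlo step targeting the density exp(log_prob(q))."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)              # fresh Gaussian momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_prob(q_new)     # leapfrog: half momentum step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                      # full position step
        p_new += eps * grad_log_prob(q_new)       # full momentum step
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(q_new)     # final half momentum step
    # Metropolis test on the change in total energy (zero if integration were exact)
    h_old = -log_prob(q) + 0.5 * (p @ p)
    h_new = -log_prob(q_new) + 0.5 * (p_new @ p_new)
    return q_new if rng.random() < np.exp(min(0.0, h_old - h_new)) else q

# Sample a standard 2D Gaussian: log_prob = -|q|^2/2, grad = -q
rng = np.random.default_rng(0)
q, samples = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q, lambda x: -0.5 * (x @ x), lambda x: -x, rng=rng)
    samples.append(q.copy())
samples = np.array(samples)
```

Because the leapfrog integrator is reversible and volume preserving, the accept/reject test above needs only the change in total energy; the paper's contribution is to relax exactly these requirements, so that non-volume-preserving dynamics can be used with a suitably modified acceptance ratio.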
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Simulated Annealing using Hybrid Monte Carlo
R. Salazar; R. Toral
1997-07-31
We propose a variant of the simulated annealing method for the optimization of multivariate differentiable functions. The method uses global updates via the hybrid Monte Carlo algorithm, in its generalized version, to propose new configurations. We show how this choice can improve on the performance of simulated annealing methods (mainly when the number of variables is large) by allowing a more effective search scheme and a faster annealing schedule.
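As a toy illustration (not the authors' implementation), a simulated-annealing loop whose proposals are short gradient-guided, HMC-style moves might look like this; the quadratic objective, step size, and geometric schedule are assumptions chosen for demonstration.

```python
import numpy as np

def anneal(f, grad_f, x0, betas, eps=0.05, rng=None):
    """Simulated annealing in which each proposal is one gradient-guided
    (leapfrog, HMC-style) move for the tempered target exp(-beta * f),
    accepted or rejected by a Metropolis test."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for beta in betas:
        p = rng.standard_normal(x.shape)                  # fresh momentum
        p_half = p - 0.5 * eps * beta * grad_f(x)
        x_new = x + eps * p_half
        p_new = p_half - 0.5 * eps * beta * grad_f(x_new)
        dh = beta * (f(x_new) - f(x)) + 0.5 * (p_new @ p_new - p @ p)
        if rng.random() < np.exp(min(0.0, -dh)):
            x = x_new
    return x

# Toy objective: a quadratic bowl in four variables
rng = np.random.default_rng(1)
betas = np.geomspace(0.1, 100.0, 5000)   # slowly increasing inverse temperature
x = anneal(lambda v: v @ v, lambda v: 2.0 * v, np.full(4, 3.0), betas, rng=rng)
```

The gradient information lets each proposal move coherently toward lower-cost regions, which is the advantage the abstract attributes to HMC-based updates over local random-walk moves.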
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was found to be less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as to be expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast to the conventional non-TOF PET reconstructions. 
In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications such as development of hardware components as well as for testing of new PET/MR software algorithms, such as assessment of point-spread function-based reconstruction algorithms.
PMID:26040657
Hybrid algorithms in quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Esler, Kenneth P.; McMinis, Jeremy; Morales, Miguel A.; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M.
2012-12-01
With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high-accuracy calculations of the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores per SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem sizes it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze their performance on current HPC platforms characterized by various memory and communication hierarchies.
Speeding up the Hybrid-Monte-Carlo algorithm for dynamical fermions
M. Hasenbusch; K. Jansen
2001-10-22
We propose a modification of the hybrid Monte Carlo algorithm that allows for a larger step size of the integration scheme at constant acceptance rate. The key ingredient is the splitting of the pseudo-fermion action into two parts. We test our proposal on the two-dimensional lattice Schwinger model and on four-dimensional lattice QCD with two degenerate flavours of Wilson fermions.
Monte Carlo Reliability Model for Microwave Monolithic Integrated Circuits
Rubloff, Gary W.
Aris Christou. A Monte Carlo simulation is reported for analog integrated circuits and is based on the modification behavior of MMICs (Monolithic Microwave Integrated Circuits) from individual FET (Field Effect Transistor) …
A Primer in Monte Carlo Integration Using Mathcad
ERIC Educational Resources Information Center
Hoyer, Chad E.; Kegerreis, Jeb S.
2013-01-01
The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
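The essence of such a primer, estimating ∫_a^b f(x) dx by averaging f at uniformly random points and multiplying by (b − a), fits in a few lines; Python stands in here for the Mathcad document described in the abstract.

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b]: average f at n uniform random
    points in the interval and scale by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)  # exact value is 1/3
```

The statistical error of this estimator shrinks as 1/√n regardless of dimension, which is the pedagogical point usually made when contrasting Monte Carlo with quadrature rules.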
PATH INTEGRAL MONTE CARLO SIMULATIONS OF HOT DENSE HYDROGEN
Militzer, Burkhard
Path Integral Monte Carlo Simulations of Hot Dense Hydrogen. Burkhard Militzer, Ph.D. dissertation, Department of Physics, University of Illinois at Urbana-Champaign, 2000. (Diplom, Humboldt-Universität.) Copyright by Burkhard Militzer, 2000.
Quantum photonics hybrid integration platform
NASA Astrophysics Data System (ADS)
Murray, E.; Ellis, D. J. P.; Meany, T.; Floether, F. F.; Lee, J. P.; Griffiths, J. P.; Jones, G. A. C.; Farrer, I.; Ritchie, D. A.; Bennett, A. J.; Shields, A. J.
2015-10-01
Fundamental to integrated photonic quantum computing is an on-chip method for routing and modulating quantum light emission. We demonstrate a hybrid integration platform consisting of arbitrarily designed waveguide circuits and single-photon sources. InAs quantum dots (QD) embedded in GaAs are bonded to a SiON waveguide chip such that the QD emission is coupled to the waveguide mode. The waveguides are SiON core embedded in a SiO2 cladding. A tuneable Mach Zehnder interferometer (MZI) modulates the emission between two output ports and can act as a path-encoded qubit preparation device. The single-photon nature of the emission was verified using the on-chip MZI as a beamsplitter in a Hanbury Brown and Twiss measurement.
ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods
Ibrahim, A.; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Sawan, M.; Wilson, P.; Wagner, John C; Heltemes, Thad
2011-01-01
The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.
Decorrelation of the topological charge in tempered Hybrid Monte Carlo simulations of QCD
E. -M. Ilgenfritz; Werner Kerler; H. Stüben
1999-08-18
We study the improvement of simulations of QCD with dynamical Wilson fermions by combining the Hybrid Monte Carlo algorithm with parallel tempering. As an indicator for decorrelation we use the topological charge.
An application of the UV-filtering preconditioner to the Polynomial Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Ishikawa, Ken-Ichi
2006-12-01
We apply the UV-filtering preconditioner, previously used to improve the Multi-Boson algorithm, to the Polynomial Hybrid Monte Carlo (UV-PHMC) algorithm. The performance test for the algorithm is given for the plaquette gauge action and the O(a)-improved Wilson action at β = 5.2, c_sw = 2.02, M_PS/M_V ≈ 0.8 and 0.7 on a 16^3 × 48 lattice. We find that the UV filtering reduces the magnitude of the molecular dynamics force from the pseudo-fermion by a factor of 3 by tuning the UV-filter parameter. Combining this with the multi-time-scale molecular dynamics integrator, we achieve a factor of 2 improvement.
Hybrid S_N/Monte Carlo research and results
Baker, R.S.
1993-05-01
The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S_N method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by themselves. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.
Thermodynamics of the fully frustrated quantum Josephson-junction array: A hybrid Monte Carlo study
Mikalopas, J.; Jarrell, M.; Pinski, F.J.; Chung, W.; Novotny, M.A.
1994-07-01
We use a hybrid Monte Carlo algorithm to simulate the thermodynamic properties of a two-dimensional periodic array of fully frustrated quantum Josephson junctions. We find a variety of metastable configurations that correspond to spatial domain boundaries that are also present in classical arrays. In previous work, due to long temporal correlations between configurations in diffusive Monte Carlo algorithms, such metastable states were interpreted as evidence for a quantum-induced low-temperature first-order transition.
A Concise Force Calculation for Hybrid Monte Carlo with Improved Actions
Nikhil Karthik
2014-01-06
We present a concise way to calculate force for Hybrid Monte Carlo with improved actions using the fact that changes in thin and smeared link matrices lie in their respective tangent vector spaces. Since hypercubic smearing schemes are very memory intensive, we also present a memory optimized implementation of them.
Reich, Sebastian
Application of the Generalized Shadow Hybrid Monte Carlo method to a peptide toxin/bilayer system (Chze Ling Wee, Department of Biochemistry). The system is a coarse-grained (CG) representation of a small peptide toxin interacting with a phospholipid bilayer. Specifically, it is shown that GSHMC allows for a quicker localization of the toxin to the headgroup region.
Hybrid manufacturing: integrating direct write and stereolithography.
Davis, Donald W.; Inamdar, Asim; Lopes, Amit; Chavez, Bart D.; Gallegos, Phillip L.; Palmer, Jeremy Andrew; Wicker, Ryan B.; Medina, Francisco; Hennessey, Robert E.
2005-07-01
A commercial stereolithography (SL) machine was modified to integrate fluid dispensing or direct-write (DW) technology with SL in an integrated manufacturing environment for automated and efficient hybrid manufacturing of complex electrical devices, combining three-dimensional (3D) electrical circuitry with SL-manufactured parts. The modified SL system operates similarly to a commercially available machine, although build interrupts were used to stop and start the SL build while depositing fluid using the DW system. An additional linear encoder was attached to the SL platform z-stage and used to maintain accurate part registration during the SL and DW build processes. Individual STL files were required as part of the manufacturing process plan. The DW system employed a three-axis translation mechanism that was integrated with the commercial SL machine. Registration between the SL part, SL laser and the DW nozzle was maintained through the use of 0.025-inch diameter cylindrical reference holes manufactured in the part during SL. After depositing conductive ink using DW, the SL laser was commanded to trace the profile until the ink was cured. The current system allows for easy exchange between SL and DW in order to manufacture fully functional 3D electrical circuits and structures in a semi-automated environment. To demonstrate the manufacturing capabilities, the hybrid SL/DW setup was used to make a simple multi-layer SL part with embedded circuitry. This hybrid system is not intended to function as a commercial system; it is intended for experimental demonstration only. This hybrid SL/DW system has the potential for manufacturing fully functional electromechanical devices that are more compact, less expensive, and more reliable than their conventional predecessors, and work is ongoing in order to fully automate the current system.
Monte Carlo Integration Using Spatial Structure of Markov Random Field
NASA Astrophysics Data System (ADS)
Yasuda, Muneki
2015-03-01
Monte Carlo integration (MCI) techniques are important in various fields. In this study, a new MCI technique for Markov random fields (MRFs) is proposed. MCI consists of two successive parts: the first involves sampling using a technique such as the Markov chain Monte Carlo method, and the second involves an averaging operation using the obtained sample points. In the averaging operation, a simple sample-averaging technique is often employed. The method proposed in this paper improves the averaging operation by exploiting the spatial structure of the MRF and is mathematically guaranteed to statistically outperform standard MCI based on simple sample averaging. Moreover, the proposed method can be improved in a systematic manner, and it is verified by numerical simulations using planar Ising models. In the latter part of this paper, the proposed method is applied to the inverse Ising problem, and we observe that it outperforms maximum pseudo-likelihood estimation.
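A minimal sketch of this idea for the Ising case: after each Gibbs sweep, instead of averaging the raw ±1 spins, average the exact conditional expectation of each spin given its neighbours, tanh(β · neighbour sum + h). This is an illustrative reconstruction rather than the paper's exact estimator; the lattice size, β, and h below are assumptions.

```python
import numpy as np

def gibbs_ising(L=4, beta=0.3, h=0.2, sweeps=3000, rng=None):
    """Gibbs sampling of an L x L periodic Ising model. Per sweep, record
    (i) the raw spin average and (ii) the average of the exact conditional
    means tanh(beta * neighbour_sum + h) -- an improved averaging operation
    that uses the spatial structure of the field."""
    rng = rng or np.random.default_rng(0)
    s = rng.choice(np.array([-1, 1]), size=(L, L))
    raw, improved = [], []
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                nb = (s[(i - 1) % L, j] + s[(i + 1) % L, j]
                      + s[i, (j - 1) % L] + s[i, (j + 1) % L])
                p_up = 1.0 / (1.0 + np.exp(-2.0 * (beta * nb + h)))
                s[i, j] = 1 if rng.random() < p_up else -1
        nb_sum = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                  + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        raw.append(s.mean())
        improved.append(np.tanh(beta * nb_sum + h).mean())
    return np.array(raw), np.array(improved)

raw, improved = gibbs_ising()
```

In equilibrium both sequences estimate the same magnetization, but the conditional-mean sequence fluctuates less because the ±1 sampling noise of each spin has been integrated out analytically.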
Path integral Monte Carlo calculation of the deuterium Hugoniot
Militzer; Ceperley
2000-08-28
Restricted path integral Monte Carlo simulations have been used to calculate the equilibrium properties of deuterium at two densities, 0.674 and 0.838 g cm^-3 (r_s = 2.00 and 1.86), in the temperature range of 10(5)integral. Further, we compare the results obtained with a free-particle nodal restriction with those from a self-consistent variational principle, which includes interactions and bound states. Using the calculated internal energies and pressures, we determine the shock Hugoniot and compare with recent laser shock wave experiments as well as other theories. PMID:10970640
Yang, Yang; Longini, Ira M; Halloran, M Elizabeth; Obenchain, Valerie
2012-12-01
In epidemics of infectious diseases such as influenza, an individual may have one of four possible final states: prior immune, escaped from infection, infected with symptoms, and infected asymptomatically. The exact state is often not observed. In addition, the unobserved transmission times of asymptomatic infections further complicate analysis. Under the assumption of missing at random, data-augmentation techniques can be used to integrate out such uncertainties. We adapt an importance-sampling-based Monte Carlo Expectation-Maximization (MCEM) algorithm to the setting of an infectious disease transmitted in close contact groups. Assuming the independence between close contact groups, we propose a hybrid EM-MCEM algorithm that applies the MCEM or the traditional EM algorithms to each close contact group depending on the dimension of missing data in that group, and discuss the variance estimation for this practice. In addition, we propose a bootstrap approach to assess the total Monte Carlo error and factor that error into the variance estimation. The proposed methods are evaluated using simulation studies. We use the hybrid EM-MCEM algorithm to analyze two influenza epidemics in the late 1970s to assess the effects of age and preseason antibody levels on the transmissibility and pathogenicity of the viruses. PMID:22506893
Monte Carlo Simulations of Background Spectra in Integral Imager Detectors
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.
1998-01-01
Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PICsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged-particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
NASA Astrophysics Data System (ADS)
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method, coupled with simulation of multi-physical phenomena, has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron/photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
Hybrid parallel sequential Monte Carlo algorithm combining MCMC and auxiliary variable
NASA Astrophysics Data System (ADS)
Wang, Danling; Morris, John; Zhang, Qin; Gu, Quanfeng
2010-02-01
Sequential Monte Carlo (SMC) simulations are widely used to solve problems involving complex probability distributions. Intensive computation is their main drawback, which restricts their use in real-time applications, and thus efficient parallelism in a high-performance computing environment is crucial to effective implementations, especially for intelligent computer vision systems. The combination of auxiliary-variable importance sampling with Markov chain Monte Carlo (MCMC) resampling for pipelining data is proposed in this paper so as to minimize execution time while improving estimation accuracy. Experimental results on a network of workstations composed of simple off-the-shelf hardware components show that the hybrid parallel scheme removes the bottleneck and reduces execution time as the number of particles increases, compared to the conventional SMC- and MCMC-based parallel schemes.
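For reference, the resampling step at the heart of such SMC schemes can be implemented with low variance by systematic resampling; this is a generic sketch of that standard building block, not the authors' auxiliary-variable pipeline.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform offset generates n evenly spaced
    positions that are mapped to particle indices through the cumulative
    weight function -- a low-variance resampler for particle filters."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

weights = np.array([0.1, 0.2, 0.3, 0.4])          # normalized particle weights
idx = systematic_resample(weights, np.random.default_rng(0))
counts = np.bincount(idx, minlength=4)            # copies of each particle
```

Because the positions are evenly spaced, each particle is copied either floor(n·w) or ceil(n·w) times, which keeps the resampling noise far below that of multinomial resampling while using a single random number.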
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
Bal, Guillaume; Davis, Anthony B.; Langmore, Ian
2011-08-20
Highlights: → We introduce a variance reduction scheme for Monte Carlo (MC) transport. → The primary application is atmospheric remote sensing. → The technique first solves the adjoint problem using a deterministic solver. → Next, the adjoint solution is used as an importance function for the MC solver. → The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
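The underlying variance-reduction principle, sampling from a proposal informed by an (approximate) importance function and reweighting back to the nominal distribution, can be shown on a toy rare-event problem; the Gaussian tail probability here is an illustrative assumption, not the paper's atmospheric setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
exact = 0.0013499  # P(X > 3) for X ~ N(0, 1), to five significant digits

# Quantity of interest: a rare-event indicator, awkward for plain MC.
f = lambda x: (x > 3.0).astype(float)

# Plain Monte Carlo: very few samples ever land in the tail.
plain = np.mean(f(rng.standard_normal(n)))

# Importance sampling: draw from the shifted proposal N(3, 1), then
# reweight each sample by the density ratio phi(x) / phi(x - 3).
x = 3.0 + rng.standard_normal(n)
weights = np.exp(0.5 * ((x - 3.0) ** 2 - x ** 2))
is_est = np.mean(f(x) * weights)
```

Roughly half of the proposal samples now contribute to the estimate instead of about one in a thousand, which is the same mechanism by which the paper's adjoint-derived importance function concentrates MC photons on paths that reach the detector.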
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1%/1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1%/1 mm criteria, 99.8% for 2%/2 mm, a RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit were generated and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display radiotherapy treatment plans created by the MC method and by various treatment planning systems, in formats such as RTOG and DICOM-RT. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Cleveland, Mathew A.; Gentile, Nick
2015-06-01
This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W; Wagner, John C
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
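The weight-window maps that CADIS/FW-CADIS produce are applied to each Monte Carlo particle by splitting above the window and Russian-rouletting below it, which preserves the expected weight while controlling the particle population. A minimal sketch of that mechanic (illustrative function names, not SCALE/ADVANTG code):

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng):
    """Split a particle whose weight is above the window; Russian-
    roulette one below it.  Returns the list of surviving weights.
    Expected total weight is preserved in both branches."""
    if weight > w_high:
        n = int(np.ceil(weight / w_high))     # split into n copies
        return [weight / n] * n
    if weight < w_low:
        w_surv = 0.5 * (w_low + w_high)       # survival weight
        if rng.random() < weight / w_surv:    # survive with this probability
            return [w_surv]
        return []                             # rouletted (killed)
    return [weight]                           # inside the window: unchanged

rng = np.random.default_rng(2)
# Splitting conserves weight exactly; rouletting conserves it in expectation.
split = apply_weight_window(5.0, 0.5, 2.0, rng)
trials = [sum(apply_weight_window(0.1, 0.5, 2.0, rng)) for _ in range(200_000)]
mean_weight = np.mean(trials)
```

Averaged over many roulette trials, the surviving weight reproduces the input weight, which is what keeps the tally unbiased.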
Path Integral Monte Carlo Simulation of the Low-Density Hydrogen Plasma
Militzer, Burkhard
Plug-in integrated/hybrid circuit
NASA Technical Reports Server (NTRS)
Stringer, E. J.
1974-01-01
Hybrid circuitry can be installed into standard round bayonet connectors, to eliminate wiring from connector to circuit. Circuits can be connected directly into either section of connector pair, eliminating need for hard wiring to that section.
QYMSYM: A GPU-accelerated hybrid symplectic integrator
NASA Astrophysics Data System (ADS)
Moore, Alexander; Quillen, Alice C.
2012-10-01
QYMSYM is a GPU accelerated 2nd order hybrid symplectic integrator that identifies close approaches between particles and switches from symplectic to Hermite algorithms for particles that require higher resolution integrations. This is a parallel code running with CUDA on a video card that puts the many processors on board to work while taking advantage of fast shared memory.
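The kick-drift-kick structure with a switch near close approaches can be sketched in a few lines; here small sub-steps stand in for the Hermite integrator that QYMSYM actually switches to, and units with GM = 1 are assumed for illustration.

```python
import numpy as np

def accel(r):
    """Point-mass gravitational acceleration with GM = 1 (toy units)."""
    return -r / np.linalg.norm(r) ** 3

def hybrid_step(r, v, dt, r_crit=0.5, n_sub=16):
    """Second-order kick-drift-kick leapfrog step.  Inside r_crit the
    step is subdivided, standing in for the switch to a high-resolution
    (Hermite) integrator at close approaches."""
    n = n_sub if np.linalg.norm(r) < r_crit else 1
    h = dt / n
    for _ in range(n):
        v = v + 0.5 * h * accel(r)   # kick
        r = r + h * v                # drift
        v = v + 0.5 * h * accel(r)   # kick
    return r, v

# Circular orbit: the energy E = v^2/2 - 1/|r| should stay near -0.5,
# since the symplectic integrator bounds the energy error.
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(2000):
    r, v = hybrid_step(r, v, 0.05)
energy = 0.5 * v @ v - 1.0 / np.linalg.norm(r)
```

The bounded long-term energy error is the property that makes symplectic schemes attractive for planetary dynamics; the subdivision branch only fires when a close approach would otherwise degrade accuracy.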
Path integral Monte Carlo simulations of hot dense hydrogen
NASA Astrophysics Data System (ADS)
Militzer, Burkhard
Path integral Monte Carlo (PIMC) simulations are a powerful computational method to study interacting quantum systems at finite temperature. In this work, PIMC has been applied to study the equilibrium properties of hot, dense hydrogen in the temperature and density range 5000 K ≤ T ≤ 10^6 K and 10^-3 g cm^-3 ≤ ρ ≤ 2.7 g cm^-3. We determine the equation of state (EOS) and the high-temperature phase diagram. Under these conditions, hydrogen is a dense fluid that exhibits a molecular, an atomic, and a plasma regime at low density. At high density, it is predicted to go into a metallic state. The determination of these properties has direct application to the understanding of brown dwarfs and Jovian planets. The restricted PIMC method relies on a nodal surface, taken from a trial density matrix, in order to deal with Fermi statistics. This method has been applied extensively using free-particle nodes. We develop a variational technique that allows us to obtain a variational many-body density matrix (VDM). In a first application, we derive a VDM that describes the principal physical effects in high-temperature hydrogen, such as ionization and dissociation. In the PIMC simulation, we employ the VDM in order to replace the free-particle nodes and study the effect on the derived thermodynamic properties. The modifications are particularly significant at low temperature and high density, where PIMC using free-particle nodes has suggested a first-order plasma phase transition. We critically review these findings and show improved results from simulations with VDM nodes. Recent laser shock wave experiments are of particular relevance to this research because they represent the first direct EOS measurements in the megabar regime. We estimate the shock Hugoniot from the calculated EOS and compare with the experimental findings. We study finite size effects and the dependence on the time step and on the type of nodes.
Furthermore, we extend the restricted PIMC method to open paths in order to determine off-diagonal density matrix elements and apply this method to the momentum distribution of the electron gas and to the natural orbitals in hydrogen.
Dynamical overlap fermion simulations with a preconditioned Hybrid Monte Carlo force
Jan Volkholz; Wolfgang Bietenholz; Stanislav Shcheredin
2006-09-29
We present simulation results for the 2-flavour Schwinger model with dynamical Ginsparg-Wilson fermions. Our Dirac operator is constructed by inserting an approximately chiral hypercube operator into the overlap formula, which yields the overlap hypercube operator. Due to the similarity with the hypercubic kernel, a low polynomial of this kernel can be used as a numerically cheap way to evaluate the fermionic part of the Hybrid Monte Carlo force. We verify algorithmic requirements like area conservation and reversibility, and we discuss the viability of this approach in view of the acceptance rate. Next we confirm a high level of locality for this formulation. Finally we evaluate the chiral condensate at light fermion masses, based on the density of low lying Dirac eigenvalues in different topological sectors. The results represent one of the first measurements with dynamical overlap fermions, and they agree very well with analytic predictions at weak coupling.
The Acceptance Probability of the Hybrid Monte Carlo Method in High-Dimensional Problems
NASA Astrophysics Data System (ADS)
Beskos, A.; Pillai, N. S.; Roberts, G. O.; Sanz-Serna, J. M.; Stuart, A. M.
2010-09-01
We investigate the properties of the Hybrid Monte Carlo algorithm in high dimensions. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the Verlet/leapfrog step-size h should be scaled as h = ℓ × d^(-1/4). We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places); this is the choice that optimally balances the cost of generating a proposal, which decreases as ℓ increases, against the cost related to the average number of proposals required to obtain acceptance, which increases as ℓ increases.
Hybrid Monte Carlo approach to the entanglement entropy of interacting fermions
NASA Astrophysics Data System (ADS)
Drut, Joaquín E.; Porter, William J.
2015-09-01
The Monte Carlo calculation of Rényi entanglement entropies Sn of interacting fermions suffers from a well-known signal-to-noise problem, even for a large number of situations in which the infamous sign problem is absent. A few methods have been proposed to overcome this issue, such as ensemble switching and the use of auxiliary partition-function ratios. Here, we present an approach that builds on the recently proposed free-fermion decomposition method; it incorporates entanglement in the probability measure in a natural way; it takes advantage of the hybrid Monte Carlo algorithm (an essential tool in lattice quantum chromodynamics and other gauge theories with dynamical fermions); and it does not suffer from noise problems. This method displays no sign problem for the same cases as other approaches and is therefore useful for a wide variety of systems. As a proof of principle, we calculate S2 for the one-dimensional, half-filled Hubbard model and compare with results from exact diagonalization and the free-fermion decomposition method.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross-section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross-section data generation module was added to the MCNP5 code. Cross-section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross-section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross-section data were generated using MCNP5. The k_eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k_eff, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind
NASA Astrophysics Data System (ADS)
Barakat, A. R.; Ji, J.; Schunk, R. W.
2013-12-01
In many space plasma problems (e.g. terrestrial polar wind, solar wind, etc.), the plasma gradually evolves from dense collision-dominated into rarefied collisionless conditions. For decades, numerous attempts were made in order to address this type of problem using simulations based on one of two approaches. These approaches are: (1) the (fluid-like) Generalized Transport Equations, GTE, and (2) the particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC, the GTE approach can be considerably more efficient but its validity is questionable outside the collision-dominated region depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to the 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.
Optimization of the hybrid silicon photonic integrated circuit platform
NASA Astrophysics Data System (ADS)
Heck, Martijn J. R.; Davenport, Michael L.; Srinivasan, Sudharsanan; Hulme, Jared; Bowers, John E.
2013-03-01
In the hybrid silicon platform, active III/V based components are integrated on a silicon-on-insulator photonic integrated circuit by means of wafer bonding. This is done in a self-aligned back-end process at low temperatures, making it compatible with CMOS-based silicon processing. This approach allows for low cost, high volume, high quality and reproducible chip fabrication. Such features make the hybrid silicon platform an attractive technology for applications like optical interconnects, microwave photonics and sensors operating at wavelengths around 1.3 μm and 1.55 μm. For these applications energy efficient operation is a key parameter. In this paper we present our efforts to bring the III/V components in the hybrid silicon platform, such as lasers and optical amplifiers, on par with the far more mature monolithic InP-based integration technology. We present our development work to increase hybrid silicon laser and amplifier wall-plug efficiency. This is done by careful optimization of III/V mesa geometry and guiding silicon waveguide width. We also discuss current injection efficiency and thermal performance. Furthermore we show the characterization of the low-loss and low-reflection mode converters that couple the hybrid III/V components to silicon waveguides. Reflections below -41 dB and passive loss of 0.3 dB per converter were obtained.
A Hybrid Mobile Robot Architecture with Integrated Planning and Control
Stephan, Frank
Kian Hsiang Low. Research in the planning and control of mobile robots has received much attention in the past two decades. … a network is trained to perform fine, smooth motor control that moves the robot through the checkpoints.
High-order path-integral Monte Carlo methods for solving quantum dot problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
The conventional second-order path-integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work we show that optimized fourth-order path-integral Monte Carlo methods, which use no more than five free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.
Constant-pH Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulation Method.
Chen, Yunjie; Roux, Benoît
2015-08-11
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
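Step one of the two-step scheme, the inherent-pKa Metropolis test, can be sketched as follows; only when this cheap test accepts would the costly neMD switching trajectory (step two) be attempted. For a single site sampled in isolation, the protonated fraction then follows the Henderson-Hasselbalch relation. The function name and single-site setting are illustrative, not the paper's code.

```python
import math, random

def inherent_pka_flip(protonated, pKa, pH, rng):
    """Metropolis test on the intrinsic pKa alone (step 1 of the
    two-step scheme).  The free-energy change is ln(10)*(pH - pKa)
    in kT for protonation, and the negative of that for deprotonation."""
    dG = math.log(10.0) * ((pKa - pH) if protonated else (pH - pKa))
    if rng.random() < math.exp(-max(dG, 0.0)):   # min(1, exp(-dG))
        return not protonated, True              # flip accepted
    return protonated, False

rng = random.Random(4)
pKa, pH = 4.0, 3.0                  # pH one unit below the pKa
state, n_prot, n_steps = True, 0, 200_000
for _ in range(n_steps):
    state, _ = inherent_pka_flip(state, pKa, pH, rng)
    n_prot += state
frac = n_prot / n_steps             # Henderson-Hasselbalch predicts 10/11
```

The long-run protonated fraction converges to 1/(1 + 10^(pH - pKa)), confirming that the cheap test alone samples the correct single-site equilibrium; in the full method, the neMD switch then accounts for the environment.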
Quirk, Thomas, J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
NASA Astrophysics Data System (ADS)
Bousige, Colin; Boțan, Alexandru; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoît
2015-03-01
We report an efficient atom-scale reconstruction method that consists of combining the Hybrid Reverse Monte Carlo algorithm (HRMC) with Molecular Dynamics (MD) in the framework of a simulated annealing technique. In the spirit of the experimentally constrained molecular relaxation technique [Biswas et al., Phys. Rev. B 69, 195207 (2004)], this modified procedure offers a refined strategy in the field of reconstruction techniques, with special interest for heterogeneous and disordered solids such as amorphous porous materials. While the HRMC method generates physical structures, thanks to the use of energy penalties, the combination with MD makes the method at least one order of magnitude faster than HRMC simulations to obtain structures of similar quality. Furthermore, in order to ensure the transferability of this technique, we provide rational arguments to select the various input parameters such as the relative weight of the energy penalty with respect to the structure optimization. By applying the method to disordered porous carbons, we show that adsorption properties provide data to test the global texture of the reconstructed sample but are only weakly sensitive to the presence of defects. In contrast, the vibrational properties such as the phonon density of states are found to be very sensitive to the local structure of the sample.
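The HRMC acceptance rule the method builds on combines a structural chi-squared with an energy penalty of tunable relative weight, inside a simulated-annealing Metropolis test. A schematic sketch, with illustrative names and units:

```python
import numpy as np

def hrmc_cost(g_model, g_target, energy, weight):
    """HRMC objective: structural mismatch (chi-squared against, e.g.,
    a measured pair-correlation function) plus a weighted energy
    penalty that keeps the reconstructed structure physical."""
    chi2 = np.sum((np.asarray(g_model) - np.asarray(g_target)) ** 2)
    return chi2 + weight * energy

def accept_move(cost_old, cost_new, temperature, rng):
    """Simulated-annealing Metropolis test on the combined cost:
    downhill moves always accepted, uphill moves with Boltzmann
    probability that shrinks as the temperature is annealed down."""
    if cost_new <= cost_old:
        return True
    return rng.random() < np.exp(-(cost_new - cost_old) / temperature)

rng = np.random.default_rng(5)
# chi2 = (2 - 1)^2 = 1, plus 0.5 * 2 = 1 from the energy penalty.
cost = hrmc_cost([1.0, 2.0], [1.0, 1.0], energy=2.0, weight=0.5)
```

In the combined scheme, trial configurations come from short MD runs rather than single-atom MC displacements, which is what accelerates convergence; the acceptance rule above is unchanged.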
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries
Guillaume Bal; Ian Langmore
2011-04-13
Adjoint methods form a class of importance sampling methods that are used to accelerate Monte Carlo (MC) simulations of transport equations. Ideally, adjoint methods allow for zero-variance MC estimators provided that the solution to an adjoint transport equation is known. Hybrid methods aim at (i) approximately solving the adjoint transport equation with a deterministic method; and (ii) using the solution to construct an unbiased MC sampling algorithm with low variance. The problem with this approach is that both steps can be prohibitively expensive. In this paper, we simplify steps (i) and (ii) by calculating only parts of the adjoint solution. More specifically, in a geometry with limited volume scattering and complicated reflection at the boundary, we consider the situation where the adjoint solution "neglects" volume scattering, thereby significantly reducing the degrees of freedom in steps (i) and (ii). A main application for such a geometry is in remote sensing of the environment using physics-based signal models. Volume scattering is then incorporated using an analog sampling algorithm (or more precisely a simple modification of analog sampling called a heuristic sampling algorithm) in order to obtain unbiased estimators. In geometries with weak volume scattering (with a domain of interest of size comparable to the transport mean free path), we demonstrate numerically significant variance reductions and speed-ups (figures of merit).
Submonolayer molecular hydrogen on graphite: A path-integral Monte Carlo study
Nho, Kwangsik; Manousakis, Efstratios
We use path-integral Monte Carlo (PIMC) to simulate molecular hydrogen on graphite at submonolayer coverage, including the effects of substrate corrugations in the hydrogen-graphite interaction.
Assignment 1: 3-D Monte Carlo Integration Due: Monday 1/28/11 (before class); Mult Fac = 1.0
Whaley, R. Clint
In this assignment, you will use OpenMP to parallelize 3-D Monte Carlo integration. Your program should take the number of threads as a runtime argument; any other runtime flags will cause a usage message to be printed, and the program will exit.
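The assignment itself calls for C with OpenMP; as a language-agnostic sketch of the estimator being parallelized (the integrand, bounds, and sample count below are illustrative), plain 3-D Monte Carlo integration is the box volume times the mean of the integrand at uniform random points:

```python
import random

def mc_integrate_3d(f, bounds, n, seed=0):
    """Estimate the integral of f over a 3-D box as
    (box volume) * (mean of f at n uniformly random points).
    This is the serial core; an OpenMP version splits the sample
    loop across threads and reduces the per-thread partial sums."""
    rng = random.Random(seed)
    vol = 1.0
    for lo, hi in bounds:
        vol *= hi - lo
    total = 0.0
    for _ in range(n):
        x, y, z = (lo + (hi - lo) * rng.random() for lo, hi in bounds)
        total += f(x, y, z)
    return vol * total / n

# Example: integral of x + y + z over the unit cube; exact value is 1.5.
estimate = mc_integrate_3d(lambda x, y, z: x + y + z, [(0.0, 1.0)] * 3, 20000)
```

The statistical error shrinks as 1/sqrt(n) regardless of dimension, which is why the thread count mainly buys a larger n per wall-clock second.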
Polymer waveguide based hybrid opto-electric integration technology
NASA Astrophysics Data System (ADS)
Mao, Jinbin; Deng, Lingling; Jiang, Xiyan; Ren, Rong; Zhai, Yumeng; Wang, Jin
2014-10-01
While monolithic integration, especially based on InP, appears to be quite an expensive solution for optical devices, hybrid integration solutions using cheaper material platforms are considered powerful competitors because of the high freedom of design, yield optimization and relative cost-efficiency. Among them, the polymer planar-lightwave circuit (PLC) technology is regarded as attractive, as polymer offers the potential of fairly simple, low-cost fabrication and of low-cost packaging. In our work, polymer PLCs were fabricated using the standard reactive ion etching (RIE) technique, while other active and passive devices can be integrated on the polymer PLC platform. An exemplary polymer waveguide device was a 13-channel arrayed waveguide grating (AWG) chip, where the central channel cross-talk was below -30 dB and the polarization dependent frequency shift was mitigated by inserting a half wave plate. An optical 90° hybrid was also realized with one 2×4 multi-mode interferometer (MMI). The excess insertion losses are below 4 dB for the C-band, while the transmission imbalance is below 1.2 dB. When such an optical hybrid was integrated vertically with mesa-type photodiodes, the responsivity of the individual PD was around 0.06 A/W, while the 3 dB bandwidth reaches 24-27 GHz, which is sufficient for 100 Gbit/s receivers. Another example of the hybrid integration was to couple the polymer waveguides to fiber by applying fiber grooves, whose typical loss value was 0.2 dB per facet over a broad spectral range from 1200 to 1600 nm.
Integrated Hybrid System Architecture for Risk Analysis
NASA Technical Reports Server (NTRS)
Moynihan, Gary P.; Fonseca, Daniel J.; Ray, Paul S.
2010-01-01
A conceptual design of an expert-system computer program, and the development of a prototype of the program, intended for use as a project-management tool, have been announced. The program integrates schedule and risk data for the purpose of determining the schedule implications of safety risks and, conversely, the effects of schedule changes on safety. It is noted that the design has been delivered to a NASA client and that it is planned to disclose the design in a conference presentation.
Path Integral Monte Carlo Calculation of the Deuterium Hugoniot B. Militzer and D. M. Ceperley
Militzer, Burkhard
We estimate the shock Hugoniot and compare with recent laser shock wave experiments on pre-compressed liquid deuterium [1,2], and study finite-size effects.
FPGA-accelerated Monte-Carlo integration using stratified sampling and Brownian bridges
Bertels, Koen
The design produces one entire Brownian bridge per clock cycle; the FPGA-accelerated design is up to 880 times faster than a comparable software implementation. We adapt MISER to the requirements of a parallel design.
HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials
NASA Astrophysics Data System (ADS)
Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.
2011-02-01
The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via a combination of constraints including experimental diffraction data and an empirical energy potential. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP), for carbon and silicon, or the original and modified Stillinger-Weber potentials, applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine.
New version program summary. Program title: HRMC version 1.1. Catalogue identifier: AEAO_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 36 991. No. of bytes in distributed program, including test data, etc.: 907 800. Distribution format: tar.gz. Programming language: FORTRAN 77. Computer: Any computer capable of running executables produced by the g77 Fortran compiler. Operating system: Unix, Windows. RAM: Depends on the type of empirical potential used, number of atoms and which constraints are employed. Classification: 7.7. Catalogue identifier of previous version: AEAO_v1_0. Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777. Does the new version supersede the previous version?: Yes. Nature of problem: Atomic modelling using empirical potentials and experimental data. Solution method: Monte Carlo. Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies. Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html). Running time: Depends on the type of empirical potential used, number of atoms and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.
Graphene/Si CMOS Hybrid Hall Integrated Circuits
Huang, Le; Xu, Huilong; Zhang, Zhiyong; Chen, Chengying; Jiang, Jianhua; Ma, Xiaomeng; Chen, Bingyan; Li, Zishen; Zhong, Hua; Peng, Lian-Mao
2014-01-01
Graphene/silicon CMOS hybrid integrated circuits (ICs) should provide powerful functions, combining the ultra-high carrier mobility of graphene with the sophisticated functions of silicon CMOS ICs. But it is difficult to integrate these two kinds of heterogeneous devices on a single chip. In this work a low temperature process is developed for integrating graphene devices onto silicon CMOS ICs for the first time, and a high performance graphene/CMOS hybrid Hall IC is demonstrated. Signal amplifying/processing ICs are manufactured via commercial 0.18 μm silicon CMOS technology, and graphene Hall elements (GHEs) are fabricated on top of the passivation layer of the CMOS chip via a low-temperature micro-fabrication process. The sensitivity of the GHE on the CMOS chip is further improved by integrating the GHE with the CMOS amplifier on the Si chip. This work not only paves the way to fabricating graphene/Si CMOS Hall ICs with much higher performance than that of conventional Hall ICs, but also provides a general method for scalable integration of graphene devices with silicon CMOS ICs via a low-temperature process. PMID:24998222
Kaoru Aoki; Shigetaka Kuroda; Shigemasa Kajiwara; Hiromitsu Sato; Yoshio Yamamoto
2000-06-19
This paper presents the technical approach used to design and develop the powerplant for the Honda Insight, a new motor assist hybrid vehicle with an overall development objective of just half the fuel consumption of the current Civic over a wide range of driving conditions. Fuel consumption of 35km/L (Japanese 10-15 mode), and 3.4L/100km (98/69/EC) was realized. To achieve this, a new Integrated Motor Assist (IMA) hybrid power plant system was developed, incorporating many new technologies for packaging and integrating the motor assist system and for improving engine thermal efficiency. This was developed in combination with a new lightweight aluminum body with low aerodynamic resistance. Environmental performance goals also included the simultaneous achievement of low emissions (half the Japanese year 2000 standards, and half the EU2000 standards), high efficiency, and recyclability. Full consideration was also given to key consumer attributes, including crash safety performance, handling, and driving performance.
Copula Based Monte Carlo Integration in Financial Problems
Sancetta, Alessio
2006-03-14
Integration based on our proposed transformation can be further simplified if the quantile function of the random variables has a closed form. Here we provide a family of distributions that satisfies these requirements and is particularly well suited as a ... := [(∂²/∂λ²) ln E_P exp{λ Σ_{k=1}^{K} w_k X_k}]_{λ=λ̂}. (The formal validity of (10) requires a four-times differentiable cumulant generating function. In which case, if w_j → 0, ∀ j ∈ J, as card(J) → ∞ (i.e. as the number of assets in the portfolio tends...
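The closed-form-quantile idea the fragment refers to can be sketched as ordinary inverse-transform Monte Carlo (a generic illustration under an assumed univariate setting, not the paper's copula construction): uniform draws pushed through a quantile function sample the target distribution exactly.

```python
import math
import random

def quantile_mc(g, quantile, n, seed=0):
    """Estimate E[g(X)] by mapping uniform draws through a closed-form
    quantile function X = F^{-1}(U), i.e. inverse-transform sampling."""
    rng = random.Random(seed)
    return sum(g(quantile(rng.random())) for _ in range(n)) / n

# Exponential(1) has the closed-form quantile F^{-1}(u) = -ln(1 - u);
# its mean is 1 and its second moment is 2.
q_exp = lambda u: -math.log(1.0 - u)
mean_est = quantile_mc(lambda x: x, q_exp, 50000)
m2_est = quantile_mc(lambda x: x * x, q_exp, 50000, seed=1)
```

When the quantile function is cheap and closed-form, the whole integration reduces to averaging over uniforms, which is the simplification the abstract highlights.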
SU-E-T-117: Dose to Organs Outside of CT Scan Range- Monte Carlo and Hybrid Phantom Approach
Pelletier, C; Jung, J; Lee, C; Kim, J; Lee, C
2014-06-01
Purpose: Epidemiological studies of second cancer risk for cancer survivors often require the dose to normal tissues located outside the anatomy covered by radiological imaging, which is usually limited to the tumor and organs at risk. We have investigated the feasibility of using whole body computational human phantoms for estimating out-of-field organ doses for patients treated with Intensity Modulated Radiation Therapy (IMRT). Methods: Identical 7-field IMRT prostate plans were performed using X-ray Voxel Monte Carlo (XVMC), a radiotherapy-specific Monte Carlo transport code, on the computed tomography (CT) images of the torso of an adult male patient (175 cm height, 66 kg weight) and an adult male hybrid computational phantom with the equivalent body size. Doses to the liver, right lung, and left lung were calculated and compared. Results: Considerable differences are seen between the doses calculated by XVMC for the patient CT and the hybrid phantom. One major contributing factor is the treatment method, deep inspiration breath hold (DIBH), used for this patient. This leads to significant differences in the organ position relative to the treatment isocenter. The transverse distances from the treatment isocenter to the inferior border of the liver, left lung, and right lung are 19.5 cm, 29.5 cm, and 30.0 cm, respectively, for the patient CT, compared with 24.3 cm, 36.6 cm, and 39.1 cm, respectively, for the hybrid phantom. When corrected for the distance, the mean doses calculated using the hybrid phantom are within 28% of those calculated using the patient CT. Conclusion: This study showed that the mean dose to organs located in the missing CT coverage can be reconstructed by using whole body computational human phantoms within reasonable dosimetric uncertainty; however, appropriate corrections may be necessary if the patient is treated with a technique that significantly deforms the size or location of the organs relative to the hybrid phantom.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call a “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar results for the critical point region for both methods. The Gibbs ensemble technique lets us estimate the melting curve position and a triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^{-4}.
Scalable Software for Multivariate Integration on Hybrid Platforms
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kapenga, J.; Olagbemi, O.
2015-09-01
The paper describes the software infrastructure of the PARINT package for multivariate numerical integration, layered over a hybrid parallel environment with distributed memory computations (on MPI). The parallel problem distribution is typically performed at the region level in the adaptive partitioning procedure. Our objective has been to provide the end-user with state-of-the-art problem solving power packaged as portable software. We give test results of the multivariate ParInt engine, with significant speedups for a set of 3-loop Feynman integrals. An extrapolation with respect to the dimensional regularization parameter (ε) is applied to sequences of multivariate ParInt results Q(ε) to obtain the leading asymptotic expansion coefficients as ε → 0. This paper further introduces a novel method for a parallel computation of the Q(ε) sequence as the components of the integral of a vector function.
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free-fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
NASA Astrophysics Data System (ADS)
Dornheim, Tobias; Schoof, Tim; Groth, Simon; Filinov, Alexey; Bonitz, Michael
2015-11-01
The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principle results had been obtained for N = 33 electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, r_s = r̄/a_B ≳ 1. These data have been complemented by configuration path integral Monte Carlo (CPIMC) simulations for r_s ≲ 1 that substantially deviate from RPIMC towards smaller r_s and low temperature. In this work, we present results from an independent third method, the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim et al., New J. Phys. 17, 073017 (2015)], which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to half the Fermi temperature (θ = k_B T/E_F = 0.5) and, therefore, to compare our results to both aforementioned methods. While we find excellent agreement with CPIMC, where results are available, we observe deviations from RPIMC that are beyond the statistical errors and increase with density.
Hybrid two-chain simulation and integral equation theory : application to polyethylene liquids.
Huimin Li, David T. Wu; Curro, John G.; McCoy, John Dwane
2006-02-01
We present results from a hybrid simulation and integral equation approach to the calculation of polymer melt properties. The simulation consists of explicit Monte Carlo (MC) sampling of two polymer molecules, where the effect of the surrounding chains is accounted for by an HNC solvation potential. The solvation potential is determined from the Polymer Reference Interaction Site Model (PRISM) as a functional of the pair correlation function from simulation. This hybrid two-chain MC-PRISM approach was carried out on liquids of polyethylene chains of 24 and 66 CH2 units. The results are compared with MD simulation and self-consistent PRISM-PY theory under the same conditions, revealing that the two-chain calculation is close to MD, and able to overcome the defects of the PRISM-PY closure and predict more accurate structures of the liquid at both short and long range. The direct correlation function, for instance, has a tail at longer range which is consistent with MD simulation and avoids the short-range assumptions in PRISM-PY theory. As a result, the self-consistent two-chain MC-PRISM calculation predicts an isothermal compressibility closer to the MD results.
First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison
Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giordano, F.; Giglietto, N.; Loparco, F.; Marangelli, B.; Mazziotta, M.N.; Mirizzi, N.; Raino, S.; Spinelli, P.; /Bari U. /INFN, Bari
2007-02-15
The GLAST Large Area Telescope (LAT) is a gamma-ray telescope instrumented with silicon-strip detector planes and sheets of converter, followed by a calorimeter (CAL) and surrounded by an anticoincidence system (ACD). This instrument is sensitive to gamma rays in the energy range between 20 MeV and 300 GeV. At present, the first towers have been integrated and pre-launch data taking with cosmic ray muons is being performed. The results from the data analysis carried out during LAT integration will be discussed and a comparison with the predictions from the Monte Carlo simulation will be shown.
Runoff prediction using an integrated hybrid modelling scheme
NASA Astrophysics Data System (ADS)
Remesan, Renji; Shamim, Muhammad Ali; Han, Dawei; Mathew, Jimson
2009-06-01
Rainfall runoff is a very complicated process due to its nonlinear and multidimensional dynamics, and hence difficult to model. There are several options for a modeller to consider, for example: the type of input data to be used, the length of model calibration (training) data, and whether or not the input data should be treated as signals with different frequency bands so that they can be modelled separately. This paper describes a new hybrid modelling scheme to answer the above mentioned questions. The proposed methodology is based on a hybrid model integrating wavelet transformation, a modelling engine (Artificial Neural Network) and the Gamma Test. First, the Gamma Test is used to decide the required input data dimensions and length. Second, the wavelet transformation decomposes the input signals into different frequency bands. Finally, a modelling engine (ANN in this study) is used to model the decomposed signals separately. The proposed scheme was tested using the Brue catchment, Southwest England, as a case study and has produced very positive results. The hybrid model outperforms all other models tested. This study has a wider implication in the hydrological modelling field, since its general framework could be applied to other model combinations (e.g., the model engine could be Support Vector Machines, neuro-fuzzy systems, or even a conceptual model, and the signal decomposition could be carried out by Fourier transformation).
Eylenceo?lu, E.; Rafatov, I.; Kudryavtsev, A. A.
2015-01-15
A two-dimensional hybrid Monte Carlo-fluid numerical code is developed and applied to model the dc glow discharge. The model is based on the separation of electrons into two groups: the low energetic (slow) and high energetic (fast) electrons. Ions and slow electrons are described within the fluid model using the drift-diffusion approximation for particle fluxes. Fast electrons, represented by a suitable number of super-particles emitted from the cathode, are responsible for ionization processes in the discharge volume, which are simulated by the Monte Carlo collision method. The electrostatic field is obtained from the solution of the Poisson equation. The test calculations were carried out for an argon plasma. The main properties of the glow discharge are considered. Current-voltage curves, the electric field reversal phenomenon, and the vortex current formation are presented and discussed. The results are compared to those obtained from the simple and extended fluid models. Contrary to reports in the literature, the analysis does not reveal significant advantages of existing hybrid methods over the extended fluid model.
Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations
Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian
2006-10-01
An algorithm for calculating the partition function of a molecule with the path integral Monte Carlo method is presented. Staged thermodynamic perturbation with respect to a reference harmonic potential is utilized to evaluate the ratio of partition functions. Parallel tempering and a new Monte Carlo estimator for the ratio of partition functions are implemented here to achieve well converged simulations that give an accuracy of 0.04 kcal/mol in the reported free energies. The method is applied to various test systems, including a catalytic system composed of 18 atoms. Absolute free energies calculated by this method lead to corrections as large as 2.6 kcal/mol at 300 K for some of the examples presented.
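Schematically, the staged thermodynamic perturbation underlying the method evaluates the partition function ratio as a product of well-behaved ensemble averages (notation assumed here, with H_0 = H_ref the reference harmonic Hamiltonian and H_M = H the full Hamiltonian):

```latex
\frac{Z}{Z_{\mathrm{ref}}}
  = \Bigl\langle e^{-\beta\,(H - H_{\mathrm{ref}})} \Bigr\rangle_{\mathrm{ref}}
  = \prod_{i=0}^{M-1} \Bigl\langle e^{-\beta\,(H_{i+1} - H_{i})} \Bigr\rangle_{i},
\qquad
F = F_{\mathrm{ref}} - k_{B} T \,\ln \frac{Z}{Z_{\mathrm{ref}}},
```

where ⟨·⟩_i denotes an average in the ensemble of the intermediate Hamiltonian H_i and β = 1/k_B T. Staging through intermediates keeps the overlap between neighboring ensembles large, which is what makes each ratio converge well in a Monte Carlo estimate.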
Hybrid Clustering by Integrating Text and Citation based Graphs in Journal Database Analysis
We propose a hybrid clustering strategy by integrating heterogeneous information sources as graphs, applied to large-scale journal database analysis.
Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration
NASA Technical Reports Server (NTRS)
Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2007-01-01
We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
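A self-contained sketch of the consecutive-blocks idea (computing digits of pi via Machin's formula with the decimal module is my own choice here, and the block width and sample count are illustrative, not the paper's settings):

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """First n decimal digits of pi after the point, from Machin's formula
    pi = 16 arctan(1/5) - 4 arctan(1/239), summed with the decimal module."""
    getcontext().prec = n + 10
    eps = Decimal(10) ** -(n + 5)

    def atan_inv(x):  # arctan(1/x) via its alternating power series
        total, term, sign, k = Decimal(0), Decimal(1) / x, 1, 0
        x2 = x * x
        while term > eps:
            total += sign * term / (2 * k + 1)
            term /= x2
            sign, k = -sign, k + 1
        return total

    pi = 16 * atan_inv(Decimal(5)) - 4 * atan_inv(Decimal(239))
    return str(pi)[2:2 + n]  # drop the leading "3."

def digit_block_uniforms(count, width=4):
    """Consecutive width-digit blocks of pi mapped into [0, 1)."""
    d = pi_digits(count * width)
    return [int(d[i * width:(i + 1) * width]) / 10 ** width
            for i in range(count)]

# Use the blocks as a "random" sequence to estimate the integral of x^2
# over [0, 1], whose exact value is 1/3.
us = digit_block_uniforms(1000)
estimate = sum(u * u for u in us) / len(us)
```

The estimate lands near 1/3 with the usual 1/sqrt(n) Monte Carlo error, which is the sense in which the digit blocks behave as a usable random sequence source.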
NASA Astrophysics Data System (ADS)
Militzer, Burkhard; Driver, Kevin P.
2015-10-01
We extend the applicability range of fermionic path integral Monte Carlo simulations to heavier elements and lower temperatures by introducing various localized nodal surfaces. Hartree-Fock nodes yield the most accurate prediction for pressure and internal energy, which we combine with the results from density functional molecular dynamics simulations to obtain a consistent equation of state for hot, dense silicon under plasma conditions and in the regime of warm dense matter (2.3-18.6 g cm^{-3}, 5.0 × 10^{5} - 1.3 × 10^{8} K). The shock Hugoniot curve is derived and the structure of the fluid is characterized with various pair correlation functions.
Coulomb tunneling for fusion reactions in dense matter: Path integral Monte Carlo versus mean field
A. I. Chugunov; H. E. DeWitt; D. G. Yakovlev
2007-07-24
We compare Path Integral Monte Carlo calculations by Militzer and Pollock (Phys. Rev. B 71, 134303, 2005) of Coulomb tunneling in nuclear reactions in dense matter to semiclassical calculations assuming WKB Coulomb barrier penetration through the radial mean-field potential. We find very good agreement between the two approaches at temperatures higher than ~1/5 of the ion plasma temperature. We obtain a simple parameterization of the mean-field potential and of the respective reaction rates. We analyze Gamow-peak energies of reacting ions in various reaction regimes and discuss theoretical uncertainties of nuclear reaction rates, taking carbon burning in dense stellar matter as an example.
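For reference, the semiclassical ingredient is the standard WKB barrier penetration factor through the radial mean-field potential (written here in generic notation; μ is the reduced mass of the reacting ions and r_1, r_2 the classical turning points where V(r) = E):

```latex
P_{\mathrm{WKB}} \;\simeq\; \exp\!\left[-\frac{2}{\hbar}\int_{r_1}^{r_2}
  \sqrt{2\mu\,\bigl(V(r)-E\bigr)}\;\mathrm{d}r\right],
\qquad V(r_1) = V(r_2) = E .
```

The comparison in the abstract amounts to checking this exponential suppression factor against the tunneling probability extracted from the fully quantum path integral calculation.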
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^{2}-10^{4}), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
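In its standard formulation, the CADIS construction can be stated compactly (schematic notation assumed here; φ† is the adjoint flux computed deterministically for the detector response of interest, and q the true source):

```latex
R = \int \phi^{\dagger}(\vec r, E)\, q(\vec r, E)\,\mathrm{d}V\,\mathrm{d}E,
\qquad
\hat q(\vec r, E) = \frac{\phi^{\dagger}(\vec r, E)\, q(\vec r, E)}{R},
\qquad
\bar w(\vec r, E) = \frac{R}{\phi^{\dagger}(\vec r, E)} .
```

Sampling the biased source q̂ and centering the weight windows at w̄ keeps the product of particle weight and adjoint importance roughly constant, which is the sense in which the source and transport biasing parameters are "consistent".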
Andújar, C; Arribas, P; Ruiz, C; Serrano, J; Gómez-Zurita, J
2014-09-01
In species differentiation, characters may not diverge synchronously, and there are also processes that shuffle character states in lineages descendant from a common ancestor. Species are thus expected to show some degree of incongruence among characters; therefore, taxonomic delimitation can benefit from integrative approaches and objective strategies that account for character conflict. We illustrate the potential of exploiting conflict for species delimitation in a study case of ground beetles of the subgenus Carabus (Mesocarabus), where traditional taxonomy does not accurately delimit species. The molecular phylogenies of four mitochondrial and three nuclear genes, cladistic analysis of the aedeagus, ecological niche divergence and morphometry of pronotal shape in more than 500 specimens of Mesocarabus show that these characters are not fully congruent. For these data, a three-step operational strategy is proposed for species delimitation by (i) delineating candidate species based on the integration of incongruence among conclusive lines of evidence, (ii) corroborating candidate species with inconclusive lines of evidence and (iii) refining a final species proposal based on an integrated characterization of candidate species based on the evolutionary analysis of incongruence. This procedure provided a general understanding of the reticulate process of hybridization and introgression acting on Mesocarabus and generated the hypothesis of seven Mesocarabus species, including two putative hybrid lineages. Our work emphasizes the importance of incorporating critical analyses of character and phylogenetic conflict to infer both the evolutionary history and species boundaries through an integrative taxonomic approach. PMID:24828576
Data assimilation using a GPU accelerated path integral Monte Carlo approach
NASA Astrophysics Data System (ADS)
Quinn, John C.; Abarbanel, Henry D. I.
2011-09-01
The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.
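The path-integral evaluation described above can be sketched at toy scale as a random-walk Metropolis sampler over a discretized state history. This is a minimal serial illustration, not the paper's Hodgkin-Huxley setup; the linear model, noise levels, and step size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear dynamics x_{t+1} = a * x_t standing in for the neuron model;
# the "path" is the whole state history x_0..x_{T-1}.
a, T = 0.9, 50
x_true = a ** np.arange(T)
y = x_true + rng.normal(0.0, 0.1, T)        # synthetic noisy observations

def action(x):
    # negative log of the path probability: measurement term + model term
    meas = np.sum((x - y) ** 2) / (2 * 0.1 ** 2)
    model = np.sum((x[1:] - a * x[:-1]) ** 2) / (2 * 0.05 ** 2)
    return meas + model

x, cur = y.copy(), action(y)                # start the chain at the data
for _ in range(20_000):                     # random-walk Metropolis on the path
    prop = x + rng.normal(0.0, 0.01, T)
    new = action(prop)
    if new < cur or rng.random() < np.exp(cur - new):
        x, cur = prop, new                  # accept with Metropolis probability
print(cur)
```

On a GPU, many such chains (or the per-time-slice updates within one chain) run in parallel; that parallelism is where the reported speed-up of up to ~300 comes from.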
Realization of a hybrid-integrated MEMS scanning grating spectrometer
NASA Astrophysics Data System (ADS)
Pügner, Tino; Knobbe, Jens; Grüger, Heinrich; Schenk, Harald
2012-06-01
Spectrometers and spectrographs based on scanning grating monochromators are well-established tools for various applications. As new applications have come into focus in the last few years, there is a demand for more sophisticated and miniaturized systems. The next generation of spectroscopic devices should exhibit very small dimensions and low power consumption. We have developed a spectroscopic system with a volume of only (15 × 10 × 14) mm3 and a few milliwatts of power consumption that has the potential to fulfill the demands of the upcoming applications. Our approach is based on two different strategies. First, we apply resonantly driven MEMS (micro-electro-mechanical systems). The latest generation of our MEMS scanning grating device has two integrated optical slits and piezoresistive position detection in addition to the already existing miniaturized 1-d scanning grating plate and the electrostatic driving mechanism. Our second strategy is to take advantage of the hybrid integration of optical components by highly sophisticated manufacturing technologies. One objective is the combination of MEMS technology and a planar mounting approach, which potentially facilitates the mass production of spectroscopic systems and a significant reduction of cost per unit. We present the optical system design as well as the realization of a miniaturized scanning grating spectrometer for the near-infrared (NIR) range between 950 nm and 1900 nm with a spectral resolution of 10 nm. The MEMS devices as well as the optical components have been manufactured, and first samples of the spectroscopic measurement device have been mounted by an automated die bonder.
Path Integral Monte Carlo finite-temperature electronic structure of quantum dots
NASA Astrophysics Data System (ADS)
Leino, Markku; Rantala, Tapio T.
2003-03-01
Quantum Monte Carlo methods allow a straightforward procedure for evaluating electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We apply the Path Integral Monte Carlo (PIMC) simulation method [2] to one and two electrons in single and double quantum dots. With this approach we evaluate the electronic distributions and correlations, and finite-temperature effects on those. A temperature increase broadens the one-electron distribution as expected. This effect is smaller for correlated electrons than for single ones. The simulated one- and two-electron distributions of a single and of two coupled quantum dots are also compared to those from experiments and other theoretical (0 K) methods [3]. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. This and other essential aspects of PIMC and its capability in this type of calculation are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev. Mod. Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys. Rev. B 63, 115316 (2001).
Monte-Carlo experiments on star-cluster induced integrated-galaxy IMF variations
Carsten Weidner; Pavel Kroupa
2004-09-30
As most if not all stars are born in stellar clusters, the shape of the mass function of the field stars is determined not only by the initial mass function of stars (IMF) but also by the cluster mass function (CMF). In order to quantify this, Monte-Carlo simulations were carried out by taking cluster masses randomly from a CMF and then populating these clusters with stars randomly taken from an IMF. Two cases were studied. First, star masses were added randomly until the cluster mass was reached. Second, a number of stars, given by the cluster mass divided by an estimate of the mean stellar mass and sorted by mass, were added until the desired cluster mass was reached. Both experiments verified the analytical results of Kroupa & Weidner (2003) that the resulting integrated stellar initial mass function is a folding of the IMF with the CMF and is therefore steeper than the input IMF above 1 Msol.
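The first sampling experiment (adding stars drawn from the IMF until the cluster mass is reached) is easy to reproduce at toy scale. The slopes, mass limits, and cluster count below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_powerlaw(alpha, lo, hi, n, rng):
    # inverse-CDF sampling of p(m) ~ m^-alpha on [lo, hi]
    u = rng.random(n)
    g = 1.0 - alpha
    return (lo**g + u * (hi**g - lo**g)) ** (1.0 / g)

# Hypothetical parameters: Salpeter-like IMF slope 2.35, CMF slope 2.0.
cluster_masses = sample_powerlaw(2.0, 50.0, 1e4, 200, rng)

field = []
for m_cl in cluster_masses:            # populate each cluster star by star
    total = 0.0
    while total < m_cl:                # "stop when the cluster mass is reached"
        m = sample_powerlaw(2.35, 0.1, 100.0, 1, rng)[0]
        field.append(m)
        total += m
field = np.array(field)

# The pooled "field" sample is what the integrated mass function is built from;
# per the cited analytical result it is steeper than the input IMF above ~1 Msol.
frac_massive = float(np.mean(field > 10.0))
print(field.size, frac_massive)
```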
WORM ALGORITHM PATH INTEGRAL MONTE CARLO APPLIED TO THE 3He-4He II SANDWICH SYSTEM
NASA Astrophysics Data System (ADS)
Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.
2012-12-01
We present a numerical investigation of the thermal and structural properties of the 3He-4He sandwich system adsorbed on a graphite substrate using the worm algorithm path integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code originally adapted for 4He on graphite, by including the second 3He-component. To describe the fermions, a temperature-dependent statistical potential has been used. This has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
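One of the listed techniques, replacing a linear search with a binary version, can be illustrated with the common transport-code task of locating a particle's energy bin. The grid and helper names here are hypothetical, not ITS source code:

```python
import bisect

# Ascending logarithmic energy grid (MeV), looked up once per particle step.
grid = [0.01 * 1.25**i for i in range(60)]

def find_bin_linear(grid, e):
    # original O(n) scan over grid edges
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    return len(grid) - 2

def find_bin_binary(grid, e):
    # O(log n) replacement: bisect_right gives the insertion point;
    # clamp it to a valid bin index
    return min(max(bisect.bisect_right(grid, e) - 1, 0), len(grid) - 2)

# Both lookups agree for in-range energies; only the cost differs.
assert find_bin_linear(grid, 3.7) == find_bin_binary(grid, 3.7)
```

For a 60-point grid called millions of times per run, this kind of swap is exactly where the reported speed-up factors of roughly 2 come from.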
Torsional path integral Monte Carlo method for the quantum simulation of large molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F.; Clary, David C.
2002-05-01
A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energies of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.
Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach
NASA Astrophysics Data System (ADS)
Velizhanin, Kirill A.; Saxena, Avadh
2015-11-01
One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which could remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In the present work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.
Fermionic path-integral Monte Carlo results for the uniform electron gas at finite temperature
NASA Astrophysics Data System (ADS)
Filinov, V. S.; Fortov, V. E.; Bonitz, M.; Moldabekov, Zh.
2015-03-01
The uniform electron gas (UEG) at finite temperature has recently attracted substantial interest due to the experimental progress in the field of warm dense matter. To explain the experimental data, accurate theoretical models for high-density plasmas are needed that depend crucially on the quality of the thermodynamic properties of the quantum degenerate nonideal electrons and of the treatment of their interaction with the positive background. Recent restricted path-integral Monte Carlo (RPIMC) data are believed to be the most accurate for the UEG at finite temperature, but they become questionable at high degeneracy when the Brueckner parameter r_s = a/a_B, the ratio of the mean interparticle distance to the Bohr radius, approaches 1. The validity range of these simulations and their predictive capabilities for the UEG are presently unknown. This is due to the unknown quality of the used fixed nodes and of the finite-size scaling from N = 33 simulated particles (per spin projection) to the macroscopic limit. To analyze these questions, we present alternative direct fermionic path integral Monte Carlo (DPIMC) simulations that are independent from RPIMC. Our simulations take into account quantum effects not only in the electron system but also in their interaction with the uniform positive background. Also, we use substantially larger particle numbers (up to three times more) and perform an extrapolation to the macroscopic limit. We observe very good agreement with RPIMC, for the polarized electron gas, up to moderate densities around r_s = 4, and larger deviations for the unpolarized case, for low temperatures. For higher densities (high electron degeneracy), r_s ≲ 1.5, both RPIMC and DPIMC are problematic due to the increased fermion sign problem.
Dunn, K. L.; Wilson, P. P. H.
2013-07-01
A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
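A one-dimensional sketch of the integral-track idea, scoring a kernel integrated along each particle track rather than only at discrete collision sites, might look like the following. The toy problem, kernel choice, bandwidth, and track distribution are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kde_track_tally(tracks, x, h):
    # tracks: list of (start, end) path segments from the random walk
    # x: evaluation points; h: kernel bandwidth
    est = np.zeros_like(x)
    for a, b in tracks:
        t = np.linspace(a, b, 64)                    # points along the track
        u = (x[:, None] - t[None, :]) / h
        k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov
        # integrate the kernel over the track (mean value times track length)
        est += (b - a) * k.mean(axis=1) / h
    return est / len(tracks)

rng = np.random.default_rng(2)
starts = rng.random(500)
tracks = [(s, s + rng.exponential(0.5)) for s in starts]  # synthetic tracks
x = np.linspace(0.0, 1.0, 11)                             # nodal tally points
flux = kde_track_tally(tracks, x, h=0.1)
print(flux)
```

Because the result is nodal rather than cell-averaged, comparing it against a cell-based tally requires exactly the data-transfer step the abstract discusses.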
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
NASA Technical Reports Server (NTRS)
Park, Han G.; Cannon, Howard; Bajwa, Anupa; Mackey, Ryan; James, Mark; Maul, William
2004-01-01
This paper describes the initial integration of a hybrid reasoning system utilizing a continuous domain feature-based detector, Beacon-based Exceptions Analysis for Multimissions (BEAM), and a discrete domain model-based reasoner, Livingstone.
Integrating Shape and Texture in Deformable Models: from Hybrid Methods to Metamorphs
Huang, Xiaolei
A new class of deformable models, which we term "Metamorphs", integrates shape and texture. The novel formulation of the Metamorph models tightly couples shape and region information in a variational framework. Keywords: Metamorphs, deformable models, implicit
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
Streamline Integration using MPI-Hybrid Parallelism on a Large Multi-Core Architecture
Camp, David; Garth, Christoph; Childs, Hank; Pugmire, Dave; Joy, Kenneth I.
2010-11-01
Streamline computation in a very large vector field data set represents a significant challenge due to the non-local and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multicore platform. With multi-core processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize-over-seeds and parallelize-over-blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, non-hybrid distributed implementation.
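The parallelize-over-seeds paradigm mentioned above assigns whole streamlines to workers. A minimal single-node sketch, with an analytic stand-in for the vector field and Python threads standing in for MPI ranks, is:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def velocity(p):
    # analytic circular field standing in for the large vector field data set
    x, y = p
    return np.array([-y, x])

def streamline(seed, h=0.01, steps=500):
    # classical RK4 integration of one streamline from one seed point
    p = np.array(seed, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

seeds = [(1.0, 0.0), (2.0, 0.0), (0.0, 1.5)]
# parallelize-over-seeds: each worker owns entire streamlines
with ThreadPoolExecutor(max_workers=3) as ex:
    paths = list(ex.map(streamline, seeds))
print([len(p) for p in paths])
```

In the real setting each worker must also fault in the data blocks its streamlines visit, which is exactly the non-local behavior that makes the distribution strategy matter.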
An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm
Donev, A; Garcia, A L; Alder, B J
2007-07-30
A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms, rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level, however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.
B. Kamala Latha; G. Sai Preeti; K. P. N. Murthy; V. S. S. Sastry
2015-07-30
Equilibrium director structures in two thin hybrid planar films of biaxial nematics are investigated through Markov chain Monte Carlo simulations based on a lattice Hamiltonian model within the London dispersion approximation. While the substrates of the two films induce similar anchoring influences on the long axes of the liquid crystal molecules (viz. planar orientation at one end and perpendicular, or homeotropic, orientations at the other), they differ in their coupling with the minor axes of the molecules. In Type-A film the substrates do not interact with the minor axes at all (which is experimentally relatively more amenable), while in Type-B, the orientations of the molecular axes at the surface layer are influenced as well by their biaxial coupling with the surface. Both films exhibit expected bending of the director associated with ordering of the molecular long axes due to surface anchoring. Simulation results indicate that the Type-A film hosts stable and noise free director structures in the biaxial nematic phase of the LC medium, resulting from dominant ordering of one of the minor axes in the plane of the substrates. High degree of this stable order thus developed could be of practical interest for in-plane switching applications with an external field. Type-B film, on the other hand, experiences competing interactions among the minor axes, due to incompatible anchoring influences at the bounding substrates, apparently leading to frustration, and hence to noisy equilibrium director structures.
Integrated graphene/nanoparticle hybrids for biological and electronic applications.
Nguyen, Kim Truc; Zhao, Yanli
2014-06-21
The development of novel graphene/nanoparticle hybrid materials is currently the subject of tremendous research interest. The intrinsic exceptional assets of both graphene (including graphene oxide and reduced graphene oxide) and nanoparticles render their hybrid materials synergic properties that can be useful in various applications. In this feature review, we highlight recent developments in graphene/nanoparticle hybrids and their promising potential in electronic and biological applications. First, the latest advances in synthetic methods for the preparation of the graphene/nanoparticle hybrids are introduced, with the emphasis on approaches to (1) decorate nanoparticles onto two-dimensional graphene and (2) wrap nanoparticles with graphene sheets. The pros and cons of large-scale synthesis are also discussed. Then, the state-of-the-art of graphene/nanoparticle hybrids in electronic and biological applications is reviewed. For electronic applications, we focus on the advantages of using these hybrids in transparent conducting films, as well as energy harvesting and storage. For biological applications, electrochemical biosensing, bioimaging, and drug delivery using the hybrids are showcased. Finally, the future research prospects and challenges in this rapidly developing area are discussed. PMID:24752364
A hybrid approach to device integration on a genetic analysis platform
NASA Astrophysics Data System (ADS)
Brennan, Des; Jary, Dorothee; Kurg, Ants; Berik, Evgeny; Justice, John; Aherne, Margaret; Macek, Milan; Galvin, Paul
2012-10-01
Point-of-care (POC) systems require significant component integration to implement biochemical protocols associated with molecular diagnostic assays. Hybrid platforms where discrete components are combined in a single platform are a suitable approach to integration, where combining multiple device fabrication steps on a single substrate is not possible due to incompatible or costly fabrication steps. We integrate three devices each with a specific system functionality: (i) a silicon electro-wetting-on-dielectric (EWOD) device to move and mix sample and reagent droplets in an oil phase, (ii) a polymer microfluidic chip containing channels and reservoirs and (iii) an aqueous phase glass microarray for fluorescence microarray hybridization detection. The EWOD device offers the possibility of fully integrating on-chip sample preparation using nanolitre sample and reagent volumes. A key challenge is sample transfer from the oil phase EWOD device to the aqueous phase microarray for hybridization detection. The EWOD device, waveguide performance and functionality are maintained during the integration process. An on-chip biochemical protocol for arrayed primer extension (APEX) was implemented for single nucleotide polymorphism (SNiP) analysis. The prepared sample is aspirated from the EWOD oil phase to the aqueous phase microarray for hybridization. A bench-top instrumentation system was also developed around the integrated platform to drive the EWOD electrodes, implement APEX sample heating and image the microarray after hybridization.
Monte Carlo solution of the volume-integral equation of electromagnetic scattering
NASA Astrophysics Data System (ADS)
Peltoniemi, J.; Muinonen, K.
2014-07-01
Electromagnetic scattering is often the main physical process to be understood when interpreting the observations of asteroids, comets, and meteors. Modeling the scattering still faces many problems, and one needs to assess several different cases: multiple scattering and shadowing by the rough surface, multiple scattering inside a surface element, and single scattering by a small object. Our specific goal is to extend the electromagnetic techniques to larger and more complicated objects, and to derive approximations taking into account the most important effects of waves. Here we experiment with Monte Carlo techniques: can they provide something new to solving the scattering problems? The electromagnetic wave equation in the presence of a scatterer of volume V and refractive index m, with an incident wave E_0, including boundary conditions and the scattering condition at infinity, can be presented in the form of an integral equation

E(r) (1 + χ(r) Q(ρ)) − ∫_{V−V_ρ} dr′ G(r − r′) χ(r′) E(r′) = E_0,

where χ(r) = m(r)² − 1 and Q(ρ) = −1/3 + O(ρ²) + O′(m²ρ²), with O and O′ second- and higher-order corrections for the finite-size volume V_ρ of radius ρ around the singularity, and where G is the dyadic Green's function of the form

G(R) = exp(imkR)/(4πR) [ 1 (1 + im/R − 1/R²) − R̂R̂ (1 + 3im/R − 3/R²) ],

with 1 the unit dyadic and R̂ the unit vector along R. In general, this is solved by expanding the internal field in terms of some simple basis functions, e.g., plane or spherical waves or a cubic grid, approximating the integrals in a clever way, and determining the goodness of the solution somehow, e.g., by moments or least squares. Whatever the choice, the solution usually converges nicely towards a correct enough solution when the scatterer is small and simple, and diverges when the scatterer becomes too complicated. With certain methods, one can reach larger scatterers faster, but the memory and CPU needs can be huge.
Until today, all successful solutions have been based on more or less regular quadratures. Because of the oscillating singularity of the Green's function, the quadrature must match exactly the canceling patterns of the integrand, and any improper quadrature leads to large errors. Monte Carlo based integration thus appears a very bad choice, but we take the challenge and formulate the integration applying a three-finger rule to catch the singularity. Our other selections are the least-squares technique and the plane-wave basis, though both can be freely and easily changed. The singularity is treated fully numerically, and the radius ρ is assumed so small that the correction terms do not contribute. Any other choice only worsens the accuracy, without a significant gain in speed. As with any other technique, we can solve small spheres of size x < 5/|m| within an hour of processor time with about 1% accuracy for a large range of refractive indices. In speed, this technique does not compete with faster techniques such as ADDA, but in some random cases the accuracy can be even better (probably due to a sub-optimal singularity formula in ADDA; applying numerical integration there as well could probably make ADDA the winner in all cases). We continue towards more complicated cases and multiple scattering to see if some further improvements can be made.
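The central difficulty, that the quadrature must respect the integrand's singularity, already shows up in one dimension. The sketch below (a generic importance-sampling illustration, not the authors' three-finger rule) estimates the integral of cos(x)/sqrt(x) on (0, 1], where naive uniform sampling has infinite variance but a substitution absorbing the 1/sqrt(x) singularity tames it:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Naive uniform sampling: the 1/sqrt(x) spike gives the estimator
# infinite variance, so its convergence is erratic.
x = rng.random(n)
naive = np.mean(np.cos(x) / np.sqrt(x))

# Importance sampling with p(x) ~ 1/sqrt(x), realized by x = u^2:
# the transformed integrand 2*cos(u^2) is smooth and bounded.
u = rng.random(n)
smart = np.mean(2.0 * np.cos(u**2))

print(naive, smart)   # exact value ~ 1.809
```

The three-dimensional problem is far harder because the Green's-function singularity also oscillates, but the moral is the same: the sampling scheme, not brute force, has to carry the singularity.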
Integration of LED chip within patch antenna geometry for hybrid FSO/RF communication
Huang, Zhaoran "Rena"
A communication transmitter using an LED integrated within the geometry of a planar patch antenna on a shared substrate is demonstrated. An experimental FSO link is constructed with a bare die visible LED
Hybrid plasmon photonic crystal resonance grating for integrated spectrometer biosensor.
Guo, Hong; Guo, Junpeng
2015-01-15
Using nanofabricated hybrid metal-dielectric nanohole array photonic crystal gratings, a hybrid plasmonic optical resonance spectrometer biosensor is demonstrated. The new spectrometer sensor technique measures plasmonic optical resonance from the first-order diffraction rather than via the traditional method of measuring optical resonance from transmission. The resonance spectra measured with the new spectrometer technique are compared with the spectra measured using a commercial optical spectrometer. It is shown that the new optical resonance spectrometer can be used to measure plasmonic optical resonance that otherwise cannot be measured with a regular optical spectrometer. PMID:25679856
Levashov, P R; Filinov, V S; Fortov, V E
2006-01-01
In this work we calculate the thermodynamic properties of hydrogen-helium plasmas with different mass fractions of helium by the direct path integral Monte Carlo method. To avoid unphysical approximations we use the path integral representation of the density matrix. We pay special attention to the region of weak coupling and degeneracy and compare the results of simulation with a model based on the chemical picture. Further with the help of calculated deuterium isochors we compute the shock Hugoniot of deuterium. We analyze our results in comparison with recent experimental and calculated data on the deuterium Hugoniot.
Flight tests of a hybrid-centered integrated 3D perspective-view primary flight display
NASA Astrophysics Data System (ADS)
He, Gang; Feyereisen, Thea; Wilson, Blake; Wyatt, Sandy; Engels, Jary
2006-05-01
This paper describes flight tests of a Honeywell Synthetic Vision System (SVS) prototype operating in a hybrid-centered mode on a Primus Epic™ large-format display. This novel hybrid mode effectively resolves some cognitive and perceptual human factors issues associated with traditional heading-up or track-up display modes. By integrating a synthetic 3D perspective view with advanced Head-Up Display (HUD) symbology in this mode, the test results demonstrate that the hybrid display mode provides clear indications of current track and crab conditions, and is effective in overcoming flight guidance symbology collision and the resultant ambiguity. The hybrid-centering SVS display concept is shown to be effective in all phases of flight and is particularly valuable during landing operations with a strong cross-wind. The recorded flight test data from Honeywell's prototype SVS concept at Reno, Nevada, on board a Honeywell Citation V aircraft will be discussed.
NASA Astrophysics Data System (ADS)
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
The energy demand of distillation-molecular sieve systems for ethanol recovery/dehydration can be significant, particularly for dilute solutions. An alternative hybrid process integrating vapor stripping (like a beer still) with vapor compression and a vapor permeation membrane s...
Conceptual Integration of Hybridization by Algerian Students Intending to Teach Physical Sciences
ERIC Educational Resources Information Center
Salah, Hazzi; Dumon, Alain
2011-01-01
This work aims to assess the difficulties encountered by students of the Ecole Normale Superieure of Kouba (Algeria) intending to teach physical science in the integration of the hybridization of atomic orbitals. It is a concept that they should use in describing the formation of molecular orbitals ([sigma] and [pi]) in organic chemistry and gaps…
HiFi-WiN: Hybrid Integrated Fiber-Wireless Networking for Broadband Metropolitan Area
Kansas, University of
HiFi-WiN: Hybrid Integrated Fiber-Wireless Networking for Broadband Metropolitan Area Access robustness and security of first/last-mile broadband wireless access links within a metropolitan service area. Although often employed in cellular telephony networks, where the data rates between the central
Design of integrated hybrid silicon waveguide optical gyroscope.
Srinivasan, Sudharsanan; Moreira, Renan; Blumenthal, Daniel; Bowers, John E
2014-10-20
We propose and analyze a novel highly integrated optical gyroscope using low-loss silicon nitride waveguides. By integrating the active optical components on chip, we show the possibility of reaching a detection limit on the order of 19°/hr/√Hz in an area smaller than 10 cm². This study examines a number of parameters, including the dependence of sensitivity on sensor area. PMID:25401532
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate provides both (1) more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) an illustration of the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on:
- A high-quality CPM schedule with logic tight enough that it will provide the correct dates and critical paths during simulation automatically, without manual intervention.
- A contingency-free estimate of project costs that is loaded on the activities of the schedule, resolving inconsistencies between cost estimate and schedule that often creep into those documents as project execution proceeds.
- Good-quality risk data, usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method, which rests on the fundamental principle that identifiable risks drive overall cost and schedule risk.
- A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk and time-independent resource risk.
The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project managers have the option of re-planning and re-scheduling in the face of new facts, in part by m
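The integrated cost-schedule simulation described in this RP can be sketched in miniature. Everything below is hypothetical: the two activities, their triangular duration distributions, and the burn rates are invented for illustration; only the mechanics follow the abstract (sample durations, derive time-dependent costs, collect cost-time pairs, read off a P-80 level).

```python
import random

def simulate_project(n_trials=20_000, seed=1):
    """Toy integrated cost-schedule Monte Carlo: two serial activities with
    triangular duration risk; costs of time-dependent resources scale with duration."""
    random.seed(seed)
    pairs = []
    for _ in range(n_trials):
        d1 = random.triangular(8, 16, 10)   # engineering, days (low, high, mode) - hypothetical
        d2 = random.triangular(20, 40, 25)  # construction, days - hypothetical
        duration = d1 + d2
        # burn rates plus a level-of-effort management team priced per project day
        cost = 5000 * d1 + 12000 * d2 + 2000 * duration
        pairs.append((duration, cost))
    return pairs

def percentile(values, p):
    """Read a P-level (e.g. p=0.80 for the P-80 contingency level) from a sample."""
    s = sorted(values)
    return s[min(int(p * len(s)), len(s) - 1)]

pairs = simulate_project()
p80_time = percentile([t for t, _ in pairs], 0.80)
p80_cost = percentile([c for _, c in pairs], 0.80)
```

The `pairs` list is exactly the raw material for the 'football chart' scatter diagram; contingency is the gap between the P-80 values and the deterministic plan.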
Hybrid Integrated Label-Free Chemical and Biological Sensors
Mehrabani, Simin; Maker, Ashley J.; Armani, Andrea M.
2014-01-01
Label-free sensors based on electrical, mechanical and optical transduction methods have potential applications in numerous areas of society, ranging from healthcare to environmental monitoring. Initial research in the field focused on the development and optimization of various sensor platforms fabricated from a single material system, such as fiber-based optical sensors and silicon nanowire-based electrical sensors. However, more recent research efforts have explored designing sensors fabricated from multiple materials. For example, synthetic materials and/or biomaterials can also be added to the sensor to improve its response toward analytes of interest. By leveraging the properties of the different material systems, these hybrid sensing devices can have significantly improved performance over their single-material counterparts (better sensitivity, specificity, signal to noise, and/or detection limits). This review will briefly discuss some of the methods for creating these multi-material sensor platforms and the advances enabled by this design approach. PMID:24675757
Hybrid-integrated prism array optoelectronic targeting system
NASA Astrophysics Data System (ADS)
Chang, C. C.; Chang, H. C.; Tang, L. C.; Young, W. K.; Wang, J. C.; Huang, K. L.
2005-11-01
This investigation proposes a cost-effective, compact, and robust optoelectronic targeting system for measuring ballistic impact velocity and the distribution of projectile motion. The major elements of this system are four photo-gates hybridized by a compound one-dimensional prism array and analog/digital electronic components. The number of light sources and photodetectors used in a photo-gate was reduced to a single pair of light source and photodetector. The average velocity and location of the projectile are determined from the measured time intervals (~10^-8 s) for passing each pair. The system can accurately measure the velocity of a bullet as it leaves a gun barrel, as well as the velocity at specific points along the trajectory outside the firearm. Additionally, the system uses a widespread low-powered laser pointer as its light source. Compared with other optoelectronic targeting systems that use high-powered lasers, the proposed system is both economical and safe.
NASA Astrophysics Data System (ADS)
Cerjanic, Alexander M.
The development of a spectral domain method of moments code for the modeling of single layer microstrip patch antennas is presented in this thesis. The mixed potential integral equation formulation of Maxwell's equations is used as the theoretical basis for the work, and is solved via the method of moments. General-purpose graphics processing units are used for the computation of the impedance matrix by incorporation of quasi-Monte Carlo integration. The development of the various components of the code, including Green's function, impedance matrix, and excitation vector modules are discussed with individual test cases for the major code modules. The integrated code was tested by modeling a suite of four coaxially probe fed circularly polarized single layer microstrip patch antennas and the computed results are compared to those obtained by measurement. Finally, a study examining the relationship between design parameters and S11 performance was undertaken using the code.
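Quasi-Monte Carlo integration of the kind used for the impedance-matrix entries replaces pseudo-random points with a low-discrepancy sequence. A minimal sketch, assuming nothing about the actual Green's-function integrands: a hand-rolled 2-D Halton sequence applied to a test integrand with a known value.

```python
def halton(index, base):
    """Radical inverse of `index` in `base` (van der Corput): one Halton coordinate."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def qmc_integrate(f, n=4096):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square,
    using a 2-D Halton sequence (bases 2 and 3) instead of random points."""
    total = 0.0
    for i in range(1, n + 1):
        total += f(halton(i, 2), halton(i, 3))
    return total / n

est = qmc_integrate(lambda x, y: x * y)  # exact value is 1/4
```

Low-discrepancy points cover the domain more evenly than random draws, so the error decays roughly like (log n)^2/n rather than n^-1/2, which is what makes the approach attractive for GPU-batched matrix fills.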
El-Kady, Maher F; Ihns, Melanie; Li, Mengping; Hwang, Jee Youn; Mousavi, Mir F; Chaney, Lindsay; Lech, Andrew T; Kaner, Richard B
2015-04-01
Supercapacitors now play an important role in the progress of hybrid and electric vehicles, consumer electronics, and military and space applications. There is a growing demand in developing hybrid supercapacitor systems to overcome the energy density limitations of the current generation of carbon-based supercapacitors. Here, we demonstrate 3D high-performance hybrid supercapacitors and microsupercapacitors based on graphene and MnO2 by rationally designing the electrode microstructure and combining active materials with electrolytes that operate at high voltages. This results in hybrid electrodes with ultrahigh volumetric capacitance of over 1,100 F/cm(3). This corresponds to a specific capacitance of the constituent MnO2 of 1,145 F/g, which is close to the theoretical value of 1,380 F/g. The energy density of the full device varies between 22 and 42 Wh/l depending on the device configuration, which is superior to those of commercially available double-layer supercapacitors, pseudocapacitors, lithium-ion capacitors, and hybrid supercapacitors tested under the same conditions and is comparable to that of lead acid batteries. These hybrid supercapacitors use aqueous electrolytes and are assembled in air without the need for expensive "dry rooms" required for building today's supercapacitors. Furthermore, we demonstrate a simple technique for the fabrication of supercapacitor arrays for high-voltage applications. These arrays can be integrated with solar cells for efficient energy harvesting and storage systems. PMID:25831542
Outcome of the First wwPDB Hybrid/Integrative Methods Task Force Workshop.
Sali, Andrej; Berman, Helen M; Schwede, Torsten; Trewhella, Jill; Kleywegt, Gerard; Burley, Stephen K; Markley, John; Nakamura, Haruki; Adams, Paul; Bonvin, Alexandre M J J; Chiu, Wah; Peraro, Matteo Dal; Di Maio, Frank; Ferrin, Thomas E; Grünewald, Kay; Gutmanas, Aleksandras; Henderson, Richard; Hummer, Gerhard; Iwasaki, Kenji; Johnson, Graham; Lawson, Catherine L; Meiler, Jens; Marti-Renom, Marc A; Montelione, Gaetano T; Nilges, Michael; Nussinov, Ruth; Patwardhan, Ardan; Rappsilber, Juri; Read, Randy J; Saibil, Helen; Schröder, Gunnar F; Schwieters, Charles D; Seidel, Claus A M; Svergun, Dmitri; Topf, Maya; Ulrich, Eldon L; Velankar, Sameer; Westbrook, John D
2015-07-01
Structures of biomolecular systems are increasingly computed by integrative modeling that relies on varied types of experimental data and theoretical information. We describe here the proceedings and conclusions from the first wwPDB Hybrid/Integrative Methods Task Force Workshop held at the European Bioinformatics Institute in Hinxton, UK, on October 6 and 7, 2014. At the workshop, experts in various experimental fields of structural biology, experts in integrative modeling and visualization, and experts in data archiving addressed a series of questions central to the future of structural biology. How should integrative models be represented? How should the data and integrative models be validated? What data should be archived? How should the data and models be archived? What information should accompany the publication of integrative models? PMID:26095030
Militzer, Burkhard; Driver, Kevin P
2015-10-23
We extend the applicability range of fermionic path integral Monte Carlo simulations to heavier elements and lower temperatures by introducing various localized nodal surfaces. Hartree-Fock nodes yield the most accurate prediction for pressure and internal energy, which we combine with the results from density functional molecular dynamics simulations to obtain a consistent equation of state for hot, dense silicon under plasma conditions and in the regime of warm dense matter (2.3-18.6 g cm^{-3}, 5.0×10^{5}-1.3×10^{8} K). The shock Hugoniot curve is derived and the structure of the fluid is characterized with various pair correlation functions. PMID:26551129
Nabok, Dmitrii; Puschnig, Peter; Ambrosch-Draxl, Claudia
2011-01-01
The treatment of van der Waals interactions in density functional theory is an important field of ongoing research. Among different approaches developed recently to capture these non-local interactions, the van der Waals density functional (vdW-DF) developed in the groups of Langreth and Lundqvist is becoming increasingly popular. It does not rely on empirical parameters, and has been successfully applied to molecules, surface systems, and weakly-bound solids. As the vdW-DF requires the evaluation of a six-dimensional integral, it scales, however, unfavorably with system size. In this work, we present a numerically efficient implementation based on the Monte-Carlo technique for multi-dimensional integration. It can handle different versions of vdW-DF. Applications range from simple dimers to complex structures such as molecular crystals and organic molecules physisorbed on metal surfaces. PMID:21822326
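The Monte-Carlo technique mentioned for the six-dimensional vdW-DF integral can be illustrated generically: sample random points in the 6-D domain and average the integrand. The integrand below is a separable stand-in with a known answer, not the vdW-DF kernel.

```python
import random

def mc_integral_6d(f, n=200_000, seed=7):
    """Plain Monte Carlo estimate of a six-dimensional integral over the unit
    hypercube -- the same strategy that tames the 6-D vdW-DF double spatial
    integral, applied here to a stand-in integrand with a known exact value."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        pt = [rng.random() for _ in range(6)]
        acc += f(pt)
    return acc / n

# Stand-in kernel: product of the coordinates; exact integral is (1/2)**6 = 0.015625.
est = mc_integral_6d(lambda p: p[0] * p[1] * p[2] * p[3] * p[4] * p[5])
```

The key point is that the cost grows with the number of samples, not exponentially with dimension, which is why stochastic integration scales better than a 6-D quadrature grid.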
NASA Astrophysics Data System (ADS)
Kubota, Shigeru; Kanomata, Kensaku; Suzuki, Takahiko; Hirose, Fumihiko
2014-10-01
Antireflection structures (ARS) for solar cells fall into two main categories: surface texturing, and single- or multi-layer antireflection interference coatings. In this study, we propose a novel hybrid ARS, which integrates moth-eye texturing and a multi-layer coating, for application to organic photovoltaics (OPVs). Using optical simulations based on the finite-difference time-domain (FDTD) method, we conduct a nearly global optimization of the geometric parameters characterizing the hybrid ARS. The proposed optimization algorithm consists of two steps: in the first step, we optimize the period and height of the moth-eye array in the absence of the multi-layer coating. In the second step, we optimize the whole structure of the hybrid ARS, using the solution obtained in the first step as the starting search point. A simple grid search and the Hooke and Jeeves pattern search are used for the global and local searches, respectively. In addition, we study the effects of deviations in the geometric parameters of the hybrid ARS from their optimized values. The design concept of the hybrid ARS is highly beneficial for broadband light trapping in OPVs.
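The Hooke and Jeeves pattern search used for the local step can be sketched as follows. The objective here is a hypothetical smooth test function, not the FDTD-derived reflectance; only the exploratory-move / pattern-move structure is the point.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimal Hooke and Jeeves pattern search: exploratory moves along each
    coordinate, then a pattern move extrapolating through the improved point;
    the step size is shrunk whenever no coordinate move helps."""
    base = list(x0)
    fbase = f(base)
    while step > tol:
        # Exploratory phase: probe +/- step along each coordinate, keeping gains.
        trial, ftrial = list(base), fbase
        for i in range(len(trial)):
            for d in (step, -step):
                cand = list(trial)
                cand[i] += d
                fc = f(cand)
                if fc < ftrial:
                    trial, ftrial = cand, fc
                    break
        if ftrial < fbase:
            # Pattern move: jump along the direction that just succeeded.
            pattern = [2 * t - b for t, b in zip(trial, base)]
            base, fbase = trial, ftrial
            fp = f(pattern)
            if fp < fbase:
                base, fbase = pattern, fp
        else:
            step *= shrink
    return base, fbase

# Hypothetical objective with its minimum at (1, 2).
xmin, fmin = hooke_jeeves(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, [0.0, 0.0])
```

Because each iteration needs only function values (no gradients), the method pairs naturally with expensive black-box simulations such as FDTD runs.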
Hybrid photonic integrated circuits for faster and greener optical communication networks
NASA Astrophysics Data System (ADS)
Stampoulidis, L.; Kehayas, E.; Zimmermann, L.
2011-01-01
We present current development efforts on hybrid photonic integration for new-generation "faster and greener" Tb/s-capacity optical networks. On the physical layer, we present the development of a versatile, silicon-based photonic integration platform that acts as a technology "blender" bringing together different material systems, including III-V and silicon-based semiconductors. The platform is also used to implement so-called O-to-O (optical-to-optical) functionalities by patterning low-loss passive components such as MMI couplers and delay interferometers. With these passive building blocks, as well as the ability for hybrid assembly of active material, we demonstrate the fabrication of key optical transport and routing devices such as optical demodulators and all-optical wavelength converters. These devices can now be used to fabricate chip-scale 100 GbE transceiver PICs and Tb/s-capacity wavelength switching platforms.
Broadband silicon optical modulator using a graphene-integrated hybrid plasmonic waveguide.
Shin, Jin-Soo; Kim, Jin Tae
2015-09-11
Graphene is an excellent electronic and photonic material for developing electronic-photonic integrated circuits in Si-based semiconductor devices with ultra-wide operational bandwidth. As an extended application, here we propose a broadband silicon optical modulator using a graphene-integrated hybrid plasmonic waveguide, and investigate its optical characteristics numerically at a wavelength of 1.55 μm. The optical device is based on the surface plasmon polariton absorption of graphene. By electrically tuning the graphene's refractive index as low as that of a noble metal, the hybrid plasmonic waveguide supports a strongly confined, highly lossy hybrid long-range surface plasmon polariton strip mode, and hence light coupled from an input waveguide experiences significant power attenuation as it propagates along the waveguide. Over the entire C-band from 1.530 to 1.565 μm, the on/off extinction ratio is larger than 13.7 dB. This modulator has the potential to play a key role in realizing graphene-Si waveguide-based integrated photonic devices. PMID:26293975
980nm-1550nm vertically integrated duplexer for hybrid erbium-doped waveguide amplifiers on glass
NASA Astrophysics Data System (ADS)
Onestas, Lydie; Nappez, Thomas; Ghibaudo, Elise; Vitrant, Guy; Broquin, Jean-Emmanuel
2009-02-01
Ion-exchanged devices on glass have been successfully used to realize passive and active integrated optic devices for sensor and telecom applications. Nowadays, research is focused on reducing chip dimensions while increasing the number of functions integrated. In this paper we present how the use of two stacked optical layers allows the realization of an efficient and compact pump duplexer for an ion-exchanged hybrid erbium-doped waveguide amplifier. Indeed, our complete theoretical study of the device shows that excess losses lower than -0.1 dB and crosstalk lower than -20 dB can be achieved.
Urbic, T.; Holovko, M. F.
2011-01-01
An associative version of Henderson-Abraham-Barker theory is applied to the study of the Mercedes–Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between the water and the hydrophobic surface was also demonstrated and studied. PMID:21992334
NASA Astrophysics Data System (ADS)
Brito, B. G. A.; Cândido, Ladir; Hai, G.-Q.; Peeters, F. M.
2015-11-01
In order to study quantum effects in the two-dimensional crystal lattice of free-standing monolayer graphene, we have performed both path-integral Monte Carlo (PIMC) and classical Monte Carlo (MC) simulations for temperatures up to 2000 K. The REBO potential is used for the interatomic interaction. The total energy, interatomic distance, root-mean-square displacement of the atomic vibrations, and the free energy of the graphene layer are calculated. The lattice vibrational energy per atom obtained from the classical MC simulation is very close to the energy of a three-dimensional harmonic oscillator, 3kBT. The PIMC simulation shows that quantum effects due to zero-point vibrations are significant for temperatures T < 1000 K. The quantum contribution to the lattice vibrational energy becomes larger than that of the classical lattice for T < 400 K. The lattice expansion due to the zero-point motion causes an increase of 0.53% in the lattice parameter. A minimum in the lattice parameter appears at T ≈ 500 K. Quantum effects on the atomic vibration amplitude of the graphene lattice and its free energy are investigated.
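The classical-MC result quoted above (vibrational energy close to that of a three-dimensional harmonic oscillator, 3kBT) rests on equipartition, which a short Metropolis sampler can verify for a single harmonic degree of freedom. The spring constant, step width and units below are arbitrary choices, not taken from the paper.

```python
import math
import random

def mc_harmonic_potential_energy(beta, k=1.0, n_steps=200_000, seed=3):
    """Classical Metropolis Monte Carlo for one harmonic degree of freedom,
    V(x) = k x^2 / 2. Equipartition predicts <V> = 1/(2 beta); adding the
    kinetic half and three dimensions gives the 3 kB T per atom quoted above."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0  # start at the minimum, where V = 0
    acc = 0.0
    for _ in range(n_steps):
        xp = x + rng.uniform(-1.0, 1.0)
        vp = 0.5 * k * xp * xp
        # Metropolis acceptance on the Boltzmann weight exp(-beta V)
        if vp <= v or rng.random() < math.exp(-beta * (vp - v)):
            x, v = xp, vp
        acc += v
    return acc / n_steps

avg_v = mc_harmonic_potential_energy(beta=1.0)  # equipartition predicts 0.5
```

The PIMC-versus-classical comparison in the paper amounts to measuring how far the true quantum average departs from this classical value at low T.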
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation is running.
NASA Astrophysics Data System (ADS)
McGrath, J.; Marchal, J.; Medjoubi, K.
2013-10-01
Spectroscopic and imaging performance parameters of hybrid pixel detectors operated in single-X-ray-photon-counting mode can be inferred from the dependence of their sampling function (or aperture function) on the detection energy threshold. In a previous paper, it was shown that this dependence could be modelled using a simple analytical method. Measurements were performed on typical synchrotron X-ray detectors and fitted to the analytical formulas in order to obtain detector parameters such as charge-sharing width, energy dispersion and fill factor at 50% threshold. In the present paper, we use Monte-Carlo (MC) and Finite-Element-Modeling (FEM) software tools to perform a more detailed simulation of the image formation processes taking place in photon-counting hybrid pixel detectors of various pixel sizes, associated with standard silicon sensor thickness and exposed to 15 keV monochromatic X-rays. We show that the MC/FEM simulation results can be used to produce the detector parameters required in the analytical expressions of the sampling function of these detectors.
Path-integral Monte Carlo simulation of the structure of deuterium in the critical region
Neumann, M.; Zoppi, M.
1991-08-15
The site-site distribution function and structure factor of deuterium close to the critical point have been calculated by means of quantum path-integral Monte Carlo simulations. The calculations were performed using a spherically symmetric pairwise-additive potential for the distribution of the molecular center of mass and assuming that D2 may be regarded as a classical free rotor. Some of the problems connected with the simulation of a quantum system close to the critical point are analyzed in detail. The results of the simulations have been compared with the neutron-scattering experiment by Zoppi et al. (Phys. Rev. A 39, 4684 (1989)). The overall agreement is excellent, but some diffuse discrepancies are found. Possible origins of this inconsistency are discussed.
Dornheim, Tobias; Filinov, Alexey; Bonitz, Michael
2015-01-01
Correlated fermions are of high interest in condensed matter (Fermi liquids, Wigner molecules), cold atomic gases and dense plasmas. Here we propose a novel approach to path integral Monte Carlo (PIMC) simulations of strongly degenerate non-ideal fermions at finite temperature by combining a fourth-order factorization of the density matrix with antisymmetric propagators, i.e., determinants, between all imaginary time slices. To efficiently run through the modified configuration space, we introduce a modification of the widely used continuous-space worm algorithm, which allows for efficient sampling at arbitrary system parameters. We demonstrate how the application of determinants achieves an effective blocking of permutations with opposite signs, leading to a significant relief of the fermion sign problem. To benchmark the capability of our method regarding the simulation of degenerate fermions, we consider multiple electrons in a quantum dot and compare our results with other ab initio techniques, where ...
Kirk, B.L.
1985-12-01
The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2008-01-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator-detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ~10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques. PMID:18697552
Performance analysis of an OTEC plant and a desalination plant using an integrated hybrid cycle
Uehara, Haruo; Miyara, Akio; Ikegami, Yasuyuki; Nakaoka, Tsutomu
1996-05-01
A performance analysis of an OTEC plant using an integrated hybrid cycle (I-H OTEC cycle) has been conducted. The I-H OTEC cycle is a combination of a closed-cycle OTEC plant and a spray-flash desalination plant. In an I-H OTEC cycle, warm sea water evaporates the liquid ammonia in the OTEC evaporator, then enters the flash chamber, where it flash-evaporates. The evaporated steam enters the desalination condenser and is condensed by the cold sea water that has passed through the OTEC condenser. The optimization of the I-H OTEC cycle is analyzed by the method of steepest descent, with the total heat transfer area of the heat exchangers per unit net power as the objective function. Numerical results are reported for a 10 MW I-H OTEC cycle with plate-type heat exchangers and ammonia as the working fluid. The results are compared with those of a joint hybrid OTEC cycle (J-H OTEC cycle).
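The method of steepest descent used for the cycle optimization can be sketched generically. The quadratic objective below is a hypothetical stand-in for the heat-transfer-area-per-net-power function, whose actual form is not given in the abstract; only the descent mechanics are illustrated.

```python
def steepest_descent(f, x0, lr=0.1, h=1e-6, iters=500):
    """Method of steepest descent with a forward-difference gradient: step
    repeatedly in the direction of steepest decrease of the objective f."""
    x = list(x0)
    for _ in range(iters):
        fx = f(x)
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h
            grad.append((f(xp) - fx) / h)  # finite-difference partial derivative
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

# Hypothetical smooth objective with its minimum at (3, 1), standing in for
# (total heat transfer area) / (net power) as a function of two design variables.
opt = steepest_descent(lambda p: (p[0] - 3) ** 2 + 2 * (p[1] - 1) ** 2, [0.0, 0.0])
```

In the paper's setting the design variables would be cycle parameters such as heat-exchanger sizes and flow rates; each function evaluation there is a full cycle calculation rather than a one-line formula.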
Draft of M2 Report on Integration of the Hybrid Hydride Model into INL’s MBM Framework for Review
Tikare, Veena; Weck, Philippe F.; Schultz, Peter A.; Clark, Blythe; Glazoff, Michael; Homer, Eric
2014-07-01
This report documents the development, demonstration and validation of a mesoscale, microstructural evolution model for simulation of zirconium hydride δ-ZrH1.5 precipitation in the cladding of used nuclear fuels that may occur during long-term dry storage. While the Zr-based claddings are manufactured free of any hydrogen, they absorb hydrogen during service in the reactor by a process commonly termed 'hydrogen pick-up'. The precipitation and growth of zirconium hydrides during dry storage is one of the most likely fuel rod integrity failure mechanisms, either by embrittlement or by delayed hydride cracking of the cladding (Hanson et al., 2011). While the phenomenon is well documented and identified as a potential key failure mechanism during long-term dry storage (Birk et al., 2012 and NUREG/CR-7116), the ability to actually predict the formation of hydrides is poor. The model documented in this work is a computational capability for the prediction of hydride formation in different claddings of used nuclear fuels. This work supports the Used Fuel Disposition Research and Development Campaign in assessing the structural engineering performance of the cladding during and after long-term dry storage. This document demonstrates a basic hydride precipitation model built on a recently developed hybrid Potts-phase field model that combines elements of Potts-Monte Carlo and phase-field models (Homer et al., 2013; Tikare and Schultz, 2012). The model capabilities are demonstrated along with the incorporation of the starting microstructure, the thermodynamics of the Zr-H system and the hydride formation mechanism.
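The Potts-Monte Carlo half of such a hybrid model can be shown in miniature: a Metropolis sweep on a small 2-D q-state Potts lattice, where lowering the count of unlike-neighbour bonds plays the role of reducing boundary energy during microstructural evolution. Lattice size, q, and temperature are arbitrary illustrative choices, not parameters from the report.

```python
import math
import random

def potts_sweep(lattice, q, beta, rng):
    """One Metropolis sweep of a 2-D q-state Potts model with periodic
    boundaries: propose a new spin at a random site and accept with the
    Boltzmann probability on the change in unlike-neighbour bond count."""
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        old, new = lattice[i][j], rng.randrange(q)
        d_energy = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = lattice[(i + di) % n][(j + dj) % n]
            d_energy += (nb != new) - (nb != old)
        if d_energy <= 0 or rng.random() < math.exp(-beta * d_energy):
            lattice[i][j] = new

def potts_energy(lattice):
    """Count unlike nearest-neighbour bonds (right and down only, so each
    bond is counted once); this is the boundary energy being minimized."""
    n = len(lattice)
    return sum((lattice[i][j] != lattice[(i + 1) % n][j]) +
               (lattice[i][j] != lattice[i][(j + 1) % n])
               for i in range(n) for j in range(n))

rng = random.Random(11)
lat = [[rng.randrange(3) for _ in range(16)] for _ in range(16)]
e0 = potts_energy(lat)
for _ in range(50):
    potts_sweep(lat, q=3, beta=2.0, rng=rng)
e1 = potts_energy(lat)  # domain coarsening lowers the boundary energy
```

In the full hybrid scheme the Potts states label grains or phases (matrix vs. hydride) and the flip energetics are coupled to phase-field thermodynamics rather than a bare bond count.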
Sole means navigation and integrity through hybrid Loran-C and NAVSTAR GPS
NASA Technical Reports Server (NTRS)
Vangraas, Frank
1990-01-01
A sole means navigation system calls not only for integrity, but also for coverage, reliability, availability and accuracy. Even though ground-monitored GPS will provide integrity, availability is still not sufficient: one satellite outage can affect a large service area for several hours per day. The same holds for differential GPS; a total satellite outage cannot be corrected for. To obtain sufficient coverage, extra measurements are needed, either in the form of extra GPS satellites (expensive) or through redundant measurements from other systems. LORAN-C is available and, hybridized with GPS, will result in a system that has the potential to satisfy the requirements for a sole means navigation system for use in the continental United States. Assumptions are made about the qualification 'sole means', based mainly on current sole means systems such as VOR/DME. To allow for system designs that satisfy sole means requirements, it is recommended that a definition of a sole means navigation system be established. This definition must include requirements for availability, reliability, and integrity that are currently not specified. In addition to the definition of a sole means navigation system, certification requirements must be established for hybrid navigation systems. This will allow for the design and production of a new generation of airborne navigation systems that reduce overall system costs and simplify training procedures.
Integration of Multisensor Hybrid Reasoners to Support Personal Autonomy in the Smart Home
Valero, Miguel Ángel; Bravo, José; Chamizo, Juan Manuel García; López-de-Ipiña, Diego
2014-01-01
The deployment of the Ambient Intelligence (AmI) paradigm requires designing and integrating user-centered smart environments to assist people in their daily life activities. This research paper details an integration and validation of multiple heterogeneous sensors with hybrid reasoners that support decision making in order to monitor personal and environmental data at a smart home in a private way. The results innovate on knowledge-based platforms, distributed sensors, connected objects, accessibility and authentication methods to promote independent living for elderly people. TALISMAN+, the AmI framework deployed, integrates four subsystems in the smart home: (i) a mobile biomedical telemonitoring platform to provide elderly patients with continuous disease management; (ii) an integration middleware that allows context capture from heterogeneous sensors to program the environment's reaction; (iii) a vision system for intelligent monitoring of daily activities in the home; and (iv) an ontologies-based integrated reasoning platform to trigger local actions and manage private information in the smart home. The framework was integrated into two real running environments, the UPM Accessible Digital Home and MetalTIC house, and successfully validated by five experts in home care, elderly people and personal autonomy. PMID:25232910
ITS Version 6: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Arunraj, N S; Mandal, Saptarshi; Maiti, J
2013-06-01
Modeling uncertainty during risk assessment is a vital component of effective decision making. Unfortunately, most risk assessment studies suffer from a lack of uncertainty analysis. The development of tools and techniques for capturing uncertainty in risk assessment is ongoing, and there has been substantial growth in this respect in health risk assessment. In this study, cross-disciplinary approaches to uncertainty analysis are identified, and a modified approach suitable for industrial safety risk assessment is proposed using fuzzy set theory and Monte Carlo simulation. The proposed method is applied to a benzene extraction unit (BEU) of a chemical plant. The case study results show that the proposed method provides a better measure of uncertainty than existing methods: unlike traditional risk analysis, it takes both the variability and the uncertainty of information into account in the risk calculation, and instead of a single risk value it provides an interval of risk values for a given percentile of risk. The implications of these results in terms of risk control and regulatory compliance are also discussed. PMID:23567215
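The idea of combining interval (fuzzy) parameters with Monte Carlo sampling to obtain an interval of risk at a given percentile can be sketched in toy form. The lognormal severity distribution and the interval frequency factor below are assumptions for illustration, not the BEU model from the study:

```python
import random
random.seed(0)

def mc_risk_interval(n=100_000, percentile=0.95):
    """Toy hybrid uncertainty propagation: aleatory variability is sampled
    (Monte Carlo), while one epistemically uncertain parameter is carried
    as an interval (low, high). The result is an interval of risk values
    at the requested percentile rather than a single number."""
    freq_interval = (0.8, 1.2)  # hypothetical interval-valued frequency factor
    lows, highs = [], []
    for _ in range(n):
        severity = random.lognormvariate(0.0, 0.5)  # sampled variability
        lows.append(freq_interval[0] * severity)
        highs.append(freq_interval[1] * severity)
    lows.sort(); highs.sort()
    k = int(percentile * n)
    return lows[k], highs[k]

lo, hi = mc_risk_interval()
print(lo < hi)  # the percentile risk comes out as an interval, not a point value
```

A full fuzzy-set treatment would sweep over α-cuts of the fuzzy parameter, producing one such interval per membership level.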
Sensing and actuating capabilities of a shape memory polymer composite integrated with hybrid filler
NASA Astrophysics Data System (ADS)
Lu, Haibao; Yu, Kai; Liu, Yanju; Leng, Jinsong
2010-06-01
In this paper, hybrid fillers comprising carbon black (CB) and chopped short carbon fibers (SCF) are integrated into a styrene-based shape memory polymer (SMP) to provide sensing and actuating capabilities. The hybrid filler is expected to transform the insulating SMP into a conducting composite. Static mechanical properties of SMP composites containing various concentrations of the hybrid filler reinforcement are studied first, and it is confirmed both theoretically and experimentally that the mechanical properties improve significantly with increasing SCF content. The excellent electrical properties of this novel type of SMP composite are determined by a four-point-probe method. The sensing properties of the SMP composite filled with 5 wt% CB and 2 wt% SCF are then characterized as functions of temperature and strain. Both experimental results support the use of SMP composites as sensors that respond to changes in temperature or mechanical loads. The actuating capability of SMP composites is also validated and demonstrated: dynamic mechanical analysis reveals that the output strength of the SMP composites improves with increasing SCF content, and the actuating capability is subsequently demonstrated in a series of photographs.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting a blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors …
NASA Astrophysics Data System (ADS)
Verhaegen, F.; Symonds-Tayler, R.; Liu, H. H.; Nahum, A. E.
2000-11-01
In some linear accelerators, the charge collected by the monitor ion chamber is partly caused by backscattered particles from accelerator components downstream from the chamber. This influences the output of the accelerator and also has to be taken into account when output factors are derived from Monte Carlo simulations. In this work, the contribution of backscattered particles to the monitor ion chamber response of a Varian 2100C linac was determined for photon beams (6, 10 MV) and for electron beams (6, 12, 20 MeV). The experimental procedure consisted of charge integration from the target in a photon beam or from the monitor ion chamber in electron beams. The Monte Carlo code EGS4/BEAM was used to study the contribution of backscattered particles to the dose deposited in the monitor ion chamber. Both measurements and simulations showed a linear increase in backscatter fraction with decreasing field size for photon and electron beams. For 6 MV and 10 MV photon beams, a 2-3% increase in backscatter was obtained for a 0.5×0.5 cm2 field compared to a 40×40 cm2 field. The results for the 6 MV beam were slightly higher than for the 10 MV beam. For electron beams (6, 12, 20 MeV), an increase of similar magnitude was obtained from measurements and simulations for 6 MeV electrons. For higher energy electron beams a smaller increase in backscatter fraction was found. The problem is of less importance for electron beams since large variations of field size for a single electron energy usually do not occur.
NASA Astrophysics Data System (ADS)
Patrovsky, Andreas
This thesis deals with a novel type of integrated dielectric waveguide which is synthesized on a planar grounded substrate by perforation of the zones adjacent to a guiding channel in the center. The resulting Substrate Integrated Image Guide (SIIG) not only allows for low-loss guidance of electromagnetic waves in a similar way as the standard image guide, but also meets the requirements of low cost and ease of integration. A first objective was the detailed analysis of the propagation properties of fundamental and higher order modes in this waveguide structure, regarding attenuation, dispersion behavior, bandwidth, leakage effects, and the impact of fabrication tolerances. For this purpose, specifically adapted techniques of analysis are presented, since established methods for the conventional image guide cannot be applied to the more complex periodic SIIG. Commercial electromagnetic full-wave software is used along with a dual-line approach involving a subsequent extraction of the propagation constant from simulated S-parameters. Alternatively, the solution of the eigenmode problem of a single SIIG unit cell also performs the task. Both techniques are in good agreement and provide accurate results, which is supported by measurements on laser-fabricated prototypes. It is shown that the achievable attenuation is much lower than in the standard integrated technologies and that losses mainly depend on the chosen dielectric material. As a consequence, the SIIG is also an attractive technology for applications beyond the mmW band, i.e., in the terahertz range. Design recommendations for the geometric parameters of the SIIG are discussed and a simplified equivalent model with homogeneous dielectric regions is introduced to speed up the design of passive components. Low-loss transitions between dissimilar waveguide structures are indispensable key components for a hybrid integrated platform.
In order to enable the connection of standard measurement equipment in the W-band (75 GHz to 110 GHz), a transition from rectangular waveguide to SIIG was developed. Another transition to either microstrip or CPW is essential to enable coplanar probe measurements and to achieve compatibility with monolithic millimeter wave integrated circuits (MMICs). Microstrip and image guide have very different requirements for the substrate thickness, for which reason efforts were concentrated on a wideband transition between the SIIG and CPW. The designed transition shows good broadband performance and minimal radiation loss. Other transitions from the SIIG to the Substrate Integrated Waveguide (SIW) are also presented in the context of substrate integrated circuits (SICs). The latter technology combines planar transmission lines and originally non-planar waveguide structures that are synthesized in planar form on a common substrate. High alignment precision is a direct consequence, which eliminates the necessity for additional tuning. As an open dielectric waveguide technology with very small transmission loss, the SIIG is particularly suitable for antennas and corresponding feed lines. The similarity of the SIIG with other dielectric waveguides and especially with the image guide suggests a knowledge transfer from known dielectric antennas. A planar SIIG rod antenna was designed and fabricated, as a derivative of the established polyrod antenna. The structural shape is simple and compact, and it provides a medium gain in the range of 10 dBi to 15 dBi. A second developed type, an SIIG traveling-wave linear array antenna, is frequency-steerable through broadside due to special radiation elements. The novel design of a slab-mode antenna forms an endfire beam by a planar lens configuration. In addition, all of those dielectric-based antennas are highly efficient. 
Being synthesized on a planar substrate, the SIIG can be combined in a hybrid way with other waveguide structures on the same substrate in so-called substrate integrated circuits (SICs). It joins the SIW and the Substrate Integrated Non-Radiative Dielectric guide (SINRD) and adds unique features.
Towards integration of PET/MR hybrid imaging into radiation therapy treatment planning
Paulus, Daniel H.; Thorwath, Daniela; Schmidt, Holger; Quick, Harald H.
2014-07-15
Purpose: Multimodality imaging has become an important adjunct of state-of-the-art radiation therapy (RT) treatment planning. Recently, simultaneous PET/MR hybrid imaging has become clinically available and may also contribute to target volume delineation and biological individualization in RT planning. For integration of PET/MR hybrid imaging into RT treatment planning, compatible dedicated RT devices are required for accurate patient positioning. In this study, prototype RT positioning devices intended for PET/MR hybrid imaging are introduced and tested for PET/MR compatibility and image quality. Methods: A prototype flat RT table overlay and two radiofrequency (RF) coil holders that each fix one flexible body matrix RF coil for RT head/neck imaging have been evaluated within this study. MR image quality with the RT head setup was compared to the actual PET/MR setup with a dedicated head RF coil. PET photon attenuation and CT-based attenuation correction (AC) of the hardware components has been quantitatively evaluated by phantom scans. Clinical application of the new RT setup in PET/MR imaging was evaluated in an in vivo study. Results: The RT table overlay and RF coil holders are fully PET/MR compatible. MR phantom and volunteer imaging with the RT head setup revealed high image quality, comparable to images acquired with the dedicated PET/MR head RF coil, albeit with 25% reduced SNR. Repositioning accuracy of the RF coil holders was below 1 mm. PET photon attenuation of the RT table overlay was calculated to be 3.8% and 13.8% for the RF coil holders. With CT-based AC of the devices, the underestimation error was reduced to 0.6% and 0.8%, respectively. Comparable results were found within the patient study. Conclusions: The newly designed RT devices for hybrid PET/MR imaging are PET and MR compatible. The mechanically rigid design and the reproducible positioning allow for straightforward CT-based AC.
The systematic evaluation within this study provides the technical basis for the clinical integration of PET/MR hybrid imaging into RT treatment planning.
Hybrid and heterogeneous photonic integrated circuits for high-performance applications
NASA Astrophysics Data System (ADS)
Heck, Martijn J. R.
2015-02-01
Photonic integration based on silicon, silica, or indium phosphide technologies has reached a level of maturity where it has now become an integral part of telecom and datacom networks. However, although impressive levels of integration and bandwidth have been achieved, the performance of these technologies is relatively low, compared to fiber-optics and discrete bulk optics counterparts. This limits their application in more demanding fields like microwave photonics, e.g., for 4G/5G wireless communications, more advanced complex modulation formats for telecommunications, and highly energy-efficient interconnects. The invention of the ultra-low loss waveguide (ULLW) platform, by me and my co-workers at UC Santa Barbara, heralds a new range of applications for photonic integrated circuits. Fiber-like loss performance, with waveguide propagation losses < 0.1 dB/m, has been realized in waveguides with silicon nitride cores. This performance level represents an order of magnitude lower loss than silica-based waveguides, and 2 - 3 orders of magnitude lower than the silicon-on-insulator and indium phosphide PIC platforms. A combination of the silicon, ULLW, and/or indium phosphide platforms can be made using hybrid or heterogeneous integration techniques. Using "the best of both worlds" approach, improved performance can be achieved. I will discuss the opportunities that these technologies offer for various high-performance applications, such as low-noise lasers and oscillators, high-resolution radars and gyroscopes, and high-bandwidth photonic analog-to-digital converters.
NASA Astrophysics Data System (ADS)
Nagarajan, Adarsh; Shireen, Wajiha
2013-06-01
This paper proposes an approach for integrating Plug-In Hybrid Electric Vehicles (PHEV) into an existing residential photovoltaic system to control and optimize the power consumption of the residential load. Control involves determining the source from which the residential load will be supplied, whereas optimization of the power flow reduces the stress on the grid. The system built to achieve this goal is a combination of the existing residential photovoltaic system, the PHEV, a Power Conditioning Unit (PCU), and a controller. The PCU comprises two DC-DC boost converters and an inverter. This paper emphasizes developing the controller logic and its implementation in order to accommodate the flexibility and benefits of the proposed integrated system. The proposed controller logic has been simulated using MATLAB SIMULINK and further implemented on a Digital Signal Processor (DSP) microcontroller, TMS320F28035, from Texas Instruments.
Integration Issues of Cells into Battery Packs for Plug-in and Hybrid Electric Vehicles: Preprint
Pesaran, A. A.; Kim, G. H.; Keyser, M.
2009-05-01
The main barriers to increased market share of hybrid electric vehicles (HEVs) and commercialization of plug-in HEVs are the cost, safety, and life of lithium ion batteries. Significant effort is being directed to address these issues for lithium ion cells. However, even the best cells may not perform as well when integrated into packs for vehicles because of the environment in which vehicles operate. This paper discusses mechanical, electrical, and thermal integration issues and vehicle interface issues that could impact the cost, life, and safety of the system. It also compares the advantages and disadvantages of using many small cells versus a few large cells and using prismatic cells versus cylindrical cells.
Sharma, Diksha; Badano, Aldo
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
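The root-mean-square difference used above to compare simulated and experimental results is straightforward to compute. A minimal sketch follows, with made-up sample MTF values standing in for the published curves:

```python
import math

def rms_difference(simulated, measured):
    """Root-mean-square difference between two sampled curves,
    e.g. simulated vs measured MTF values at matching spatial frequencies."""
    if len(simulated) != len(measured):
        raise ValueError("curves must be sampled at the same frequencies")
    return math.sqrt(
        sum((s - m) ** 2 for s, m in zip(simulated, measured)) / len(simulated)
    )

# Hypothetical MTF samples at four spatial frequencies (not the paper's data)
mtf_sim  = [1.00, 0.80, 0.55, 0.30]
mtf_meas = [1.00, 0.78, 0.57, 0.33]
print(round(rms_difference(mtf_sim, mtf_meas), 4))  # 0.0206
```

A smaller RMS difference between simulated and measured MTF indicates a better match, which is the comparison criterion used in the study.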
Lee, C; Badal, A
2014-06-15
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may introduce dosimetric errors compared to the real anatomy, which is composed of smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle-mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into penMesh, a Monte Carlo transport code that supports triangle meshes. We calculated the energy deposition in selected organs of interest for parallel photon beams at three energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as the voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The numbers of photon histories processed per second in voxels were 6.4×10^4, 3.3×10^4, and 1.3×10^4 for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10^4 histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer in the gastrointestinal tract), which cannot be modeled by voxel phantoms.
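The resolution-dependent errors quoted above are relative differences with the mesh-based dose as reference. A minimal sketch, assuming (as an illustration only) the mesh value is normalized to 1:

```python
def percent_difference(voxel, mesh):
    """Relative difference of voxel-based vs mesh-based organ energy
    deposition, expressed as a percentage of the mesh (reference) value."""
    return 100.0 * abs(voxel - mesh) / mesh

# Illustrative check against the abstract's muscle figure at 1 MeV:
# a voxel dose 11.5% above a mesh dose normalized to 1 gives 11.5%.
print(round(percent_difference(1.115, 1.0), 1))  # 11.5
```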
Geng, Changran; Tang, Xiaobin; Qian, Wei; Guan, Fada; Johns, Jesse; Yu, Haiyan; Gong, Chunhui; Shu, Diyun; Chen, Da
2015-09-01
The S values for the thyroid as the radioiodine source organ to other target organs were investigated using Chinese hybrid reference phantoms and the Monte Carlo code MCNP5. Two radioiodine isotopes, (125)I and (131)I, uniformly distributed in the thyroid were investigated separately. We compared our S values for (131)I in Chinese phantoms with previous studies using other types of phantoms: Oak Ridge National Laboratory (ORNL) stylized phantoms, International Commission on Radiological Protection (ICRP) voxel phantoms, and University of Florida (UF) phantoms. Our results are much closer to those of the UF phantoms. For each specific target organ, the S value for (131)I is larger than that for (125)I in both male and female phantoms. In addition, the S values and effective dose to surrounding face-to-face exposed individuals of different genders and ages (10- and 15-year-old juniors, and adults) from an adult male radioiodine carrier were also investigated. The target organ S values and effective dose for surrounding individuals obey the inverse square law with the distance between the source and target phantoms. The effective dose data obtained in Chinese phantoms are comparable to the results of a previous study using the UF phantoms. The data generated in this study can serve as a reference for recommendations on radiation protection of Chinese patients and nuclear workers. PMID:26344387
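The inverse-square distance dependence reported above can be applied directly when rescaling a dose quantity from one separation distance to another. A minimal sketch with hypothetical numbers (not values from the study):

```python
def scale_by_inverse_square(value_ref, r_ref, r):
    """Rescale a dose quantity (e.g. an S value or effective dose) from a
    reference distance r_ref to a new distance r, assuming the inverse-square
    law reported in the abstract holds over this range."""
    return value_ref * (r_ref / r) ** 2

# Hypothetical example: a quantity of 1.0 (arbitrary units) at 1 m
# drops to a quarter of that value at 2 m.
print(scale_by_inverse_square(1.0, 1.0, 2.0))  # 0.25
```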
Hybrid integrated biological-solid-state system powered with adenosine triphosphate.
Roseman, Jared M; Lin, Jianxun; Ramakrishnan, Siddharth; Rosenstein, Jacob K; Shepard, Kenneth L
2015-01-01
There is enormous potential in combining the capabilities of the biological and the solid state to create hybrid engineered systems. While there have been recent efforts to harness power from naturally occurring potentials in living systems in plants and animals to power complementary metal-oxide-semiconductor integrated circuits, here we report the first successful effort to isolate the energetics of an electrogenic ion pump in an engineered in vitro environment to power such an artificial system. An integrated circuit is powered by adenosine triphosphate through the action of Na(+)/K(+) adenosine triphosphatases in an integrated in vitro lipid bilayer membrane. The ion pumps (active in the membrane at numbers exceeding 2 × 10(6) mm(-2)) are able to sustain a short-circuit current of 32.6 pA mm(-2) and an open-circuit voltage of 78 mV, providing for a maximum power transfer of 1.27 pW mm(-2) from a single bilayer. Two series-stacked bilayers provide a voltage sufficient to operate an integrated circuit with a conversion efficiency of chemical to electrical energy of 14.9%. PMID:26638983
NASA Astrophysics Data System (ADS)
Tramonto, F.; Salvestrini, P.; Nava, M.; Galli, D. E.
2015-07-01
By means of the Path Integral Monte Carlo method, we have performed a detailed microscopic study of 4He nanodroplets doped with an argon ion, Ar+, at K. We have computed density profiles, energies, dissociation energies, and characterized the local order around the ion for nanodroplets with a number of 4He atoms ranging from 10 to 64 and also 128. We have found the formation of a stable solid structure around the ion, a "snowball", consisting of three concentric shells in which the 4He atoms are placed at the vertices of platonic solids: the first inner shell is an icosahedron (12 atoms); the second one is a dodecahedron with 20 atoms placed on the faces of the icosahedron of the first shell; the third shell is again an icosahedron composed of 12 atoms placed on the faces of the dodecahedron of the second shell. The "magic numbers" implied by this structure, 12, 32, and 44 helium atoms, have been observed in a recent experimental study (Bartl et al., J Phys Chem A 118:8050, 2014) of these complexes; the dissociation energy curve computed in the present work shows jumps in correspondence with those found in the nanodroplet abundance distribution measured in that experiment, strengthening the agreement between theory and experiment. The same structures were predicted in Galli et al. (J Phys Chem A 115:7300, 2011) in a study regarding Na+@4He when ; a comparison between Ar+@4He and Na+@4He complexes is also presented.
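The "magic numbers" cited in the abstract above follow directly from the cumulative shell occupancies of the snowball (12, 12+20, 12+20+12):

```python
from itertools import accumulate

# Shell occupancies of the three-shell snowball described in the abstract:
# inner icosahedron (12), dodecahedron (20), outer icosahedron (12).
shells = [12, 20, 12]
magic_numbers = list(accumulate(shells))  # running totals of closed shells
print(magic_numbers)  # [12, 32, 44]
```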
Path integral Monte Carlo study of (H2)n@C70 (n = 1,2,3)
NASA Astrophysics Data System (ADS)
Hao, Yan; Zhang, Hong; Cheng, Xin-Lu
2015-08-01
The path integral Monte Carlo (PIMC) method is employed to study the thermal properties of C70 with one, two, and three H2 molecules confined in the cage, respectively. The interaction energies and vibrationally averaged spatial distributions under different temperatures are calculated to evaluate the stabilities of (H2)n@C70 (n = 1, 2, 3). The results show that (H2)2@C70 is more stable than H2@C70. The interaction energy slowly changes in a large temperature range, so temperature has little effect on the stability of the system. For H2@C70 and (H2)2@C70, the interaction energies keep negative; however, when three H2 molecules are in the cage, the interaction energy rapidly increases to a positive value. This implies that at most two H2 molecules can be trapped by C70. With an increase of temperature, the peak of the spatial distribution gradually shifts away from the center of the cage, but the maximum distance from the center of H2 molecule to the cage center is much smaller than the average radius of C70. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474207 and 11374217).
NASA Astrophysics Data System (ADS)
Otaki, Hiroki; Ando, Koji
2015-01-01
The dielectric properties of proton(H)-deuteron(D) mixed crystals of the hydrogen-bonded material 5-Bromo-9-hydroxyphenalenone are studied using a novel path integral Monte Carlo (PIMC) method that takes account of the dipole induction effect depending on the relative proton configurations in the surrounding molecules. The induced dipole is evaluated using the fragment molecular orbital method with electron correlation included by second-order Møller-Plesset perturbation theory and long-range corrected density functional theory. The results show a greater influence of C-H⋯O intermolecular weak hydrogen bonding on the induction than for results evaluated with the Hartree-Fock method. The induction correction is incorporated into the PIMC simulations with a model Hamiltonian that consists of long-range dipolar interactions and a transverse term describing proton tunneling. The relationship between the calculated phase transition temperature and H/D mixing ratio is consistent with the experimental phase diagram, indicating that the balance between the proton tunneling and the collective ordering is appropriately described.
NASA Astrophysics Data System (ADS)
Dornheim, Tobias; Groth, Simon; Filinov, Alexey; Bonitz, Michael
2015-07-01
Correlated fermions are of high interest in condensed matter (Fermi liquids, Wigner molecules), cold atomic gases and dense plasmas. Here we propose a novel approach to path integral Monte Carlo (PIMC) simulations of strongly degenerate non-ideal fermions at finite temperature by combining a fourth-order factorization of the density matrix with antisymmetric propagators, i.e., determinants, between all imaginary time slices. To efficiently run through the modified configuration space, we introduce a modification of the widely used continuous space worm algorithm, which allows for an efficient sampling at arbitrary system parameters. We demonstrate how the application of determinants achieves an effective blocking of permutations with opposite signs, leading to a significant relief of the fermion sign problem. To benchmark the capability of our method regarding the simulation of degenerate fermions, we consider multiple electrons in a quantum dot and compare our results with other ab initio techniques, where they are available. The present permutation blocking PIMC approach allows us to obtain accurate results even for N = 20 electrons at low temperature and arbitrary coupling, where no other ab initio results have been reported so far.
NASA Astrophysics Data System (ADS)
Cuccoli, Alessandro; Macchi, Alessandro; Pedrolli, Gaia; Tognetti, Valerio; Vaia, Ruggero
1995-05-01
We consider the problem of the extrapolation of path-integral Monte Carlo (PIMC) data to infinite Trotter number P. Finite-P data, being even functions of P^-1, have high-P dependence that is generally well described by a quadratic fit, a0 + a1P^-2, where a0 is the exact quantum value. However, in order to get convergence it is often necessary to run PIMC codes with rather high P values, which implies long computer times and larger statistical errors of the data. It is well known that even for harmonic systems the finite-P data are not exact; nevertheless, they can be easily calculated by Gaussian quadrature. Starting from this observation, we suggest an easy way to correct PIMC data for anharmonic systems in order to take into account the harmonic part exactly, with strong improvement of the extrapolation to P = ∞. Lower Trotter numbers are thus required, with the advantages of computer-time saving and much better accuracy of the extrapolated values, without any change in the PIMC code. In order to demonstrate the effectiveness of the approach, we report finite-P data processing for a single anharmonic particle, whose finite-P data are obtained by the matrix-squaring method, and for a chain of atoms with Morse interaction.
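The quadratic extrapolation described above can be sketched numerically. The following is a minimal illustration with synthetic finite-P data; the values of a0 and a1 are invented for the example and are not PIMC output:

```python
import numpy as np

# Sketch of the Trotter-number extrapolation described above: fit
# E(P) = a0 + a1 * P**-2 and read off a0 as the P -> infinity estimate.
P = np.array([4, 6, 8, 12, 16, 24, 32], dtype=float)
a0_true, a1_true = -1.25, 3.0        # hypothetical exact value and slope
E = a0_true + a1_true / P**2         # synthetic finite-P energies

# Linear least squares in the variable x = P**-2
A = np.vstack([np.ones_like(P), P**-2]).T
(a0_fit, a1_fit), *_ = np.linalg.lstsq(A, E, rcond=None)

print(f"extrapolated a0 = {a0_fit:.4f} (exact {a0_true})")
```

In practice the fitted a0 would carry the statistical errors of the finite-P data, which is why the harmonic correction proposed in the abstract, by reducing the required P, also improves the accuracy of the extrapolated value.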
NASA Astrophysics Data System (ADS)
Lindsay, A.; McCloskey, J.; Nalbant, S. S.; Simao, N.; Murphy, S.; NicBhloscaidh, M.; Steacy, S.
2013-12-01
Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on locked sections of an active fault is stored as potential slip. Where this potential slip remains unreleased during earthquakes, a slip deficit can be said to have accrued. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip and indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated instrumentally. To develop the idea of long-term slip-deficit modelling it is necessary to constrain the size and distribution of slip for pre-instrumental events dating back hundreds of years covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of producing high resolution reconstructions of slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them, allowing them to act as long-term geodetic recorders. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Instead of producing one definite model satisfying the observed coral displacements, a Monte Carlo Slip Estimator based on a Genetic Algorithm (MCSE-GA), which accelerates the rate of convergence, is used to identify a suite of models consistent with the data. Successive iterations of the MCSE-GA sample different displacements at each coral location, from within the spread of associated uncertainties, producing a catalog of models from the full range of possibilities.
The best slip distributions in the suite are weighted according to their fitness and stacked to produce a final estimate of the distribution of slip for a particular earthquake. Examination of the slip values in the stacked models allows areas of high confidence to be identified where the standard deviation is low. Similarly, areas of low confidence will be found where standard deviations are high. These high resolution models can be used to reconstruct a history of slip along the fault, both identifying and quantifying slip deficits and constraining confidence in the accuracy of the modelled information. This presentation will demonstrate the ability of the MCSE-GA to produce accurate models of slip for instrumentally recorded earthquakes and show estimates for slip during paleoearthquakes along the Sunda Megathrust.
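The genetic-algorithm idea behind the slip estimation described above can be sketched as follows. This is a toy example assuming a linear forward model with invented Green's functions and noise-free synthetic "coral" displacements; none of the numbers come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: displacement at each observation site is a linear
# combination of slip on two fault patches. G is invented for illustration.
G = np.array([[1.0, 0.3], [0.4, 0.9], [0.7, 0.6]])
slip_true = np.array([2.0, 1.0])
obs = G @ slip_true                      # synthetic observed displacements

def fitness(slip):
    # Negative misfit to the observations: higher is better.
    return -np.sum((G @ slip - obs) ** 2)

# Minimal genetic algorithm: keep the fitter half, mutate it with Gaussian
# perturbations, repeat. Keeping the parents provides elitism.
pop = rng.uniform(0.0, 4.0, size=(40, 2))
for _ in range(200):
    scores = np.array([fitness(s) for s in pop])
    parents = pop[np.argsort(scores)[-20:]]
    children = parents + rng.normal(0.0, 0.05, parents.shape)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best-fit slip:", best)
```

The MCSE-GA described in the abstract additionally resamples the observations within their uncertainties on each iteration, so that the output is a catalog of consistent models rather than a single best fit.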
NASA Technical Reports Server (NTRS)
Combi, Michael R.
2004-01-01
In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites and their interactions with their outer particles and fields environs, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamics models and their direct use in data analysis. He has also worked in ground-based and space-based remote observational work and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year.
The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma that will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the inner comae of comets (grant NAGS-13239 from the Planetary Atmospheres program).
Automatic on-chip RNA-DNA hybridization assay with integrated phase change microvalves
NASA Astrophysics Data System (ADS)
Weng, Xuan; Jiang, Hai; Wang, Junsheng; Chen, Shu; Cao, Honghe; Li, Dongqing
2012-07-01
An RNA-DNA hybridization assay microfluidic chip integrated with electrothermally actuated phase change microvalves for detecting pathogenic bacteria is presented in this paper. In order to realize the sequential loading and washing processes required in such an assay, gravity-based pressure-driven flow and phase-change microvalves were used in the microfluidic chip. Paraffin wax was used as the phase change material in the valves and thin film heaters were used to electrothermally actuate the microvalves. Light absorption was measured by a photodetector to determine the concentrations of the samples. The automatic control of the complete assay was implemented by a self-coded LabVIEW program. To examine the performance of this chip, Salmonella was used as a sample pathogen. A significant reduction in reagent/sample consumption (up to 20-fold) was achieved by this on-chip assay, compared with using the commercial test kit following the same protocol in conventional labs. The experimental results show that the quantitative detection can be obtained in approximately 26 min, and the detection limit is as low as 10^3 CFU ml^-1. This RNA-DNA hybridization assay microfluidic chip shows an excellent potential in the development of a portable device for point-of-testing applications.
NASA Astrophysics Data System (ADS)
Santato, Clara
2015-10-01
The boom in multifunctional, flexible, and portable electronics and the increasing need for low energy cost and autonomy in applications ranging from wireless sensor networks for smart environments to biomedical applications are triggering research efforts towards the development of self-powered sustainable electronic devices. Within this context, the coupling of electronic devices (e.g. sensors, transistors) with small-size energy storage systems (e.g. micro-batteries or micro-supercapacitors) is actively pursued. Micro-electrochemical supercapacitors are attracting much attention in electronics for their capability of delivering short power pulses with high stability over repeated charge/discharge cycling. For their high specific pseudocapacitance, electronically conducting polymers are well known as positive materials for hybrid supercapacitors featuring high-surface-area carbon negative electrodes. The processability of both the polymer and the carbon is of great relevance for the development of flexible miniaturised devices. Electronically conducting polymers are also well known to feature an electronic conductivity that depends on their oxidation (p-doped) state and that is modulated by the polymer potential. This property and the related pseudocapacitive response make polymers very attractive channel materials for electrolyte-gated (EG) transistors. Here, we propose the novel concept of a "Trans-capacitor", an integrated device that exhibits the storage properties of a polymer/carbon hybrid supercapacitor and the low-voltage operation of an electrolyte-gated transistor.
Bylesjö, Max; Nilsson, Robert; Srivastava, Vaibhav; Grönlund, Andreas; Johansson, Annika I; Jansson, Steffan; Karlsson, Jan; Moritz, Thomas; Wingsle, Gunnar; Trygg, Johan
2009-01-01
Tree biotechnology will soon reach a mature state where it will influence the overall supply of fiber, energy and wood products. We are now ready to make the transition from identifying candidate genes, controlling important biological processes, to discovering the detailed molecular function of these genes on a broader, more holistic, systems biology level. In this paper, a strategy is outlined for informative data generation and integrated modeling of systematic changes in transcript, protein and metabolite profiles measured from hybrid aspen samples. The aim is to study characteristics of common changes in relation to genotype-specific perturbations affecting the lignin biosynthesis and growth. We show that a considerable part of the systematic effects in the system can be tracked across all platforms and that the approach has a high potential value in functional characterization of candidate genes. PMID:19053836
Celik, Metin
2009-03-01
The International Safety Management (ISM) Code defines a broad framework for the safe management and operation of merchant ships, maintaining high standards of safety and environmental protection. On the other hand, ISO 14001:2004 provides a generic, worldwide environmental management standard that has been utilized by several industries. Both the ISM Code and ISO 14001:2004 have the practical goal of establishing a sustainable Integrated Environmental Management System (IEMS) for shipping businesses. This paper presents a hybrid design methodology that shows how requirements from both standards can be combined into a single execution scheme. Specifically, the Analytic Hierarchy Process (AHP) and Fuzzy Axiomatic Design (FAD) are used to structure an IEMS for ship management companies. This research provides decision aid to maritime executives in order to enhance the environmental performance in the shipping industry. PMID:19038488
Yoo, S. J. Ben
IEEE Photonics Technology Letters, Vol. 22, No. 24, December 15, 2010, p. 1793. This letter demonstrates a 1-GHz hybrid mode-locked monolithic semiconductor laser fabricated on indium phosphide. To the authors' knowledge, this is the lowest reported repetition rate for a monolithically integrated mode-locked laser.
Srivastava, Anurag K.; Annabathina, Bharath; Kamalasadan, Sukumar
2010-04-15
Plug-in hybrid electric vehicles (PHEVs) may be prime candidates for the next generation of vehicles, but they present several technological and economic challenges. This article assesses current progress in PHEV technology, market trends, research needs, challenges ahead, and policy options for integrating PHEVs into the electric grid. (author)
Clark, Michael A.; Joo, Balint; Kennedy, Anthony D.; Silva, Paolo J.
2011-10-01
We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but exactly conserve a nearby shadow Hamiltonian H~. This property allows for a new tuning method for the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to significantly reduce the computational cost of future large-scale gauge field ensemble generation.
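As a minimal illustration of the molecular dynamics step being discussed, the following sketches standard leapfrog HMC for a 1-D Gaussian target. The step size and trajectory length are illustrative choices, and the shadow-Hamiltonian tuning itself is not implemented here; the point is only that the integrator conserves H approximately, so a Metropolis test on the energy error is needed:

```python
import numpy as np

# Leapfrog integrator for H(q, p) = p^2/2 + U(q); symplectic, and hence
# exactly conserves a shadow Hamiltonian close to H.
def leapfrog(q, p, eps, n_steps, grad_U):
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

grad_U = lambda q: q                           # U(q) = q^2 / 2 (1-D Gaussian)
H = lambda q, p: 0.5 * p**2 + 0.5 * q**2

rng = np.random.default_rng(1)
q, accepted, n_iter = 1.0, 0, 5000
for _ in range(n_iter):
    p = rng.normal()                           # refresh momentum
    q_new, p_new = leapfrog(q, p, eps=0.2, n_steps=10, grad_U=grad_U)
    # Metropolis test on the energy error of the approximate integrator
    if rng.uniform() < np.exp(H(q, p) - H(q_new, p_new)):
        q, accepted = q_new, accepted + 1

print(f"acceptance rate: {accepted / n_iter:.2f}")
```

Because the energy error of a symplectic integrator stays bounded along the trajectory, the acceptance rate here remains close to one; the force-gradient integrators of the abstract push this further by cancelling higher-order error terms.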
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.
2015-11-01
Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol^-1, respectively.
Electric-drive tractability indicator integrated in hybrid electric vehicle tachometer
Tamai, Goro; Zhou, Jing; Weslati, Feisel
2014-09-02
An indicator, system and method of indicating electric drive usability in a hybrid electric vehicle. A tachometer is used that includes a display having an all-electric drive portion and a hybrid drive portion. The all-electric drive portion and the hybrid drive portion share a first boundary which indicates a minimum electric drive usability and a beginning of hybrid drive operation of the vehicle. The indicated level of electric drive usability is derived from at least one of a percent battery discharge, a percent maximum torque provided by the electric drive, and a percent electric drive to hybrid drive operating cost for the hybrid electric vehicle.
NASA Astrophysics Data System (ADS)
Esler, Kenneth Paul
Path integral Monte Carlo (PIMC) is a quantum-level simulation method based on a stochastic sampling of the many-body thermal density matrix. Utilizing the imaginary-time formulation of Feynman's sum-over-histories, it includes thermal fluctuations and particle correlations in a natural way. Over the past two decades, PIMC has been applied to the study of the electron gas, hydrogen under extreme pressure, and superfluid helium with great success. However, the computational demand scales with a high power of the atomic number, preventing its application to systems containing heavier elements. In this dissertation, we present the methodological developments necessary to apply this powerful tool to these systems. We begin by introducing the PIMC method. We then explain how effective potentials with position-dependent electron masses can be used to significantly reduce the computational demand of the method for heavier elements, while retaining high accuracy. We explain how these pseudohamiltonians can be integrated into the PIMC simulation by computing the density matrix for the electron-ion pair. We then address the difficulties associated with the long-range behavior of the Coulomb potential, and improve a method to optimally partition particle interactions into real-space and reciprocal-space summations. We discuss the use of twist-averaged boundary conditions to reduce the finite-size effects in our simulations and the fixed-phase method needed to enforce the boundary conditions. Finally, we explain how a PIMC simulation of the electrons can be coupled to a classical Langevin dynamics simulation of the ions to achieve an efficient sampling of all degrees of freedom. After describing these advancements in methodology, we apply our new technology to fluid sodium near its liquid-vapor critical point.
In particular, we explore the microscopic mechanisms which drive the continuous change from a dense metallic liquid to an expanded insulating vapor above the critical temperature. We show that the dynamic aggregation and dissociation of clusters of atoms play a significant role in determining the conductivity and that the formation of these clusters is highly density and temperature dependent. Finally, we suggest several avenues for research to further improve our simulations.
First application close measurements applying the new hybrid integrated MEMS spectrometer
NASA Astrophysics Data System (ADS)
Grüger, Heinrich; Pügner, Tino; Knobbe, Jens; Schenk, Harald
2013-05-01
Grating spectrometers have been designed in many different configurations. Now potential high volume applications ask for extremely miniaturized and low cost systems. By the use of integrated MEMS (micro electro mechanical systems) scanning grating devices, a less expensive single detector can be used in the NIR instead of the array detectors required for fixed grating systems. A design for a hybrid integrated MEMS scanning grating spectrometer has now been completed. The MEMS device was fabricated in Fraunhofer IPMS's own clean-room facility. This chip is mounted on a small circuit board together with the detector and then stacked with spacer and mirror substrate; the spectrometer is thus realized by stacking several planar substrates using sophisticated mounting technologies. The spectrometer has been designed for the 950-1900 nm spectral range and 9 nm spectral resolution with organic matter analysis in mind. First applications are considered in food quality analysis and food processing technology. As an example of the use of a spectrometer with this performance, the grilling of a steak was analyzed. Similar measurements would be possible on dairy products, vegetables or fruit. The idea is a mobile spectrometer for in situ and on-site analysis applications, in or attached to a host system providing processing, data access and input-output capabilities, regardless of whether that host is a laptop, tablet, smart phone or embedded platform.
NASA Astrophysics Data System (ADS)
An, Yu Jing
With increasing device integration and miniaturization, it is desirable to grow Al-Ga-N optoelectronic devices on inexpensive, large size Si wafers. The latter enables seamless integration of optical components with conventional electronics. However, Si has large lattice and thermal expansion mismatches with group-III nitrides, and absorbs visible and UV light emitted by active nitride layers. To circumvent these difficulties, unique hybrid substrates were developed based on HfxZr1-xB2(0001) buffered Si(111) including on-axis and miscut geometries. The work described in this dissertation focuses on epitaxial synthesis, characterization, and theoretical description of strain, thermoelastic behavior, and electronic structure of thick ZrB2 films and associated heterostructures including Si/ZrB2/HfxZr1-xB2, Si/ZrB2/HfB2 and Si/HfxZr1-xB2. Optical quality ZrB2 films up to 500 nm thick were obtained via reactions of carefully tuned Zr(BH4)/H2 admixtures using gas source molecular beam epitaxy (GS-MBE). A residual tensile strain persisted in these films, independent of thickness, and it vanished at the growth temperature of 900°C. Comparison of the lattice mismatch between sapphire (Al2O3), silicon carbide (SiC), and bulk ZrB2 substrates with GaN films over 20-900°C illustrated superior structural and thermal characteristics of the boride templates. Measurements and density functional theory (DFT) simulations of the boride dielectric function and reflectivity indicated metallic Drude behavior across the IR range. At higher energies (2-7 eV) additional spectral features were identified to be interband transitions. The ZrB2 films were used as strain-compensating buffers to fabricate HfxZr1-xB2 including HfB2.
Ellipsometry indicated that the band structure and reflectivity evolved smoothly from ZrB2 to HfB2, paving the way for the fabrication of optimized hybrid substrates, enabling large scale nitride integration with Si technologies via simultaneous optical and strain engineering. The HfxZr1-xB2/Si technology was utilized to grow AlxGa1-xN via displacement reactions of D2GaN3 vapors and Al atoms at unprecedented low temperatures (650-700°C), compatible with Si processing. The films exhibited strong cathodoluminescence with narrow peak widths comparable to those observed in MOCVD samples grown at 1100°C. The formation of GaN was investigated theoretically using first principle simulations.
ERIC Educational Resources Information Center
Rodriguez-Keyes, Elizabeth; Schneider, Dana A.
2013-01-01
This study illustrates an experience of implementing a hybrid model for teaching human behavior in the social environment in an urban university setting. Developing a hybrid model in a BSW program arose out of a desire to reach students in a different way. Designed to promote curiosity and active learning, this particular hybrid model has students…
Advanced Hybrid Spacesuit Concept Featuring Integrated Open Loop and Closed Loop Ventilation Systems
NASA Technical Reports Server (NTRS)
Daniel, Brian A.; Fitzpatrick, Garret R.; Gohmert, Dustin M.; Ybarra, Rick M.; Dub, Mark O.
2013-01-01
A document discusses the design and prototype of an advanced spacesuit concept that integrates the capability to function seamlessly with multiple ventilation system approaches. Traditionally, spacesuits are designed to operate both dependently and independently of a host vehicle environment control and life support system (ECLSS). Spacesuits that operate independent of vehicle-provided ECLSS services must do so with equipment self-contained within or on the spacesuit. Suits that are dependent on vehicle-provided consumables must remain physically connected to and integrated with the vehicle to operate properly. This innovation is the design and prototype of a hybrid spacesuit approach that configures the spacesuit to seamlessly interface and integrate with either type of vehicular systems, while still maintaining the ability to function completely independent of the vehicle. An existing Advanced Crew Escape Suit (ACES) was utilized as the platform from which to develop the innovation. The ACES was retrofitted with selected components and one-off items to achieve the objective. The ventilation system concept was developed and prototyped/retrofitted to an existing ACES. Components were selected to provide suit connectors, hoses/umbilicals, internal breathing system ducting/conduits, etc. The concept utilizes a low-pressure-drop, high-flow ventilation system that serves as a conduit from the vehicle supply into the suit, up through a neck seal, into the breathing helmet cavity, back down through the neck seal, out of the suit, and returned to the vehicle. The concept also utilizes a modified demand-based breathing system configured to function seamlessly with the low-pressure-drop closed-loop ventilation system.
NASA Astrophysics Data System (ADS)
Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.
2014-06-01
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
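The phase-filter idea described above (a particle scores a tally only if it passes every filter) can be sketched as predicate composition; the filter functions and particle fields below are hypothetical illustrations, not MC21's actual interface.

```python
# Hypothetical sketch of phase-filtered tallying (not MC21's real API):
# a particle contributes to a tally only if every filter predicate passes.

def score(tally, particle, filters):
    """Accumulate the particle's weight into the tally if it passes all filters."""
    if all(f(particle) for f in filters):
        tally["weight_sum"] += particle["weight"]
        tally["count"] += 1

tally = {"weight_sum": 0.0, "count": 0}
filters = [
    lambda p: 1e3 <= p["energy_eV"] <= 2e7,   # energy window filter
    lambda p: p["region"] == "core",          # spatial region-of-interest filter
]
for p in [
    {"energy_eV": 1e6, "region": "core", "weight": 1.0},
    {"energy_eV": 5e2, "region": "core", "weight": 1.0},    # fails energy cut
    {"energy_eV": 1e6, "region": "shield", "weight": 1.0},  # fails region cut
]:
    score(tally, p, filters)
```

With this structure, adding a new edit region or energy window is just appending another predicate, which is how such tally systems stay efficient as the number of edit regions grows.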
Integration and optimization of the gas removal system for hybrid-cycle OTEC power plants
Rabas, T.J.; Panchal, C.B.; Stevens, H.C.
1990-02-01
A preliminary design of the noncondensible gas removal system for a 10 MWe, land-based hybrid-cycle OTEC power plant has been developed and is presented herein. This gas removal system is very different from that used for conventional power plants because of the substantially larger and continuous noncondensible gas flow rates and lower condenser pressure levels, which predicate the need for higher-efficiency components. Previous OTEC studies discussed the need for multiple high-efficiency compressors with intercoolers; however, no previous design effort was devoted to the details of the intercoolers, the integration and optimization of the intercoolers with the compressors, or the practical design constraints and feasibility issues of these components. The resulting gas removal system design uses centrifugal (radial) compressors with matrix-type crossflow aluminum heat exchangers as intercoolers. Once-through boiling of ammonia is used as the heat sink for the cooling and condensing of the steam-gas mixture. A computerized calculation method was developed for the performance analysis and subsystem optimization. For a specific number of compressor units and stream arrangement, the method is used to calculate the dimensions, speeds, power requirements, and costs of all the components.
Zhang, Xingyu; Subbaraman, Harish; Wang, Shiyi; Zhan, Qiwen; Luo, Jingdong; Jen, Alex K -Y; Chung, Chi-jui; Yan, Hai; Pan, Zeyu; Nelson, Robert L; Lee, Charles Y -C; Chen, Ray T
2015-01-01
In this work, we design, fabricate and characterize a compact, broadband and highly sensitive integrated photonic electromagnetic field sensor based on a silicon-organic hybrid modulator driven by a bowtie antenna. The large electro-optic (EO) coefficient of the organic polymer, the slow-light effects in the silicon slot photonic crystal waveguide (PCW), and the broadband field enhancement provided by the bowtie antenna are all combined to enhance the interaction of microwaves and optical waves, enabling a high EO modulation efficiency and thus a high sensitivity. The modulator is experimentally demonstrated with a record-high effective in-device EO modulation efficiency of r33 = 1230 pm/V. Modulation response up to 40 GHz is measured, with a 3-dB bandwidth of 11 GHz. The slot PCW has an interaction length of 300 μm, and the bowtie antenna has an area smaller than 1 cm2. The bowtie antenna in the device is experimentally demonstrated to have broadband characteristics with a central resonance frequency of 10 GHz, as we...
Yu, Longhai; Zheng, Jiajiu; Xu, Yang; Dai, Daoxin; He, Sailing
2014-11-25
Graphene is well-known as a two-dimensional sheet of carbon atoms arrayed in a honeycomb structure. It has some unique and fascinating properties, which are useful for realizing many optoelectronic devices and applications, including transistors, photodetectors, solar cells, and modulators. To enhance light-graphene interactions and take advantage of its properties, a promising approach is to combine a graphene sheet with optical waveguides, such as the silicon nanophotonic wires considered in this paper. Here we report local and nonlocal optically induced transparency (OIT) effects in graphene-silicon hybrid nanophotonic integrated circuits. A low-power, continuous-wave laser is used as the pump light, and the power required for producing the OIT effect is as low as ~0.1 mW. The corresponding power density is several orders lower than that needed for the previously reported saturated absorption effect in graphene, which implies a mechanism involving light absorption by the silicon and photocarrier transport through the silicon-graphene junction. The present OIT effect enables low power, all-optical, broadband control and sensing, modulation and switching locally and nonlocally. PMID:25372937
NASA Technical Reports Server (NTRS)
Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.
Yoo, S. J. Ben
Super-Long Cavity, Monolithically Integrated 1-GHz Hybrid Mode-Locked InP Laser for All-…
A 1-GHz hybrid mode-locked monolithic semiconductor laser on an InP platform is demonstrated, based on monolithic integration of a 4.1 cm super-long cavity, with application to optical CDMA networks [1, 2]; there have been few prior studies of monolithically integrated low-frequency mode-locked lasers.
NASA Astrophysics Data System (ADS)
Cramer, S. N.; Roussin, R. W.
1981-11-01
A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 15-2 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared, with good general agreement, with experimental results.
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified at moderate computational cost. PMID:26072868
Hekkala, Evon R; Platt, Steven G; Thorbjarnarson, John B; Rainwater, Thomas R; Tessler, Michael; Cunningham, Seth W; Twomey, Christopher; Amato, George
2015-09-01
The genus Crocodylus comprises 12 currently recognized species, many of which can be difficult to differentiate phenotypically. Interspecific hybridization among crocodiles is known to occur in captivity and has been documented between some species in the wild. The identification of hybrid individuals is of importance for management and monitoring of crocodilians, many of which are Convention on International Trade in Endangered Species (CITES) listed. In this study, both mitochondrial and nuclear DNA markers were evaluated for their use in confirming a suspected hybrid zone between American crocodile (Crocodylus acutus) and Morelet's crocodile (Crocodylus moreletii) populations in southern Belize where individuals and nests exhibiting atypical phenotypic features had previously been observed. Patterns observed in both phenotypic and molecular data indicate possible behavioural and ecological characteristics associated with hybridization events. The results of the combined analyses found that the majority of suspected hybrid samples represent crosses between female C. acutus and male C. moreletii. Phenotypic data could statistically identify hybrids, although morphological overlap between hybrids and C. moreletii reduced reliability of identification based solely on field characters. Ecologically, C. acutus was exclusively found in saline waters, whereas hybrids and C. moreletii were largely absent in these conditions. A hypothesized correlation between unidirectional hybridization and destruction of C. acutus breeding habitats warrants additional research. PMID:26473062
1-D Equilibrium Discrete Diffusion Monte Carlo
Evans, T.; et al.
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
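The hand-off between a Monte Carlo region and a deterministic diffusion region can be caricatured as follows; the geometry, step size, and hand-off boundary are invented for illustration, and this is not the EqDDMC discretization.

```python
import random

def hybrid_walk(n_particles=1000, n_steps=200, handoff_at=0.5,
                step=0.05, seed=1):
    """Toy 1-D illustration of the hand-off idea behind hybrid schemes:
    particles random-walk in a 'transport' region and, on crossing into
    the 'diffusive' region, are transferred to a deterministic tally
    instead of being tracked further particle-by-particle."""
    rng = random.Random(seed)
    transferred = 0
    alive = [0.25] * n_particles        # all particles start in the transport region
    for _ in range(n_steps):
        still_alive = []
        for x in alive:
            x = min(max(x + rng.choice((-step, step)), 0.0), 1.0)
            if x >= handoff_at:
                transferred += 1        # hand off to the diffusion solve
            else:
                still_alive.append(x)
        alive = still_alive
    return len(alive), transferred

survivors, transferred = hybrid_walk()
```

The particle count is conserved across the hand-off (survivors plus transferred equals the initial population), which mirrors the bookkeeping a real hybrid transport/diffusion coupling must maintain.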
NASA Astrophysics Data System (ADS)
Luo, Xianshu; Cao, Yulian; Song, Junfeng; Hu, Xiaonan; Cheng, Yungbing; Li, Chengming; Liu, Chongyang; Liow, Tsung-Yang; Yu, Mingbin; Wang, Hong; Wang, Qijie; Lo, Patrick Guo-Qiang
2015-04-01
Integrated optical light sources on silicon are one of the key building blocks for optical interconnect technology. Great research efforts have been devoted worldwide to exploring various approaches to integrating optical light sources onto the silicon substrate. The achievements so far include the successful demonstration of III/V-on-Si hybrid lasers through bonding of III/V gain material to silicon wafers. However, for potential large-scale integration leveraging mature silicon complementary metal oxide semiconductor (CMOS) fabrication technology and infrastructure, a more effective bonding scheme with high bonding yield is in great demand to meet manufacturing needs. In this paper, we propose and demonstrate a high-throughput multiple dies-to-wafer (D2W) bonding technology, which is then applied to the demonstration of hybrid silicon lasers. By temporarily bonding III/V dies to a handle silicon wafer for simultaneous batch processing, virtually unlimited numbers of III/V dies can be bonded to the silicon device wafer with high yield. As proof of concept, bonding of more than 100 III/V dies to a 200 mm silicon wafer is demonstrated. The high performance of the bonding interface is examined with various characterization techniques. Repeatable demonstrations of 16-III/V-die bonding to pre-patterned 200 mm silicon wafers have been performed for various hybrid silicon lasers, yielding a device library that includes Fabry-Perot (FP) lasers, lateral-coupled distributed feedback (LC-DFB) lasers with sidewall gratings, and mode-locked lasers (MLL). From these results, the presented multiple D2W bonding technology can be a key enabler towards the large-scale heterogeneous integration of optoelectronic integrated circuits (H-OEIC).
Hybrid materials science: a promised land for the integrative design of multifunctional materials
NASA Astrophysics Data System (ADS)
Nicole, Lionel; Laberty-Robert, Christel; Rozes, Laurence; Sanchez, Clément
2014-05-01
For more than 5000 years, organic-inorganic composite materials created by humans via skill and serendipity have been part of human culture and customs. The concept of "hybrid organic-inorganic" nanocomposites exploded in the second half of the 20th century with the expansion of the so-called "chimie douce", which led to many collaborations between a large set of chemists, physicists and biologists. Consequently, the scientific melting pot of these very different scientific communities created a new pluridisciplinary school of thought. Today, the tremendous effort of basic research performed in the last twenty years allows the preparation of tailor-made multifunctional hybrid materials with perfect control over composition, structure and shape. Some of these hybrid materials have already entered the industrial market. Many tailor-made multiscale hybrids are increasingly impacting numerous fields of application: optics, catalysis, energy, environment, nanomedicine, etc. In the present feature article, we emphasize several fundamental and applied aspects of the hybrid materials field: bioreplication, mesostructured thin films, Lego-like chemistry designed hybrid nanocomposites, and advanced hybrid materials for energy. Finally, a few commercial applications of hybrid materials will be presented.
Integrating Quality Matters into Hybrid Course Design: A Principles of Marketing Case Study
ERIC Educational Resources Information Center
Young, Mark R.
2014-01-01
Previous research supports the idea that the success of hybrid or online delivery modes is more a function of course design than delivery media. This article describes a case study of a hybrid Principles of Marketing course that implemented a comprehensive redesign based on design principles espoused by the Quality Matters Program, a center for…
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663
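The kernel that such Monte Carlo light-transport codes parallelize across photons (on Xeon Phi or GPU alike) can be reduced to a toy 1-D slab version; all coefficients and the isotropic phase function below are illustrative assumptions, not the authors' code.

```python
import math
import random

def mc_slab(mu_a, mu_s, thickness, n_photons=5000, seed=0):
    """Toy 1-D Monte Carlo photon walk through a turbid slab:
    sample exponential free paths, rescatter isotropically with
    probability mu_s/mu_t, and bin each photon as reflected,
    transmitted, or absorbed."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s          # total interaction coefficient
    albedo = mu_s / mu_t        # scattering probability per interaction
    counts = {"reflected": 0, "transmitted": 0, "absorbed": 0}
    for _ in range(n_photons):
        z, mu = 0.0, 1.0        # depth and direction cosine (launched inward)
        while True:
            z += mu * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:
                counts["reflected"] += 1
                break
            if z > thickness:
                counts["transmitted"] += 1
                break
            if rng.random() >= albedo:
                counts["absorbed"] += 1
                break
            mu = 2.0 * rng.random() - 1.0   # isotropic rescattering
    return counts

counts = mc_slab(mu_a=1.0, mu_s=9.0, thickness=0.5)
```

Because each photon history is independent, this loop maps directly onto accelerator threads, which is what makes the Xeon Phi and GPU ports discussed above effective.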
Hybrid Environmental Control System Integrated Modeling Trade Study Analysis for Commercial Aviation
NASA Astrophysics Data System (ADS)
Parrilla, Javier
Current industry trends indicate that aircraft electrification will be part of future platforms in order to achieve higher levels of efficiency in various vehicle-level sub-systems. However, electrification requires a substantial change in aircraft design that is not suitable for re-winged or re-engined applications, as some aircraft manufacturers are opting for today. Thermal limits arise as engine cores progressively get smaller and hotter to improve overall engine efficiency, while legacy systems still demand a substantial amount of pneumatic, hydraulic and electric power extraction. The environmental control system (ECS) provides pressurization, ventilation and air conditioning in commercial aircraft, making it the main heat sink for all aircraft loads with the exception of the engine. To mitigate the architecture's thermal limits in an efficient manner, the way the ECS interacts with the engine will have to be enhanced so as to reduce the overall energy consumed and achieve an energy-optimized solution. This study examines a tradeoff analysis of an electric ECS by use of a fully integrated Numerical Propulsion System Simulation (NPSS) model that is capable of studying the interaction between the ECS and the engine cycle deck. It was found that the optimal solution lies in a hybrid ECS that strikes the correct balance between a traditional pneumatic and a fully electric system. This intermediate architecture offers a substantial improvement in aircraft fuel consumption due to a reduced amount of waste heat and customer bleed in exchange for partial electrification of the air-conditioning pack, which is a viable option for re-winged applications.
Gao, Yanbin; Liu, Shifei; Atia, Mohamed M.; Noureldin, Aboelmagd
2015-01-01
This paper takes advantage of the complementary characteristics of Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to Inertial Navigation System (INS) alternatively in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and Iterative Closest Point (ICP) based scan matching method. The algorithm can work and transit between two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can remain sub-meter navigation accuracy during the whole trajectory. PMID:26389906
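The transition rule between the two matching modes described above can be sketched as a one-line dispatcher; the `min_features` threshold below is a hypothetical placeholder, not the paper's tuned value.

```python
def choose_matcher(n_matched_line_features, min_features=3):
    """Select the scan-matching mode for a pair of LiDAR scans: use the
    cheap feature-based matcher when enough line features correspond
    across the two scans (the well-conditioned case), otherwise fall
    back to point-based ICP."""
    return "feature-based" if n_matched_line_features >= min_features else "ICP"
```

For example, a structured indoor corridor with many matched wall lines would dispatch to the feature-based matcher, while a cluttered outdoor scan with few line correspondences would fall back to ICP, giving efficiency and robustness concurrently as the abstract describes.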
Primordial black hole seeding from hybrid inflation : the direct integration approach
Giguere, Alexis
2013-01-01
We examine the notion that supermassive black holes at the centre of galaxies, such as the Milky Way, could have been seeded in the early universe by the mechanisms of hybrid inflation. Using luminosity data, we estimate ...
ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes
Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.
1993-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.
NASA Astrophysics Data System (ADS)
Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.
2015-01-01
DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved with the operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Centre (BARC), Mumbai. Verification and validation of the shielding adequacy were carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions for both 2.5-MeV and 14.1-MeV neutrons and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 10^10 n/s.
Riboldi, M.; Chen, G. T. Y.; Baroni, G.; Paganetti, H.; Seco, J.
2015-01-01
We have designed a simulation framework for motion studies in radiation therapy by integrating the anthropomorphic NCAT phantom into a 4D Monte Carlo dose calculation engine based on DPM. Representing an artifact-free environment, the system can be used to identify class solutions as a function of geometric and dosimetric parameters. A pilot dynamic conformal study for three lesions (~2.0 cm) in the right lung was performed (70 Gy prescription dose). Tumor motion changed as a function of tumor location, according to the anthropomorphic deformable motion model. Conformal plans were simulated with 0 to 2 cm margins for the aperture, with an additional 0.5 cm for beam penumbra. The dosimetric effects of intensity modulated radiotherapy (IMRT) vs. conformal treatments were compared in a static case. Results show that the Monte Carlo simulation framework can model tumor tracking in deformable anatomy with high accuracy, providing absolute doses for IMRT and conformal radiation therapy. A target underdosage of up to 3.67 Gy (lower lung) was highlighted in the composite dose distribution mapped at exhale. Such effects depend on tumor location and treatment margin and are affected by lung deformation and ribcage motion. In summary, the complexity in the irradiation of moving targets has been reduced to a controlled simulation environment, where several treatment options can be accurately modeled and quantified. The implemented tools will be utilized for extensive motion studies in lung/liver irradiation. PMID:19044324
NASA Astrophysics Data System (ADS)
Suzuki, Takao; Yamamoto, Fujio
2015-10-01
Data-assimilation capabilities of hybrid-type simulations integrating time-resolved particle image velocimetry with unsteady computational fluid dynamics (CFD) are characterized, and a series of algorithms developed previously are evaluated in terms of four criteria: (i) compatibility with the governing equations; (ii) completeness of a set of flow quantities; (iii) temporal and spatial filtering functions; and (iv) spatial resolution. This study specifically introduces a hierarchy of three hybrid simulations combining time-resolved particle tracking velocimetry (PTV) and direct numerical simulation (DNS) from low to high fidelities: the proper orthogonal decomposition-Galerkin-projection approach with proportional feedback of PTV data, the DNS solver with similar feedback, and the DNS solver with the extended Kalman filter. By solving a planar-jet problem at Re ≈ 2000, we demonstrate that the resultant hybrid flow fields essentially (i) satisfy the governing equations spatially and approximately temporally, and (ii) can provide instantaneous pressure fields (iii) with noise levels substantially lower than those of the original PTV data and (iv) resolution comparable to CFD. The results show that increasing the feedback gain improves replicability, i.e. the agreement between the simulation and the data; however, it degrades temporal compatibility and filtering functions. On the other hand, higher fidelity enhances both replicability and spatial filtering, but increases computational cost.
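The proportional-feedback (nudging) idea behind the first two hybrid levels can be illustrated with a scalar relaxation sketch; the gain, time step, and data mask below are hypothetical, and a real hybrid solver would add this forcing to the Navier-Stokes right-hand side rather than to a bare relaxation.

```python
def nudge_step(u_sim, u_data, has_data, gain, dt):
    """One explicit step of proportional-feedback data assimilation:
    where measurements exist, drive the simulated field toward them;
    elsewhere leave the simulation untouched."""
    return [
        u + dt * gain * (d - u) if m else u
        for u, d, m in zip(u_sim, u_data, has_data)
    ]

u = [0.0] * 8                       # simulated field (toy 1-D grid)
data = [1.0] * 8                    # "measured" field
mask = [i < 4 for i in range(8)]    # PTV data available on only half the domain
for _ in range(100):
    u = nudge_step(u, data, mask, gain=5.0, dt=0.1)
```

After many steps the masked half of the grid relaxes onto the data while the unobserved half is left to the model alone, which is the trade-off the abstract describes: larger gain means faster agreement with data but stronger temporal filtering of the simulated dynamics.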
Chen, Sitao; Shi, Yaocheng; He, Sailing; Dai, Daoxin
2015-05-18
A compact silicon hybrid (de)multiplexer is designed and demonstrated by integrating a single bi-directional AWG with a polarization diversity circuit, which consists of an ultra-short polarization-beam splitter (PBS) based on a bent coupler and a polarization rotator (PR) based on a silicon-on-insulator nanowire with a cut corner. The present hybrid (de)multiplexer can operate for both TE- and TM-polarizations and thus is available for PDM-WDM systems. An 18-channel hybrid (de)multiplexer is realized with 9 wavelengths as an example. The wavelength-channel spacing is 400 GHz (i.e., Δλ_ch = 3.2 nm) and the footprint of the device is about 530 μm × 210 μm. The channel crosstalk is about -13 dB and the total excess loss is about 7 dB. The excess loss increases by about 1-2 dB due to the cascaded polarization diversity circuit in comparison with a single bi-directional AWG. PMID:26074538
NASA Astrophysics Data System (ADS)
Kim, Jaiseung
2011-04-01
We have made a Markov chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (fNL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained fNL and cosmological parameters so that the uncertainties of the cosmological parameters can properly propagate into the fNL estimation. Investigating the parameter likelihoods deduced from MCMC samples, we find slight deviations from Gaussian shape, which make a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of fNL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.
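Estimating an interval directly from the sampled likelihood, rather than from a Fisher-matrix Gaussian approximation, can be as simple as taking order statistics of the MCMC draws; the sketch below is a generic illustration of that idea, not the authors' pipeline.

```python
import random

def credible_interval(samples, level=0.68):
    """Central credible interval taken directly from MCMC draws via
    order statistics -- no Gaussian (Fisher-matrix) shape is assumed,
    so skewed likelihoods are handled correctly."""
    s = sorted(samples)
    tail = (1.0 - level) / 2.0
    lo = s[int(tail * (len(s) - 1))]
    hi = s[int((1.0 - tail) * (len(s) - 1))]
    return lo, hi

# Demonstration on synthetic draws from a known distribution.
rng = random.Random(42)
draws = [rng.gauss(0.0, 1.0) for _ in range(50000)]
lo, hi = credible_interval(draws)
# for a standard normal the 68% central interval is near (-1, +1)
```

For a genuinely non-Gaussian posterior the two tails come out asymmetric, which is exactly the information a Fisher-matrix estimate would discard.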
Integrated cell manipulation system -- CMOS/microfluidic hybrid
Lee, Hakho; Ham, Donhee
With the rapid advances of microelectronics in the past decades, considerable effort has been directed toward applications including cell sorting [7], DNA amplifications and separations [8, 9], and chemical synthesis [10]. We manipulate magnetic beads and magnetically-tagged biological cells with hybrid prototypes that generate local…
Integration of rapid DNA hybridization and capillary zone electrophoresis using bidirectional
Santiago, Juan G.
presented by Persat and Santiago [8], who applied it to the profiling of micro-RNA. Bercovici et al. [9] applied ITP-based hybridization to the extraction and detection of 16S ribosomal RNA (rRNA) of E. coli in human…
NASA Astrophysics Data System (ADS)
Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas
2015-08-01
Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to those methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches that are used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information. PMID:25896820
INTEGRATION OF GENETIC AND RADIATION HYBRID MAPS OF THE PIG: THE SECOND GENERATION IMPRH MAPS
Technology Transfer Automated Retrieval System (TEKTRAN)
More than 4500 markers, ESTs and genes have been mapped on IMpRH radiation hybrid panel and submitted to IMpRH Server before 30 March 2002, whereas 757 markers only were mapped on the first generation map (Hawken et al, 1999). To take advantage of the different resolutions observed on the genetic an...
NASA Astrophysics Data System (ADS)
Chen, Haitian; Cao, Yu; Zhang, Jialu; Zhou, Chongwu
2014-06-01
Carbon nanotubes and metal oxide semiconductors have emerged as important materials for p-type and n-type thin-film transistors, respectively; however, realizing sophisticated macroelectronics operating in complementary mode has been challenging due to the difficulty in making n-type carbon nanotube transistors and p-type metal oxide transistors. Here we report a hybrid integration of p-type carbon nanotube and n-type indium-gallium-zinc-oxide thin-film transistors to achieve large-scale (>1,000 transistors for 501-stage ring oscillators) complementary macroelectronic circuits on both rigid and flexible substrates. This approach of hybrid integration allows us to combine the strength of p-type carbon nanotube and n-type indium-gallium-zinc-oxide thin-film transistors, and offers high device yield and low device variation. Based on this approach, we report the successful demonstration of various logic gates (inverter, NAND and NOR gates), ring oscillators (from 51 stages to 501 stages) and dynamic logic circuits (dynamic inverter, NAND and NOR gates).
Siegel, J.; Dehmer, J.L.; Dill, D.
1981-02-01
We describe in detail a hybrid method for calculation of electron--polar-molecule scattering in which (1) low-l S-matrix elements are calculated in the body frame using a potential which incorporates a realistic representation of the molecular core; (2) intermediate-l elements are calculated in the body frame using an exact method with a point-dipole potential; and (3) high-l elements are calculated in the laboratory frame using the first Born approximation with a point-dipole potential. By taking into account the dominant interactions of each l range when choosing the coordinate frame, potential, and calculational method, this hybrid framework achieves an exceptionally high level of efficiency and economy of calculation without sacrifice of accuracy. Using this method, we have calculated integrated and momentum-transfer cross sections for e-LiF scattering from 1 to 20 eV (differential cross sections have been reported elsewhere). The integrated cross section is dominated (>99%) by the j = 0 → j' = 1 rotational transition, whereas the momentum-transfer cross section is composed of comparable contributions from transitions to final j' = 0 through 3, owing to its deemphasis of small-angle scattering. Eigenphase sums show no sign of resonant activity in this energy range.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Peng, Yong; Zhang, Chi; Wang, Bende
2015-11-01
A number of hydrological studies have proven the superior prediction performance of hybrid models coupled with data preprocessing techniques. However, many studies first decompose the entire data series into components and later divide each component into calibration and validation datasets to establish models, which sends some amount of future information into the decomposition and reconstruction processes. As a consequence, the resulting components used to forecast the value of a particular moment are computed using information from future values, which are not available at that particular moment in a forecasting exercise. Since most papers do not present their model framework in detail, it is difficult to identify whether they are performing a real forecast or not. Even though several other papers have explicitly stated which experiment they are performing, a comparison between results in the hindcast and forecast experiments is still missing. Therefore, it is necessary to investigate and compare the performance of these hybrid models in the two experiments in order to estimate whether they are suitable for real forecasting. By combining three preprocessing techniques, namely wavelet analysis (WA), empirical mode decomposition (EMD) and singular spectrum analysis (SSA), with two modeling methods (the ANN and ARMA models), six hybrid models are developed in this study: WA-ANN, WA-ARMA, EMD-ANN, EMD-ARMA, SSA-ANN and SSA-ARMA. Preprocessing techniques are used to decompose the data series into sub-series, and then these sub-series are modeled using ANN and ARMA models. These models are examined in hindcasting and forecasting of the monthly streamflow of two sites in the Yangtze River of China.
The results of this study indicate that the six hybrid models perform better in the hindcast experiment compared with the original ANN and ARMA models, while the hybrid models in the forecast experiment perform worse than the original models, and the performances of WA-based and EMD-based models vary considerably across different extension methods. It can be concluded that the hybrid models are not suitable for monthly streamflow forecasting in this study. New extension methods and modified preprocessing techniques can improve the prediction performance of these hybrid models in forecast experiments.
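The decomposition leakage described above can be made concrete with a minimal sketch (a centered moving average stands in for WA/EMD/SSA, and the series is synthetic; none of this is the paper's actual models): decomposing the whole series before splitting lets future samples enter past components, while a causal decomposition uses only data available at each time step.

```python
import numpy as np

def smooth_full(series, k=5):
    """Centered moving average over the WHOLE series (hindcast style).
    The centered window uses future samples, so early 'components'
    contain information unavailable in a real forecast."""
    pad = k // 2
    padded = np.pad(series, pad, mode="edge")
    kernel = np.ones(k) / k
    return np.convolve(padded, kernel, mode="valid")

def smooth_causal(series, k=5):
    """Trailing moving average using only past/present samples
    (forecast style): no future information enters the component."""
    out = np.empty_like(series, dtype=float)
    for t in range(len(series)):
        lo = max(0, t - k + 1)
        out[t] = series[lo:t + 1].mean()
    return out

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))   # toy monthly streamflow anomaly
full = smooth_full(x)
causal = smooth_causal(x)
# The two 'components' differ at interior points: the hindcast version
# has silently used future values.
print(np.max(np.abs(full - causal)) > 0.0)
```

A model calibrated on `full` and validated later therefore sees a hindcast, not a forecast.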
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Verleyen, Wim; Langdon, Simon P.; Faratian, Dana; Harrison, David J.; Smith, V. Anne
2015-01-01
Current clinical practice in cancer stratifies patients based on tumour histology to determine prognosis. Molecular profiling has been hailed as the path towards personalised care, but molecular data are still typically analysed independently of known clinical information. Conventional clinical and histopathological data, if used, are added only to improve a molecular prediction, placing a high burden upon molecular data to be informative in isolation. Here, we develop a novel Monte Carlo analysis to evaluate the usefulness of data assemblages. We applied our analysis to varying assemblages of clinical data and molecular data in an ovarian cancer dataset, evaluating their ability to discriminate one-year progression-free survival (PFS) and three-year overall survival (OS). We found that Cox proportional hazard regression models based on both data types together provided greater discriminative ability than either alone. In particular, we show that proteomics data assemblages that alone were uninformative (p = 0.245 for PFS, p = 0.526 for OS) became informative when combined with clinical information (p = 0.022 for PFS, p = 0.048 for OS). Thus, concurrent analysis of clinical and molecular data enables exploitation of prognosis-relevant information that may not be accessible from independent analysis of these data types. PMID:26503707
Integrated optics structures on sol-gel derived organic-inorganic hybrids for optical communications
NASA Astrophysics Data System (ADS)
André, P. S.; Vicente, C. M. S.; Fernandes, V.; Marques, C. A. F.; Pecoraro, E.; Nogueira, R. N.; Wada, N.; Carlos, L. D.; Marques, P. G.; Ferreira, R. A. S.
2011-05-01
Organic-inorganic hybrid materials are a technologically key class of advanced multifunctional materials that fulfil the demanding requirements of the beginning of this century: higher levels of sophistication, miniaturisation, recyclability, reliability and low energy consumption, with the potential to be used as low-cost components in optical networks operating at high bit rates. In this work, high-rejection (19 dB) optical filters were produced from first-order Bragg gratings inscribed, using a UV-laser direct-write method, in channel waveguides written in thin films of a sol-gel derived organic-inorganic hybrid based on methacrylic acid modified zirconium tetrapropoxide, Zr(OPrn)4 (so-called di-ureasils).
NASA Astrophysics Data System (ADS)
Topper, Robert Q.; Zhang, Qi; Liu, Yi-Ping; Truhlar, Donald G.
1993-03-01
Converged quantum mechanical vibrational-rotational partition functions and free energies are calculated using realistic potential energy surfaces for several chalcogen dihydrides (H2O, D2O, H2S, H2Se) over a wide range of temperatures (600-4000 K). We employ an adaptively optimized Monte Carlo integration scheme for computing vibrational-rotational partition functions by the Fourier path-integral method. The partition functions and free energies calculated in this way are compared to approximate calculations that assume the separation of vibrational motions from rotational motions. In the approximate calculations, rotations are treated as those of a classical rigid rotator, and vibrations are treated by perturbation theory methods or by the harmonic oscillator model. We find that the perturbation theory treatments yield molecular partition functions which agree closely overall (within ~7%) with the fully coupled accurate calculations, and these treatments reduce the errors by about a factor of 2 compared to the independent-mode harmonic oscillator model (with errors of ~16%). These calculations indicate that vibrational anharmonicity and mode-mode coupling effects are significant, but that they may be treated with useful accuracy by perturbation theory for these molecules. The quantal free energies for gaseous water agree well with previously available approximate values for this well studied molecule, and similarly accurate values are also presented for the less well studied D2O, H2S, and H2Se.
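The idea of checking an approximate treatment against a converged Monte Carlo reference can be illustrated with a toy (this is a classical 1-D analogue for demonstration, not the paper's Fourier path-integral scheme): a plain Monte Carlo estimate of the classical harmonic oscillator partition function, compared with its exact value.

```python
import numpy as np

# Plain Monte Carlo estimate of the CLASSICAL partition function of a
# 1-D harmonic oscillator,
#   Z = (1 / 2*pi*hbar) * integral exp(-beta*H(q, p)) dq dp,
# checked against the exact result 1 / (beta*hbar*omega).
hbar = omega = beta = 1.0
L = 8.0                                  # half-width of the sampling box
rng = np.random.default_rng(42)
q, p = rng.uniform(-L, L, size=(2, 200_000))
H = 0.5 * p**2 + 0.5 * omega**2 * q**2
Z_mc = (2 * L) ** 2 / (2 * np.pi * hbar) * np.mean(np.exp(-beta * H))
Z_exact = 1.0 / (beta * hbar * omega)
print(abs(Z_mc - Z_exact) / Z_exact)     # relative error, roughly 1%
```

The quantum calculation in the abstract plays the same role as `Z_exact` here: a converged benchmark against which separable approximations are scored.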
Fluorescent In Situ Hybridization to Detect Transgene Integration into Plant Genomes
NASA Astrophysics Data System (ADS)
Schwarzacher, Trude
Fluorescent chromosome analysis technologies have advanced our understanding of genome organization during the last 30 years and have enabled the investigation of DNA organization and structure as well as the evolution of chromosomes. Fluorescent chromosome staining allows even small chromosomes to be visualized, characterized by their composition and morphology, and counted. Aneuploidies and polyploidies can be established for species, breeding lines, and individuals, including changes occurring during hybridization or tissue culture and transformation protocols. Fluorescent in situ hybridization correlates molecular information of a DNA sequence with its physical location on chromosomes and genomes. It thus allows determination of the physical position of sequences and often is the only means to determine the abundance and distribution of DNA sequences that are difficult to map with any other molecular method or would require segregation analysis, in particular multicopy or repetitive DNA. Equally, it is often the best way to establish the incorporation of transgenes, their numbers, and physical organization along chromosomes. This chapter presents protocols for probe and chromosome preparation, fluorescent in situ hybridization, chromosome staining, and the analysis of results.
NASA Technical Reports Server (NTRS)
Richardson, Erin; Hays, M. J.; Blackwood, J. M.; Skinner, T.
2014-01-01
The Liquid Propellant Fragment Overpressure Acceleration Model (L-FOAM) is a tool developed by Bangham Engineering Incorporated (BEi) that produces a representative debris cloud from an exploding liquid-propellant launch vehicle. Here it is applied to the Core Stage (CS) of the National Aeronautics and Space Administration (NASA) Space Launch System (SLS) launch vehicle. A combination of Probability Density Functions (PDFs) based on empirical data from rocket accidents and applicable tests, as well as SLS-specific geometry, is combined in a MATLAB script to create unique fragment catalogues each time L-FOAM is run, tailored for a Monte Carlo approach to risk analysis. By accelerating the debris catalogue with the BEi blast model for liquid hydrogen / liquid oxygen explosions, the result is a fully integrated code that models the destruction of the CS at a given point in its trajectory and generates hundreds of individual fragment catalogues with initial imparted velocities. The BEi blast model provides the blast size (radius) and strength (overpressure) as probabilities based on empirical data and anchored with analytical work. The coupling of the L-FOAM catalogue with the BEi blast model is validated with a simulation of the Project PYRO S-IV destruct test. When running a Monte Carlo simulation, L-FOAM can accelerate all catalogues with the same blast (mean blast, 2σ blast, etc.), or vary the blast size and strength based on their respective probabilities. L-FOAM then propagates these fragments until impact with the earth. Results from L-FOAM include a description of each fragment (dimensions, weight, ballistic coefficient, type and initial location on the rocket), imparted velocity from the blast, and impact data depending on the user's desired application. L-FOAM applies to both near-field (fragment impact to an escaping crew capsule) and far-field (fragment ground impact footprint) safety considerations.
The user is thus able to use statistics from a Monte Carlo set of L-FOAM catalogues to quantify risk for a multitude of potential CS destruct scenarios. Examples include the effect of warning time on the survivability of an escaping crew capsule or the maximum fragment velocities generated by the ignition of leaking propellants in internal cavities.
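The Monte Carlo pattern described above can be sketched in a few lines. Everything in this snippet — the lognormal mass distribution, the normal overpressure distribution, the area and impulse scalings — is an invented stand-in, not L-FOAM's empirical PDFs; only the structure (fresh catalogue per run, statistics across runs) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(7)

def one_catalogue(n_frag=300):
    """Draw one hypothetical fragment catalogue and impart velocities."""
    mass = rng.lognormal(mean=1.0, sigma=0.8, size=n_frag)  # kg (assumed PDF)
    overpressure = max(rng.normal(500e3, 100e3), 50e3)      # Pa (assumed PDF)
    area = 0.01 * mass ** (2 / 3)                           # m^2, crude scaling
    impulse = overpressure * 1e-3 * area                    # N*s, assumed
    return impulse / mass                                   # imparted speed, m/s

# One catalogue per Monte Carlo run, then statistics across runs.
runs = [one_catalogue() for _ in range(1000)]
v_max = np.array([v.max() for v in runs])
print(np.percentile(v_max, 95))   # a 95th-percentile maximum fragment speed
```

Risk quantities such as a crew-capsule miss margin would then be computed from the ensemble of catalogues rather than from any single run.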
Arora, Bhavna; Mohanty, Binayak P; McGuire, Jennifer T
2015-04-15
Predicting and controlling the concentrations of redox-sensitive elements are primary concerns for environmental remediation of contaminated sites. These predictions are complicated by dynamic flow processes as hydrologic variability is a governing control on conservative and reactive chemical concentrations. Subsurface heterogeneity in the form of layers and lenses further complicates the flow dynamics of the system impacting chemical concentrations including redox-sensitive elements. In response to these complexities, this study investigates the role of heterogeneity and hydrologic processes in an effective parameter upscaling scheme from the column to the landfill scale. We used a Markov chain Monte Carlo (MCMC) algorithm to derive upscaling coefficients for hydrological and geochemical parameters, which were tested for variations across heterogeneous systems (layers and lenses) and interaction of flow processes based on the output uncertainty of dominant biogeochemical concentrations at the Norman Landfill site, a closed municipal landfill with prevalent organic and trace metal contamination. The results from MCMC analysis indicated that geochemical upscaling coefficients based on effective concentration ratios incorporating local heterogeneity across layered and lensed systems produced better estimates of redox-sensitive biogeochemistry at the field scale. MCMC analysis also suggested that inclusion of hydrological parameters in the upscaling scheme reduced the output uncertainty of effective mean geochemical concentrations by orders of magnitude at the Norman Landfill site. This was further confirmed by posterior density plots of the scaling coefficients that revealed unimodal characteristics when only geochemical processes were involved, but produced multimodal distributions when hydrological parameters were included. 
The multimodality again suggests the effect of heterogeneity and lithologic variability on the distribution of redox-sensitive elements at the Norman Landfill site. PMID:25644839
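The MCMC machinery behind such an upscaling analysis reduces, in its simplest form, to a Metropolis random walk over a scaling coefficient. The sketch below infers a single hypothetical upscaling coefficient k from synthetic data assumed to follow c_field = k * c_column; the model and numbers are illustrative, not the Norman Landfill analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
k_true, sigma = 2.5, 0.2
c_col = rng.uniform(1.0, 4.0, size=50)                 # column-scale data
c_field = k_true * c_col + rng.normal(0.0, sigma, 50)  # noisy field data

def log_post(k):
    # Flat prior on k > 0, Gaussian likelihood.
    if k <= 0:
        return -np.inf
    return -0.5 * np.sum((c_field - k * c_col) ** 2) / sigma**2

chain, k = [], 1.0
lp = log_post(k)
for _ in range(20_000):
    k_new = k + rng.normal(0.0, 0.05)      # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:  # Metropolis accept/reject
        k, lp = k_new, lp_new
    chain.append(k)
post = np.array(chain[5_000:])             # discard burn-in
print(round(post.mean(), 2))               # close to k_true = 2.5
```

A posterior density plot of `post` is the single-parameter analogue of the unimodal/multimodal diagnostics discussed in the abstract.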
NASA Astrophysics Data System (ADS)
Adams, A. D.; Johnson, Greg A.; Jolivet, Noel D.; Metschuleit, Jeff L.
1992-07-01
When testing infrared readouts, detector-readout hybrid assemblies, or focal plane arrays (FPAs), performance optimization is usually limited to adjustment of biases or clock rails, or subtle changes in readout timing. These generally result in global changes to the characteristics of the entire array rather than affecting individual pixels and channels. Using a scanning system that incorporates per-channel gain normalization and a redundant time delay and integrate (TDI) architecture in the readout, pixels can be enhanced or deselected using an on-chip static RAM according to user-defined criteria, resulting in improved uniformity of performance. A series of tests can be run automatically that evaluate each pixel's behavior at the readout or the hybrid level. When compared to or compiled against array-wide averages or system specifications, a map of dead or degraded pixels is created, and the timing necessary to either normalize each channel from a gain standpoint or mask out individual pixels is applied to the device under test. This technique has been successfully applied to 480 × 6 (120 × 4 × 6 in TDI) scanning architectures in both InSb and HgCdTe systems as well as multiple-chip and dual-band configurations. This paper describes a methodology and details how readout devices were screened and selected for hybridization and FPA build. The chip architecture and control timing are discussed to show how normalization and deselection were accomplished with a minimum of clock lines involved. A software utility is presented that provided the user with an easy graphical interface for manipulating the functions of the device. Algorithms for optimizing performance are then discussed and evaluated. Trade-offs made in optimizing one parameter against another are analyzed. Finally, results are presented demonstrating improved performance, customized by pixel to suit application specifications.
Monte Carlo sampling from the quantum state space. II
Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
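A bare-bones version of the Hamiltonian (hybrid) Monte Carlo structure described above — leapfrog proposals plus a Metropolis accept/reject on the total energy — is shown below. The target is simplified to an unconstrained 2-D standard Gaussian for illustration, not the quantum state space of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def U(x):      return 0.5 * np.dot(x, x)   # negative log of the target
def grad_U(x): return x

def hmc_step(x, eps=0.15, n_leap=20):
    p = rng.normal(size=x.shape)           # fresh Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(x_new)     # half kick
    for _ in range(n_leap - 1):
        x_new += eps * p_new               # drift
        p_new -= eps * grad_U(x_new)       # full kick
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)     # final half kick
    dH = (U(x_new) + 0.5 * p_new @ p_new) - (U(x) + 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < -dH else x  # Metropolis test

x, samples = np.zeros(2), []
for _ in range(5_000):
    x = hmc_step(x)
    samples.append(x)
samples = np.array(samples)
print(np.round(samples.std(axis=0), 1))    # near [1, 1] for a unit Gaussian
```

Because the leapfrog error in dH is small, most proposals are accepted despite the long trajectories — the "big steps, high acceptance rate" property the abstract exploits.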
Lujan, Paul Joseph; /UC, Berkeley /LBL, Berkeley
2009-12-01
This thesis presents a measurement of the top quark mass obtained from p p̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. The measurement uses a matrix element integration method to calculate a t t̄ likelihood, employing a quasi-Monte Carlo integration, which enables us to take into account effects due to finite detector angular resolution and quark mass effects. We calculate a t t̄ likelihood as a 2-D function of the top pole mass m_t and Δ_JES, where Δ_JES parameterizes the uncertainty in our knowledge of the jet energy scale; it is a shift applied to all jet energies in units of the jet-dependent systematic error. By introducing Δ_JES into the likelihood, we can use the information contained in W boson decays to constrain Δ_JES and reduce the error due to this uncertainty. We use a neural network discriminant to identify events likely to be background, and apply a cut on the peak value of individual event likelihoods to reduce the effect of badly reconstructed events. This measurement uses a total of 4.3 fb⁻¹ of integrated luminosity, requiring events with a lepton, large E_T, and exactly four high-energy jets in the pseudorapidity range |η| < 2.0, of which at least one must be tagged as coming from a b quark. In total, we observe 738 events before and 630 events after applying the likelihood cut, and measure m_t = 172.6 ± 0.9 (stat.) ± 0.7 (JES) ± 1.1 (syst.) GeV/c², or m_t = 172.6 ± 1.6 (tot.) GeV/c².
Method for producing a hybridization of detector array and integrated circuit for readout
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (inventor); Grunthaner, Frank J. (inventor)
1993-01-01
A process is explained for fabricating a detector array in a layer of semiconductor material on one substrate and an integrated readout circuit in a layer of semiconductor material on a separate substrate, in order to select the semiconductor material for optimum performance of each structure, such as GaAs for the detector array and Si for the integrated readout circuit. The detector array layer is lifted off its substrate, laminated onto the metallized surface of the integrated circuit, etched with reticulating channels down to the surface of the integrated circuit, and provided with interconnections between the detector array pixels and the integrated readout circuit through the channels. The adhesive material for the lamination is selected to be chemically stable, to provide electrical and thermal insulation, and to provide stress release between the two structures, which are fabricated in semiconductor materials that may have different coefficients of thermal expansion.
Li, Liang; Chen, Zhiqiang; Cong, Wenxiang; Wang, Ge
2015-03-01
Spectral CT with photon counting detectors can significantly improve CT performance by reducing image noise and dose, increasing contrast resolution and material specificity, as well as enabling functional and molecular imaging with existing and emerging probes. However, it is difficult for the current photon counting detector architecture to balance the number of energy bins against the statistical noise in each energy bin. Moreover, hardware support for multiple energy bins demands a complex, expensive circuit. In this paper, we propose a new scheme known as hybrid detectors, which combine the dynamic-threshold-based counting and integrating modes. In this scheme, an energy threshold can be dynamically changed during a spectral CT scan, which can be considered as compressive sensing along the spectral dimension. By doing so, the number of energy bins can be retrospectively specified, even in a spatially varying fashion. To establish the feasibility and merits of such hybrid detectors, we develop a tensor-based PRISM algorithm to reconstruct a spectral CT image from dynamic dual-energy data, and perform experiments with simulated and real data, producing very promising results. PMID:25252279
NASA Astrophysics Data System (ADS)
Chen, Po-Chiang; Ishikawa, Fumiaki N.; Chang, Hsiao-Kang; Ryu, Koungmin; Zhou, Chongwu
2009-03-01
A novel hybrid chemical sensor array composed of individual In2O3 nanowires, SnO2 nanowires, ZnO nanowires, and single-walled carbon nanotubes with integrated micromachined hotplates for sensitive gas discrimination was demonstrated. Key features of our approach include the integration of nanowire and carbon nanotube sensors, precise control of the sensor temperature using the micromachined hotplates, and the use of principal component analysis for pattern recognition. This sensor array was exposed to important industrial gases such as hydrogen, ethanol and nitrogen dioxide at different concentrations and sensing temperatures, and an excellent selectivity was obtained to build up an interesting 'smell-print' library of these gases. Principal component analysis of the sensing results showed great discrimination of those three tested chemicals, and in-depth analysis revealed clear improvement of selectivity by the integration of carbon nanotube sensors. This nanoelectronic nose approach has great potential for detecting and discriminating between a wide variety of gases, including explosive ones and nerve agents.
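The principal component analysis step in such a "smell-print" workflow can be sketched with NumPy alone: project multi-sensor responses onto the first two principal components and check that the gas classes separate. The four-sensor response patterns below are invented for illustration, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = {                      # mean response of 4 sensors to each gas
    "H2":   np.array([1.0, 0.2, 0.4, 0.1]),
    "EtOH": np.array([0.3, 1.0, 0.6, 0.2]),
    "NO2":  np.array([0.1, 0.3, 0.2, 1.0]),
}
X, labels = [], []
for gas, mu in patterns.items():
    X.append(mu + 0.05 * rng.normal(size=(30, 4)))   # 30 noisy exposures
    labels += [gas] * 30
X = np.vstack(X)

Xc = X - X.mean(axis=0)                              # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)    # PCA via SVD
scores = Xc @ Vt[:2].T                               # first two PCs

# Class centroids in PC space should be well separated relative to noise.
cents = {g: scores[[l == g for l in labels]].mean(axis=0) for g in patterns}
d = np.linalg.norm(cents["H2"] - cents["NO2"])
print(d > 0.5)
```

Plotting `scores` colored by `labels` gives the 2-D discrimination map analogous to the one described in the abstract.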
Adapted nested force-gradient integrators for the Schwinger model
Dmitry Shcherbakov; Matthias Ehrhardt; Michael Günther; Jacob Finkenrath; Francesco Knechtli; Michael Peardon
2015-12-11
We study a novel class of numerical integrators, the adapted nested force-gradient schemes, used within the molecular dynamics step of the Hybrid Monte Carlo (HMC) algorithm. We test these methods in the Schwinger model on the lattice, a well known benchmark problem. We derive the analytical basis of nested force-gradient type methods and demonstrate the advantage of the proposed approach, namely reduced computational costs compared with other numerical integration schemes in HMC.
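The structural idea behind nested integrators can be illustrated with a toy multirate leapfrog: the cheap fast force is integrated with several small substeps inside each outer step, while the expensive slow force is evaluated only once per outer step. This sketch omits the force-gradient correction terms of the schemes studied above; the stiff/weak spring system is an illustrative stand-in for lattice forces.

```python
def nested_leapfrog(q, p, eps, n_steps, f_slow, f_fast, m=4):
    """Outer leapfrog on the slow force with m inner leapfrog
    substeps on the fast force per outer step."""
    for _ in range(n_steps):
        p += 0.5 * eps * f_slow(q)        # outer half kick (slow force)
        h = eps / m
        for _ in range(m):                # inner leapfrog on fast force
            p += 0.5 * h * f_fast(q)
            q += h * p
            p += 0.5 * h * f_fast(q)
        p += 0.5 * eps * f_slow(q)        # outer half kick
    return q, p

# Fast stiff spring + slow weak spring; the substeps resolve the fast
# oscillation, so total energy stays bounded over many outer steps.
k_fast, k_slow = 100.0, 1.0
q, p = 1.0, 0.0
E0 = 0.5 * p**2 + 0.5 * (k_fast + k_slow) * q**2
q, p = nested_leapfrog(q, p, 0.1, 200,
                       lambda q: -k_slow * q, lambda q: -k_fast * q, m=10)
E1 = 0.5 * p**2 + 0.5 * (k_fast + k_slow) * q**2
print(abs(E1 - E0) / E0 < 0.1)
```

In an HMC setting the bounded energy error translates directly into a high acceptance rate at reduced cost, since the slow (expensive) force is computed m times less often.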
Guo, Jianning; Wang, Lingyun; Zhu, Jia; Zhang, Jianguo; Sheng, Deyang; Zhang, Xihui
2013-01-01
This article presents a highly integrated hybrid process for the advanced treatment of drinking water, addressing micro-polluted raw water. A flat-sheet ceramic membrane with a pore size of 50-60 nm for ultrafiltration (UF) is used to integrate coagulation and ozonation. At the same time, biological activated carbon (BAC) filtration is used to remove ammonia and organic pollutants in the raw water. A pilot study at a scale of 120 m³/d has been conducted in Southern China. The main parameters analyzed include turbidity, particle counts, ammonia, total organic carbon (TOC), UV254, biological dissolved organic carbon (BDOC), dissolved oxygen (DO) and trans-membrane pressure (TMP). The experiments demonstrated that the ceramic UF membrane was able to remove most of the turbidity and suspended particulate matter. The final effluent turbidity reached 0.14 NTU on average. BAC was effective in removing ammonia and organic matter. Dissolved oxygen is necessary for the biodegradation of ammonia at high concentration. The removal efficiencies reached 90% for ammonia at an initial concentration of 3.6 mg/L and 76% for TOC at an initial concentration of 3.8 mg/L. Ozonation can alter the molecular structure of organics in terms of UV254, reduce membrane fouling, and extend the operation cycle. It is believed the hybrid treatment process developed in this article can achieve high performance with less land occupation and lower cost compared with conventional processes. It is especially suitable for developing countries seeking to obtain high-quality drinking water in a cost-effective way. PMID:23705617
Chu, Chris C.-N.
Hybrid Dynamic/Quadratic Programming Algorithm for Interconnect Tree Optimization (with Yu-Yen Mo). IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 20, no. 5, May 2001, p. 680. ... less memory than the pure DP approach. Index Terms: buffer insertion, interconnect, optimization.
Electronic integration of fuel cell and battery system in novel hybrid vehicle
NASA Astrophysics Data System (ADS)
Fisher, Peter; Jostins, John; Hilmansen, Stuart; Kendall, Kevin
2012-12-01
The objective of this work was to integrate a lithium ion battery pack, together with its management system, into a hydrogen fuel cell drive train contained in a lightweight city car. Electronic units were designed to link the drive train components using conventional circuitry. These were built, tested and shown to perform according to the design. These circuits allowed start-up of battery management system, motor controller, fuel cell warm-up and torque monitoring. After assembling the fuel cell and battery in the vehicle, full system tests were performed. Analysis of results from vehicle demonstrations showed operation was satisfactory. The conclusion was that the electronic integration was successful, but the design needed optimisation and fine tuning. Eight vehicles were then fitted with the electronically integrated fuel cell-battery power pack. Trials were then started to test the integration more fully, with a duration of 12 months from 2011 to 2012 in the CABLED project.
Somatic Integration From an Adenoviral Hybrid Vector into a Hot Spot in Mouse Liver Results in ...
Kay, Mark A.
...-long transgene expression levels at a therapeutic range at a non-toxic dose. Integrating viral and non-... but the humoral response against incoming capsid proteins and cell-mediated immunity remain a major challenge.
Jungmann-Smith, J H; Bergamaschi, A; Brückner, M; Cartier, S; Dinapoli, R; Greiffenberg, D; Jaggi, A; Maliakal, D; Mayilyan, D; Medjoubi, K; Mezza, D; Mozzanica, A; Ramilli, M; Ruder, Ch; Schädler, L; Schmitt, B; Shi, X; Tinti, G
2015-12-01
JUNGFRAU (adJUstiNg Gain detector FoR the Aramis User station) is a two-dimensional hybrid pixel detector for photon science applications in free electron lasers, particularly SwissFEL, and synchrotron light sources. JUNGFRAU is an automatic gain switching, charge-integrating detector which covers a dynamic range of more than 10^4 photons of an energy of 12 keV with a good linearity, uniformity of response, and spatial resolving power. The JUNGFRAU 1.0 application-specific integrated circuit (ASIC) features a 256 × 256 pixel matrix of 75 × 75 μm² pixels and is bump-bonded to a 320 μm thick Si sensor. Modules of 2 × 4 chips cover an area of about 4 × 8 cm². Readout rates in excess of 2 kHz enable linear count rate capabilities of 20 MHz (at 12 keV) and 50 MHz (at 5 keV). The tolerance of JUNGFRAU to radiation is a key issue to guarantee several years of operation at free electron lasers and synchrotrons. The radiation hardness of JUNGFRAU 1.0 is tested with synchrotron radiation up to 10 MGy of delivered dose. The effect of radiation-induced changes on the noise, baseline, gain, and gain switching is evaluated post-irradiation for both the ASIC and the hybridized assembly. The bare JUNGFRAU 1.0 chip can withstand doses as high as 10 MGy with minor changes to its noise and a reduction in the preamplifier gain. The hybridized assembly, in particular the sensor, is affected by the photon irradiation which mainly shows as an increase in the leakage current. Self-healing of the system is investigated during a period of 11 weeks after the delivery of the radiation dose. Annealing radiation-induced changes by bake-out at 100 °C is investigated. It is concluded that the JUNGFRAU 1.0 pixel is sufficiently radiation-hard for its envisioned applications at SwissFEL and synchrotron beam lines. PMID:26724009
NASA Astrophysics Data System (ADS)
Honda, Norihiro; Nanjo, Takuya; Ishii, Katsunori; Awazu, Kunio
2012-03-01
In laser medicine, accurate knowledge of the optical properties (absorption coefficient μa, scattering coefficient μs, anisotropy factor g) of laser-irradiated tissues is important for the prediction of light propagation in tissue, since the efficacy of laser treatment depends on the photon propagation within the irradiated tissue. The optical properties of tissues at near-ultraviolet, visible and near-infrared wavelengths will thus become increasingly important as more biomedical applications of lasers are developed. For improvement of laser-induced thermotherapy, the change in optical properties during laser treatment should be considered over a wide wavelength range. To estimate the optical properties of biological tissues, an optical properties measurement system with a double integrating sphere setup and an inverse Monte Carlo technique was developed. The optical properties of chicken muscle tissue were measured in the native state and after laser coagulation using this system in the wavelength range from 350 to 2100 nm. A CO2 laser was used for laser coagulation. After laser coagulation, the reduced scattering coefficient of the tissue increased, and the optical penetration depth decreased. For improvement of the treatment depth during laser coagulation, a quantitative procedure that uses the coagulated-tissue optical properties to account for the decrease in light penetration might be important in the clinic.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
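For context, the classical Gibbs ensemble method of Panagiotopoulos accepts a trial transfer of one particle between boxes with probability min{1, [N₁V₂/((N₂+1)V₁)]·e^(−βΔU)}. A sketch of that classical acceptance rule only (the quantum path-integral machinery of this paper is not reproduced):

```python
import math

def gemc_transfer_acceptance(n_from, v_from, n_to, v_to, dU, beta):
    """Metropolis acceptance probability for moving one particle from the
    'from' box to the 'to' box in the classical Gibbs ensemble."""
    if n_from == 0:
        return 0.0                      # no particle available to move
    ratio = (n_from * v_to) / ((n_to + 1) * v_from)
    return min(1.0, ratio * math.exp(-beta * dU))

# An energetically neutral move out of a crowded box into an empty one
# of equal volume is always accepted.
p = gemc_transfer_acceptance(n_from=100, v_from=1.0, n_to=0, v_to=1.0,
                             dU=0.0, beta=1.0)
```

Combined with volume-exchange and displacement moves, this rule equalizes chemical potential between the two boxes, which is what locates the coexistence line.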
A hybrid approach for integrated healthcare cooperative purchasing and supply chain configuration.
Rego, Nazaré; Claro, João; Pinho de Sousa, Jorge
2014-12-01
This paper presents an innovative and flexible approach for recommending the number, size and composition of purchasing groups, for a set of hospitals willing to cooperate, while minimising their shared supply chain costs. This approach makes the financial impact of the various cooperation alternatives transparent to the group and the individual participants, opening the way to a negotiation process concerning the allocation of the cooperation costs and gains. The approach was developed around a hybrid Variable Neighbourhood Search (VNS)/Tabu Search metaheuristic, resulting in a flexible tool that can be applied to purchasing groups with different characteristics, namely different operative and market circumstances, and to supply chains with different topologies and atypical cost characteristics. Preliminary computational results show the potential of the approach in solving a broad range of problems. PMID:24370921
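The metaheuristic's outer loop can be sketched generically. Below is a minimal VNS skeleton applied to a toy integer cost function; the neighbourhood structures, the tabu list, and the paper's purchasing-group cost model are deliberately not reproduced here:

```python
import random

def vns_minimize(cost, x0, neighborhoods, iters=200, seed=0):
    """Basic Variable Neighbourhood Search skeleton: shake in neighbourhood
    k, run a simple local descent, and restart at the first neighbourhood
    whenever the incumbent improves."""
    rng = random.Random(seed)
    x, best = x0, cost(x0)
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](x, rng)          # shake
            improved = True
            while improved:                          # crude local descent
                nxt = neighborhoods[0](cand, rng)
                if cost(nxt) < cost(cand):
                    cand = nxt
                else:
                    improved = False
            if cost(cand) < best:                    # move, restart at k = 0
                x, best = cand, cost(cand)
                k = 0
            else:
                k += 1
    return x, best

# Toy demo: minimise (x - 3)^2 over the integers with +/-1 and +/-5 moves.
cost = lambda x: (x - 3) ** 2
step1 = lambda x, r: x + r.choice([-1, 1])
step5 = lambda x, r: x + r.choice([-5, 5])
sol, val = vns_minimize(cost, 50, [step1, step5])
```

A hybrid VNS/Tabu variant, as in the paper, would replace the crude descent with a tabu-list-guided local search to escape cycling.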
Clark, M. A.; Joo, Balint; Kennedy, A. D.; Silva, P. J.
2011-10-01
We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but conserve exactly a nearby shadow Hamiltonian H-tilde. This property allows for a new tuning method of the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to significantly reduce the computational cost of future large-scale gauge field ensemble generation.
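The tuning idea rests on the fact that symplectic integrators such as leapfrog exactly conserve a shadow Hamiltonian H̃ = H + O(ε²), so energy errors stay bounded along the trajectory and the Metropolis test only corrects the small H − H̃ difference. A minimal HMC sketch with a leapfrog MD step on a 1-D Gaussian target (illustrative only; not the paper's force-gradient integrators):

```python
import math, random

def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog/velocity-Verlet MD step used in HMC."""
    p -= 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)
    return q, p

def hmc_step(q, u, grad_u, eps, steps, rng):
    p = rng.gauss(0.0, 1.0)                    # refresh momentum
    h0 = u(q) + 0.5 * p * p
    q2, p2 = leapfrog(q, p, grad_u, eps, steps)
    h1 = u(q2) + 0.5 * p2 * p2
    # Metropolis test on the true Hamiltonian corrects the O(eps^2) bias
    return q2 if rng.random() < math.exp(min(0.0, h0 - h1)) else q

rng = random.Random(1)
u = lambda q: 0.5 * q * q                      # standard normal target
gu = lambda q: q
q, xs = 0.0, []
for _ in range(5000):
    q = hmc_step(q, u, gu, eps=0.2, steps=10, rng=rng)
    xs.append(q)
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs)
```

Tuning by the shadow Hamiltonian means choosing integrator coefficients to minimise the variance of H − H̃ rather than the raw energy error, which is what makes force-gradient integrators attractive.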
Breen, Matthew; Jouquand, Sophie; Renier, Corinne; Mellersh, Cathryn S.; Hitte, Christophe; Holmes, Nigel G.; Chéron, Angélique; Suter, Nicola; Vignaux, Françoise; Bristow, Anna E.; Priat, Catherine; McCann, E.; André, Catherine; Boundy, Sam; Gitsham, Paul; Thomas, Rachael; Bridge, Wendy L.; Spriggs, Helen F.; Ryder, Ed J.; Curson, Alistair; Sampson, Jeff; Ostrander, Elaine A.; Binns, Matthew M.; Galibert, Francis
2001-01-01
We present here the first fully integrated, comprehensive map of the canine genome, incorporating detailed cytogenetic, radiation hybrid (RH), and meiotic information. We have mapped a collection of 266 chromosome-specific cosmid clones, each containing a microsatellite marker, to all 38 canine autosomes by fluorescence in situ hybridization (FISH). A 1500-marker RH map, comprising 1078 microsatellites, 320 dog gene markers, and 102 chromosome-specific markers, has been constructed using the RHDF5000-2 whole-genome radiation hybrid panel. Meiotic linkage analysis was performed, with at least one microsatellite marker from each dog autosome on a panel of reference families, allowing one meiotic linkage group to be anchored to all 38 dog autosomes. We present a karyotype in which each chromosome is identified by one meiotic linkage group and one or more RH groups. This updated integrated map, containing a total of 1800 markers, covers >90% of the dog genome. Positional selection of anchor clones enabled us, for the first time, to orientate nearly all of the integrated groups on each chromosome and to evaluate the extent of individual chromosome coverage in the integrated genome map. Finally, the inclusion of 320 dog genes into this integrated map enhances existing comparative mapping data between human and dog, and the 1000 mapped microsatellite markers constitute an invaluable tool with which to perform genome scanning studies on pedigrees of interest. PMID:11591656
Demonstration of Orbital Angular Momentum State Conversion using Two Hybrid 3D Photonic Integrated Circuits
Yoo, S. J. Ben
State conversion using two 3D photonic integrated circuits is demonstrated for free-space communication of 20-Gb/s QPSK. Recent optical networks have started to employ space-based multiplexing; existing approaches utilize holograms [2], spatial light modulators (SLMs) [3], dove prisms [4], and other bulk optics.
HORIZONTAL HYBRID SOLAR LIGHT PIPE: AN INTEGRATED SYSTEM OF DAYLIGHT AND ELECTRIC LIGHT
This project will test the feasibility of an advanced energy efficient perimeter lighting system that integrates daylighting, electric lighting, and lighting controls to reduce electricity consumption. The system is designed to provide adequate illuminance levels in deep-floor...
Tan, Wee Chong
2012-07-16
Front matter of the dissertation lists figures including "The photoluminescence of Er-doped waveguides" (Fig. 43) and "The schematic drawing of an integrated Er:As2S3 MZI waveguide" (Fig. 44). The dissertation concerns transmitting pulses of light instead of electrical signals, and describes current efforts taken by two of the technology leaders in the semiconductor industry.
ERIC Educational Resources Information Center
Kamruzzaman, M.
2014-01-01
This study reports an action research undertaken at Queensland University of Technology. It evaluates the effectiveness of the integration of geographic information systems (GIS) within the substantive domains of an existing land use planning course in 2011. Using student performance, learning experience survey, and questionnaire survey data, it…
Integrated thermal and energy management of plug-in hybrid electric vehicles
NASA Astrophysics Data System (ADS)
Shams-Zahraei, Mojtaba; Kouzani, Abbas Z.; Kutter, Steffen; Bäker, Bernard
2012-10-01
In plug-in hybrid electric vehicles (PHEVs), the engine temperature declines due to reduced engine load and extended engine-off periods. It is proven that engine efficiency and emissions depend on the engine temperature. Temperature also influences the vehicle air-conditioner and cabin heater loads. In particular, while the engine is cold, the power demand of the cabin heater needs to be provided by the batteries instead of the waste heat of the engine coolant. Existing energy management strategies (EMS) for PHEVs focus on improving fuel efficiency based on hot-engine characteristics, neglecting the effect of temperature on engine performance and on the vehicle power demand. This paper presents a new EMS incorporating an engine thermal management method which derives the globally optimal battery charge depletion trajectories. A dynamic programming-based algorithm is developed to enforce the charge depletion boundaries, while optimizing a fuel consumption cost function by controlling the engine power. The optimal control problem formulates the cost function based on two state variables: battery charge and engine internal temperature. Simulation results demonstrate that temperature and the cabin heater/air-conditioner power demand can significantly influence the optimal solution for the EMS, and accordingly the fuel efficiency and emissions of PHEVs.
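The backward-induction structure described here, dynamic programming over two state variables (battery charge and engine temperature), can be sketched at toy scale. All grids, dynamics, and costs below are illustrative stand-ins, not the paper's vehicle model:

```python
import itertools

SOC = range(5)           # discretised battery charge (0..4 units)
TEMP = range(3)          # discretised engine temperature (0 = cold .. 2 = hot)
POWERS = [0, 1, 2]       # engine power choices per step
HORIZON = 4              # number of time steps
INF = float("inf")

def fuel_cost(p_eng, temp):
    # toy model: a cold engine burns twice the fuel for the same power
    return p_eng * (2.0 if temp == 0 else 1.0)

def step(soc, temp, p_eng):
    # each step the vehicle demands 1 power unit, supplied by engine
    # and/or battery; running the engine warms it up
    soc2 = soc + p_eng - 1
    temp2 = min(temp + (1 if p_eng > 0 else 0), max(TEMP))
    return soc2, temp2

# backward induction over the two-state grid
V = {s: 0.0 for s in itertools.product(SOC, TEMP)}
policy = []
for _ in range(HORIZON):
    V_new, pol = {}, {}
    for soc, temp in itertools.product(SOC, TEMP):
        options = []
        for p in POWERS:
            soc2, temp2 = step(soc, temp, p)
            if 0 <= soc2 <= max(SOC):          # charge-depletion bounds
                options.append((fuel_cost(p, temp) + V[(soc2, temp2)], p))
        V_new[(soc, temp)], pol[(soc, temp)] = min(options) if options else (INF, None)
    V = V_new
    policy.insert(0, pol)
```

Even this toy reproduces the qualitative point of the abstract: starting cold with an empty battery is strictly more expensive than starting warm or charged, so the optimal engine schedule depends on both states.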
Integration of Switchable DNA-Based Hydrogels with Surfaces by the Hybridization Chain Reaction.
Kahn, Jason S; Trifonov, Alexander; Cecconello, Alessandro; Guo, Weiwei; Fan, Chunhai; Willner, Itamar
2015-11-11
A novel method to assemble acrylamide/acrydite DNA copolymer hydrogels on surfaces, specifically gold-coated surfaces, is introduced. The method involves the synthesis of two different copolymer chains consisting of hairpin A, HA, modified acrylamide copolymer and hairpin B, HB, acrylamide copolymer. In the presence of a nucleic acid promoter monolayer associated with the surface, the hybridization chain reaction between the two hairpin-modified polymer chains is initiated, giving rise to the cross-opening of hairpins HA and HB and the formation of a cross-linked hydrogel on the surface. By the cofunctionalization of the HA- and HB-modified polymer chains with G-rich DNA tethers that include the G-quadruplex subunits, hydrogels of switchable stiffness are generated. In the presence of K(+)-ions, the hydrogel associated with the surface is cooperatively cross-linked by duplex units of HA and HB, and K(+)-ion-stabilized G-quadruplex units, giving rise to a stiff hydrogel. The 18-crown-6-ether-stimulated elimination of the K(+)-ions dissociates the bridging G-quadruplex units, resulting in a hydrogel of reduced stiffness. The duplex/G-quadruplex cooperatively stabilized hydrogel associated with the surface reveals switchable electrocatalytic properties. The incorporation of hemin into the G-quadruplex units electrocatalyzes the reduction of H2O2. The 18-crown-6-ether stimulated dissociation of the hemin/G-quadruplex bridging units leads to a catalytically inactive hydrogel. PMID:26488684
Integrating climate change criteria in reforestation projects using a hybrid decision-support system
NASA Astrophysics Data System (ADS)
Curiel-Esparza, Jorge; Gonzalez-Utrillas, Nuria; Canto-Perello, Julian; Martin-Utrillas, Manuel
2015-09-01
The selection of appropriate species in a reforestation project has always been a complex decision-making problem in which, due mostly to government policies and other stakeholders, not only economic criteria but also other environmental issues interact. Climate change has not usually been taken into account in traditional reforestation decision-making strategies and management procedures. Moreover, there is a lack of agreement on the percentage of each one of the species in reforestation planning, which is usually calculated in a discretionary way. In this context, an effective multicriteria technique has been developed in order to improve the process of selecting species for reforestation in the Mediterranean region of Spain. A hybrid Delphi-AHP methodology is proposed, which includes a consistency analysis in order to reduce random choices. As a result, this technique provides an optimal percentage distribution of the appropriate species to be used in reforestation planning. The highest weights for the subcriteria corresponded to FR (forest fire response) and PR (pests and diseases risk), because of the increasing importance of the impact of climate change on the forest. However, CB (conservation of biodiversity) was in the third position, in line with the aim of reforestation. Therefore, the most suitable species were Quercus faginea (19.75%) and Quercus ilex (19.35%), which offer a good balance between all the factors affecting the success and viability of reforestation.
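As a reference point for the AHP part of the methodology, criterion weights are commonly obtained from a pairwise-comparison matrix via the row geometric mean, with Saaty's consistency ratio guarding against random (inconsistent) judgements. A minimal sketch with an illustrative 3 × 3 matrix (not the study's actual expert judgements):

```python
import math

# illustrative pairwise-comparison matrix (reciprocal, Saaty 1-9 scale)
A = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
n = len(A)

# priority weights: normalised row geometric means
gm = [math.prod(row) ** (1.0 / n) for row in A]
w = [g / sum(gm) for g in gm]

# principal-eigenvalue estimate: average of (A w)_i / w_i
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam = sum(Aw[i] / w[i] for i in range(n)) / n

CI = (lam - n) / (n - 1)       # consistency index
RI = 0.58                      # Saaty's random index for n = 3
CR = CI / RI                   # judgements acceptable if CR < 0.10
```

In the hybrid Delphi-AHP scheme, matrices failing the CR < 0.10 test would be returned to the expert panel for revision, which is how the "consistency analysis to reduce random choices" operates.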
Benchmarking the OLGA lower-hybrid full-wave code for a future integration with ALOHA
NASA Astrophysics Data System (ADS)
Preinhaelter, J.; Hillairet, J.; Urban, J.
2014-02-01
The ALOHA [1] code is frequently used as a standard for solving the coupling of lower hybrid grills to the plasma. To remove its limitations (a linear density profile, a homogeneous magnetic field, and fully decoupled fast and slow waves in the determination of the plasma surface admittance), we exploit the recently developed efficient full wave code OLGA [2]. There is a simple connection between these two codes: the plasma surface admittances used in ALOHA-2D can be expressed as the slowly varying parts of the coupling element integrands in OLGA, and the ALOHA coupling elements are then linear combinations of OLGA coupling elements. We developed the AOLGA module (a subset of OLGA) for ALOHA. An extensive benchmark has been performed. ALOHA admittances differ from AOLGA results mainly for N∥ in the inaccessible region, but the coupling elements differ only slightly. We compare OLGA and ALOHA for a simple 10-waveguide grill operating at 3.7 GHz and the linear density profile as used in ALOHA. Hence we can detect the pure effects of fast and slow wave coupling on grill efficiency. The effects are weak for parameters near optimum coupling and confirm the validity of the ALOHA results.
NASA Astrophysics Data System (ADS)
Yang, Wei; Hall, Trevor J.
2013-12-01
The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.
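The delay performance of the electronic module was evaluated through simulation; as a generic stand-in (the paper's traffic model and switch parameters are not specified here), a single-server FIFO queue simulation whose mean packet waiting time can be checked against the M/M/1 formula Wq = λ/(μ(μ − λ)):

```python
import random

def mm1_mean_wait(lam, mu, n=200000, seed=2):
    """Simulate n packet arrivals at a single-server FIFO queue with
    Poisson arrivals (rate lam) and exponential service (rate mu);
    return the mean time packets spend waiting for the server."""
    rng = random.Random(seed)
    t = depart = total_wait = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)      # next arrival time
        start = max(t, depart)         # wait if the server is still busy
        total_wait += start - t
        depart = start + rng.expovariate(mu)
    return total_wait / n

w = mm1_mean_wait(lam=0.5, mu=1.0)     # theory: Wq = 0.5/(1*0.5) = 1.0
```

A flow-based hybrid switch study would replace the Poisson process with bursty flow arrivals, which is precisely where simulation becomes necessary because closed-form results no longer apply.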
NASA Astrophysics Data System (ADS)
Zhuang, Hao; Zhang, Lei; Staedler, Thorsten; Jiang, Xin
2012-05-01
The nanoscale integration of SiC nanocables in a diamond thin film is achieved through a novel synthetic pathway, which combines Fe catalyst and detonation nanodiamond seeding technique in a microwave plasma chemical vapor deposition process. The obtained hybrid structures show controllable SiC nanocable fraction depending on the relative fraction of iron catalyst and diamond seeds. The SiC nanocable has a conical structure with 10 nm diameter SiC core, surrounded by SiO2 shell. The diamond crystals show high quality/crystallinity even for hybrid structures featuring an increasing SiC nanocable fraction. In the end, the growth behavior of the hybrid structure is discussed.
Fischer, J
2005-12-21
This report summarizes the results of a field verification pilot site investigation that involved the installation of a hybrid integrated active desiccant/vapor-compression rooftop heating, ventilation, and air-conditioning (HVAC) unit at an elementary school in the Atlanta Georgia area. For years, the school had experienced serious humidity and indoor air quality (IAQ) problems that had resulted in occupant complaints and microbial (mold) remediation. The outdoor air louvers of the original HVAC units had been closed in an attempt to improve humidity control within the space. The existing vapor compression variable air volume system was replaced by the integrated active desiccant rooftop (IADR) system that was described in detail in an Oak Ridge National Laboratory (ORNL) report published in 2004 (Fischer and Sand 2004). The IADR system and all space conditions have been monitored remotely for more than a year. The hybrid system was able to maintain both the space temperature and humidity as desired while delivering the outdoor air ventilation rate required by American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 62. The performance level of the IADR unit and the overall system energy efficiency was measured and found to be very high. A comprehensive IAQ investigation was completed by the Georgia Tech Research Institute before and after the system retrofit. Before-and-after data resulting from this investigation confirmed a significant improvement in IAQ, humidity control, and occupant comfort. These observations were reported by building occupants and are echoed in a letter to ORNL from the school district energy manager. The IADR system was easily retrofitted in place of the original rooftop system using a custom curb adapter. All work was completed in-house by the school's maintenance staff over one weekend. 
A subsequent cost analysis completed for the school district by the design engineer of record concluded that the IADR system being investigated was actually less expensive to install than other less-efficient options, most of which were unable to deliver the required ventilation while maintaining the desired space humidity levels.
Hybrid information privacy system: integration of chaotic neural network and RSA coding
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.
2005-03-01
Electronic mail is used worldwide, and much of it is easily hacked. In this paper, we proposed a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA algorithm with a specific chaotic neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaotic neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaotic neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolutional error-correction codes for wireless 3rd-generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, as it could affect or change the spatial-temporal keys. Since any change or modification of the chaotic typing or initial seed value of the chaotic neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaotic neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map having arbitrary negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulated the robustness and fault-tolerance properties of CNN under additive noise and pattern noise. We also implemented a private version of RSA coding and the chaos encryption process on messages.
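The keystream idea can be sketched in isolation: a logistic-map sequence generated from a secret initial value masks the message bytes, and the same seed regenerates the keystream for decryption. This is an illustrative toy only; the RSA wrapping, watermarking, error-correction codes, and the neural-network structure of the actual system are omitted, and a plain logistic-map XOR is not a secure cipher:

```python
def logistic_stream(x0, r=3.99, n=16, skip=100):
    """Byte keystream from the logistic map x <- r*x*(1-x); the secrets
    are the initial seed x0 and the 'chaotic typing' parameter r."""
    x = x0
    for _ in range(skip):              # discard the transient
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_mask(data, seed):
    ks = logistic_stream(seed, n=len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"hello chaos"
ct = xor_mask(msg, 0.12345)            # encrypt
pt = xor_mask(ct, 0.12345)             # same seed regenerates the keystream
```

The chaotic sensitivity to the seed is what makes RSA-encrypting the seed sufficient to protect the whole keystream, and it is also why the channel must deliver the seed bits without error.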
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT
Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
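The PCA/DWT combination can be sketched at toy scale: a one-level Haar DWT of each image, PCA-derived weights for the approximation bands, a max-absolute rule for the detail bands, and an inverse transform. The wavelet choice, decomposition depth, and fusion rules below are illustrative assumptions, not the paper's exact pipeline:

```python
import math

def haar2(img):
    """One-level 2-D Haar transform -> approximation LL, details LH, HL, HH."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2
            LH[i // 2][j // 2] = (a - b + c - d) / 2
            HL[i // 2][j // 2] = (a + b - c - d) / 2
            HH[i // 2][j // 2] = (a - b - c + d) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2 (perfect reconstruction)."""
    h2, w2 = len(LL), len(LL[0])
    img = [[0.0] * (2 * w2) for _ in range(2 * h2)]
    for i in range(h2):
        for j in range(w2):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = (ll + lh + hl + hh) / 2
            img[2 * i][2 * j + 1] = (ll - lh + hl - hh) / 2
            img[2 * i + 1][2 * j] = (ll + lh - hl - hh) / 2
            img[2 * i + 1][2 * j + 1] = (ll - lh - hl + hh) / 2
    return img

def pca_weights(x, y):
    """Weights from the first principal component of two flattened bands."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cxx = sum((a - mx) ** 2 for a in x) / n
    cyy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    lam = (cxx + cyy) / 2 + math.sqrt(((cxx - cyy) / 2) ** 2 + cxy ** 2)
    v1, v2 = cxy, lam - cxx            # eigenvector of the 2x2 covariance
    s = v1 + v2
    return (0.5, 0.5) if s == 0 else (v1 / s, v2 / s)

def fuse(img1, img2):
    b1, b2 = haar2(img1), haar2(img2)
    w1, w2 = pca_weights([p for r in b1[0] for p in r],
                         [p for r in b2[0] for p in r])
    LL = [[w1 * p + w2 * q for p, q in zip(r1, r2)]
          for r1, r2 in zip(b1[0], b2[0])]
    details = []                       # max-absolute rule on detail bands
    for k in (1, 2, 3):
        details.append([[p if abs(p) >= abs(q) else q for p, q in zip(r1, r2)]
                        for r1, r2 in zip(b1[k], b2[k])])
    return ihaar2(LL, *details)

img = [[1.0, 2.0, 3.0, 4.0],
       [5.0, 6.0, 7.0, 8.0],
       [9.0, 8.0, 7.0, 6.0],
       [5.0, 4.0, 3.0, 2.0]]
out = fuse(img, img)                   # fusing an image with itself returns it
```

PCA weighting preserves the dominant low-frequency energy shared by the overlapping views, while the max-absolute detail rule keeps the sharpest edges from either image, which is the mechanism behind the improved segment-ability claimed above.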
Propulsion Airframe Aeroacoustic Integration Effects for a Hybrid Wing Body Aircraft Configuration
NASA Technical Reports Server (NTRS)
Czech, Michael J.; Thomas, Russell H.; Elkoby, Ronen
2010-01-01
An extensive experimental investigation was performed to study the propulsion airframe aeroacoustic effects of a high bypass ratio engine for a hybrid wing body aircraft configuration where the engine is installed above the wing. The objective was to provide an understanding of the jet noise shielding effectiveness as a function of engine gas condition and location as well as nozzle configuration. A 4.7% scale nozzle of a bypass ratio seven engine was run at characteristic cycle points under static and forward flight conditions. The effect of the pylon and its orientation on jet noise was also studied as a function of bypass ratio and cycle condition. The addition of a pylon yielded significant spectral changes, lowering jet noise by up to 4 dB at high polar angles and increasing it by 2 to 3 dB at forward angles. In order to assess jet noise shielding, a planform representation of the airframe model, also at 4.7% scale, was traversed relative to the jet nozzle from downstream to several diameters upstream of the wing trailing edge. Installations at two fan diameters upstream of the wing trailing edge provided only limited shielding in the forward arc at high frequencies for both the axisymmetric and a conventional round nozzle with pylon. This was consistent with phased array measurements suggesting that the high frequency sources are predominantly located near the nozzle exit and, consequently, are amenable to shielding. The mid to low frequency sources were observed further downstream and shielding was insignificant. Chevrons were designed and used to impact the distribution of sources with the more aggressive design showing a significant upstream migration of the sources in the mid frequency range. Furthermore, the chevrons reduced the low frequency source levels and the typical high frequency increase due to the application of chevron nozzles was successfully shielded.
The pylon was further modified with a technology that injects air through the shelf of the pylon which was effective in reducing low frequency noise and moving jet noise sources closer to the nozzle exit. In general, shielding effectiveness varied as a function of cycle condition with the cutback condition producing higher shielding compared to sideline power. The configuration with a more strongly immersed chevron and a pylon oriented opposite to the microphones produced the largest reduction in jet noise. In addition to the jet noise source, the shielding of a broadband point noise source was documented with up to 20 dB of noise reduction at directivity angles directly under the shielding surface.
Hybrid Semiconductor-Molecular Integrated Circuits for Digital Electronics: CMOL Approach
NASA Astrophysics Data System (ADS)
Strukov, Dmitri B.
This chapter describes architectures of digital circuits including memories, general-purpose, and application-specific reconfigurable Boolean logic circuits for the prospective hybrid CMOS/nanowire/nanodevice ("CMOL") technology. The basic idea of CMOL circuits is to combine the advantages of CMOS technology (including its flexibility and high fabrication yield) with those of molecular-scale nanodevices. Two-terminal nanodevices would be naturally incorporated into a nanowire crossbar fabric, enabling very high function density at acceptable fabrication costs. In order to overcome the CMOS/nanodevice interface problem, in CMOL circuits the interface is provided by sharp-tipped pins that are distributed all over the circuit area, on top of the CMOS stack. We show that CMOL memories with a nano/CMOS pitch ratio close to 10 may be far superior to the densest semiconductor memories by providing, e.g., 1 Tbit/cm^2 density even for the plausible defect fraction of 2%. Even greater defect tolerance (more than 20% for 99% circuit yield) can be achieved in both types of programmable Boolean logic CMOL circuits. In such circuits, two-terminal nanodevices provide programmable diode functionality for logic circuit operation, and allow circuit mapping and reconfiguration around defective nanodevices, while the CMOS subsystem is used for signal restoration and latching. Using custom-developed design automation tools we have successfully mapped the well-known Toronto 20 benchmark circuits onto reconfigurable general-purpose logic fabric ("CMOL FPGA") and estimated their performance. The results have shown that, in addition to high defect tolerance, CMOL FPGA circuits may have extremely high density (more than two orders of magnitude higher than that of a usual CMOS FPGA with the same CMOS design rules) while operating at higher speed at acceptable power consumption.
Finally, our estimates indicate that reconfigurable application-specific ("CMOL DSP") circuits may increase the speed of low-level image processing tasks by more than two orders of magnitude as compared to the fastest CMOS DSP chips implemented with the same CMOS design rules at the same area and power consumption.
Design and characterization of a hybrid-integrated MEMS scanning grating spectrometer
NASA Astrophysics Data System (ADS)
Grüger, Heinrich; Knobbe, Jens; Pügner, Tino; Schenk, Harald
2013-03-01
Grating spectrometers, like the well-established Czerny-Turner design, are based on an optical layout consisting of several components: typically at least two slits, two mirrors, the grating stage, and a detector are required. There has been much work to reduce this effort; setups using only one mirror (Ebert-Fastie), the replacement of the entrance slit by thin optical fibers, and integrated electronic detector arrays instead of a moving grating with an exit slit and single detector have all been applied. Reduced effort comes with performance limitations: either the optical resolution or the throughput is affected, or the use of the system is limited by the availability of detector arrays at a reasonable price. Components in micro-opto-electro-mechanical systems (MOEMS) technology, and spectroscopic systems based thereon, have been developed to improve this situation. Miniaturized scanning gratings fabricated on bonded silicon-on-insulator (BSOI) wafers were used to design grating spectrometers for the near infrared requiring only single detectors. Discrete components offer flexibility but also require adjustment of two mirrors, the grating stage, the fiber mount, and the detector with its slit, and optionally a second slit in the entrance area. Further development leads towards the integration of the slits into the MOEMS chip, and thus less adjustment effort. Flexibility may be reduced, as changes to the optical design or grating spacing would require a new chip with its own set of masks. Nevertheless, if extreme miniaturization is desired, this approach seems promising; besides this, high-volume production may be possible at a comparably low price. A new chip was developed offering the grating, two slits, and a cavity for the detector chip. The optical design was adjusted to a planar arrangement of grating and slits. A detector buried in a chip cavity required a new mounting strategy.
Other optical components were optimized and fabricated; the system was then assembled with electronics, and the software was adjusted to the new design, including some new features such as integrated position sensors. A first test of the systems to verify the function of all components is presented. Further work will aim at improved performance, such as higher resolution and a better signal-to-noise ratio.
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.
2013-12-01
Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets also presents a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3-years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.
Robic, A; Riquet, J; Yerle, M; Milan, D; Lahbib-Mansais, Y; Dubut-Fontana, C; Gellin, J
1996-06-01
Recently two main genetic maps [Rohrer et al. Genetics 136, 231 (1994); Archibald et al. Mamm. Genome 6, 157 (1995)] and a cytogenetic map [Yerle et al. Mamm. Genome 6, 175 (1995)] for the porcine genome were reported. As only very few microsatellites are located on the cytogenetic map, it is important to strengthen the links between the genetic and cytogenetic maps. This document describes the regional mapping of 100 genetic markers with a somatic cell hybrid panel. Among the markers, 91 correspond to new localizations. Our study enabled the localization of 14 new markers found on both maps, of 54 found on the USDA map, and of 23 found on the PiGMaP map. Now 21% and 43% of the markers on the USDA and PiGMaP linkage maps, respectively, are physically mapped. This new cytogenetic information was then integrated within the framework of each genetic map. The cytogenetic orientation of the USDA linkage maps for Chromosomes (Chrs) 3, 8, 9, and 16 and of the PiGMaP map for Chr 8 was determined. USDA and PiGMaP linkage maps are now oriented for all chromosomes, except for Chrs 17 and 18. Moreover, the linkage group "R" from the USDA linkage map was assigned to Chr 6. PMID:8662227
Darne, Chinmay D.; Lu, Yujie; Tan, I-Chih; Zhu, Banghe; Rasmussen, John C.; Smith, Anne M.; Yan, Shikui; Sevick-Muraca, Eva M
2012-01-01
The work presented herein describes the system design and performance evaluation of a miniaturized near-infrared fluorescence (NIRF) frequency-domain photon migration (FDPM) system with non-contact excitation and homodyne detection capability for small animal fluorescence tomography. The FDPM system was developed specifically for incorporation into a Siemens microPET/CT commercial scanner for hybrid small animal imaging, but could be adapted to other systems. Operating at 100 MHz, the system noise was minimized and the associated amplitude and phase errors were characterized to be ±0.7% and ±0.3°, respectively. To demonstrate the tomographic ability, a commercial mouse-shaped phantom with a 50 µM IRDye800CW and 68Ga containing inclusion was used to associate PET and NIRF tomography. 3-D mesh generation and anatomical referencing were accomplished through CT. A simplified spherical harmonics approximation (SP3) algorithm, for efficient prediction of light propagation in small animals, was tailored to incorporate the FDPM approach. Finally, PET-NIRF target co-localization accuracy was analyzed in vivo with a dual-labeled imaging agent targeting orthotopic growth of human prostate cancer. The results obtained validate the integration of the time-dependent fluorescence tomography system within a commercial microPET/CT scanner for multimodality small animal imaging. PMID:23171509
Darne, Chinmay D; Lu, Yujie; Tan, I-Chih; Zhu, Banghe; Rasmussen, John C; Smith, Anne M; Yan, Shikui; Sevick-Muraca, Eva M
2012-12-21
The work presented herein describes the system design and performance evaluation of a miniaturized near-infrared fluorescence (NIRF) frequency-domain photon migration (FDPM) system with non-contact excitation and homodyne detection capability for small animal fluorescence tomography. The FDPM system was developed specifically for incorporation into a Siemens micro positron emission tomography/computed tomography (microPET/CT) commercial scanner for hybrid small animal imaging, but could be adapted to other systems. Operating at 100 MHz, the system noise was minimized and the associated amplitude and phase errors were characterized to be ±0.7% and ±0.3°, respectively. To demonstrate the tomographic ability, a commercial mouse-shaped phantom with a 50 µM IRDye800CW and 68Ga containing inclusion was used to associate PET and NIRF tomography. Three-dimensional mesh generation and anatomical referencing were accomplished through CT. A third-order simplified spherical harmonics approximation (SP3) algorithm, for efficient prediction of light propagation in small animals, was tailored to incorporate the FDPM approach. Finally, the PET-NIRF target co-localization accuracy was analyzed in vivo with a dual-labeled imaging agent targeting orthotopic growth of human prostate cancer. The obtained results validate the integration of the time-dependent fluorescence tomography system within a commercial microPET/CT scanner for multimodality small animal imaging. PMID:23171509
NASA Astrophysics Data System (ADS)
Darne, Chinmay D.; Lu, Yujie; Tan, I.-Chih; Zhu, Banghe; Rasmussen, John C.; Smith, Anne M.; Yan, Shikui; Sevick-Muraca, Eva M.
2012-12-01
The work presented herein describes the system design and performance evaluation of a miniaturized near-infrared fluorescence (NIRF) frequency-domain photon migration (FDPM) system with non-contact excitation and homodyne detection capability for small animal fluorescence tomography. The FDPM system was developed specifically for incorporation into a Siemens micro positron emission tomography/computed tomography (microPET/CT) commercial scanner for hybrid small animal imaging, but could be adapted to other systems. Operating at 100 MHz, the system noise was minimized and the associated amplitude and phase errors were characterized to be ±0.7% and ±0.3°, respectively. To demonstrate the tomographic ability, a commercial mouse-shaped phantom with a 50 µM IRDye800CW and 68Ga containing inclusion was used to associate PET and NIRF tomography. Three-dimensional mesh generation and anatomical referencing were accomplished through CT. A third-order simplified spherical harmonics approximation (SP3) algorithm, for efficient prediction of light propagation in small animals, was tailored to incorporate the FDPM approach. Finally, the PET-NIRF target co-localization accuracy was analyzed in vivo with a dual-labeled imaging agent targeting orthotopic growth of human prostate cancer. The obtained results validate the integration of the time-dependent fluorescence tomography system within a commercial microPET/CT scanner for multimodality small animal imaging.
NASA Astrophysics Data System (ADS)
Jiang, Zhaoshuo; Kim, Sung Jig; Plude, Shelley; Christenson, Richard
2013-10-01
Magneto-rheological (MR) fluid dampers can be used to reduce the traffic-induced vibration in highway bridges and protect critical structural components from fatigue. Experimental verification is needed to verify the applicability of the MR dampers for this purpose. Real-time hybrid simulation (RTHS), where the MR dampers are physically tested and dynamically linked to a numerical model of the highway bridge and truck traffic, provides an efficient and effective means to experimentally examine the efficacy of MR dampers for fatigue protection of highway bridges. In this paper a complex highway bridge model with 263,178 degrees-of-freedom under truck loading is tested using the proposed convolution integral (CI) method of RTHS for a semiactive structural control strategy employing two large-scale 200 kN MR dampers. The formulation of RTHS using the CI method is first presented, followed by details of the various components in the RTHS and a description of the implementation of the CI method for this particular test. The experimental results confirm the practicability of the CI method for conducting RTHS of complex systems.
Zhang, Xingyu; Subbaraman, Harish; Luo, Jingdong; Jen, Alex K -Y; Chung, Chi-jui; Yan, Hai; Pan, Zeyu; Nelson, Robert L; Chen, Ray T
2015-01-01
Silicon-organic hybrid integrated devices have emerging applications ranging from high-speed optical interconnects to photonic electromagnetic-field sensors. Silicon slot photonic crystal waveguides (PCWs) filled with electro-optic (EO) polymers combine the slow-light effect in PCWs with the high polarizability of EO polymers, which promises the realization of high-performance optical modulators. In this paper, a broadband, power-efficient, low-dispersion, and compact optical modulator based on an EO polymer filled silicon slot PCW is presented. A small voltage-length product of Vπ·L = 0.282 V·mm is achieved, corresponding to an unprecedented record-high effective in-device EO coefficient (r33) of 1230 pm/V. Assisted by a backside gate voltage, the modulation response up to 50 GHz is observed, with a 3-dB bandwidth of 15 GHz, and the estimated energy consumption is 94.4 fJ/bit at 10 Gbit/s. Furthermore, lattice-shifted PCWs are utilized to enhance the optical bandwidth by a factor of ~10× over other modulators bas...
NASA Astrophysics Data System (ADS)
Mroczkiewicz, Pawel
The necessity of integrating the information systems and office software used in organizations has a long history. The beginnings of this kind of solution reach back to the old generation of network protocols called EDI (Electronic Data Interchange) and the EDIFACT standard, which was initiated in 1988 and has dynamically evolved ever since (S. Michalski, M. Suskiewicz, 1995). The protocol was usually used for converting documents into the native formats processed by applications. This caused problems with binary files and, furthermore, the communication mechanisms had to be modified each time new documents or applications were added. Compared with the previously used communication mechanisms, EDI was a great step forward, as it was the first large-scale attempt to define standards of data interchange between applications in business transactions (V. Leyland, 1995, p. 47).
NASA Astrophysics Data System (ADS)
Sandner, Thilo; Baulig, Claudia; Grasshoff, Thomas; Wildenhain, Michael; Schwarzenberg, Markus; Dahlmann, Hans-Georg; Schwarzer, Stefan
2015-03-01
This paper presents a large aperture micro scanning mirror (MSM) array especially developed for the novel 3D-laser camera Fovea3D. This 3D-camera uses a pulsed ToF technique with a 1 MVoxel distance measuring rate and targets a large measurement range of 30…100 m and a FOV of 120°x60° at video-like frame rates. To guarantee a large reception aperture of ≥20 mm, a large FOV and a 3200 Hz bi-directional scanning frequency at the same time, a hybrid assembled MSM array was developed consisting of 22 reception mirrors and a separate sending mirror. A hybrid assembly of frequency-selected scanner elements and driving in parametric resonance were chosen to enable a fully synchronized operation of all scanner elements. For position feedback, piezo-resistive position sensors are integrated on each MEMS chip. The paper discusses details of the MEMS system integration including the synchronized operation of multiple scanning elements.
NASA Astrophysics Data System (ADS)
Joosten, A.; Bochud, F.; Moeckli, R.
2014-08-01
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially considering the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which then translates into inconsistent conclusions as to which technique carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated even though the magnitude of this reduction varied substantially with the different approaches investigated.
Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy) as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
Perkins, James Michael, 1978-
2007-01-01
A new heterogeneous integration technique has been developed and demonstrated to integrate vertical cavity surface emitting lasers (VCSELs) on silicon CMOS integrated circuits for optical interconnect applications. Individual ...
Hybrid & Hydrogen Vehicle Research Laboratory
Lee, Dongwon
...in energy storage systems. Integrated Vehicle Stability System Modeling of Hybrid Vehicles demonstrated the feasibility of integrating ABS, TCS, and Active Yaw Control into hybrid vehicle platforms and extended... (Hybrid & Hydrogen Vehicle Research Laboratory, www.vss.psu.edu/hhvrl, Joel R. Anstrom, Director)
Discrete Diffusion Monte Carlo for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratories, Albuquerque.
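The diffusive limit that DDMC exploits can be illustrated with a toy random walk. The sketch below is illustrative only, not the iSNB-DDMC algorithm: unit lattice hops (diffusion coefficient D = 1/2 in lattice units) reproduce the diffusive mean-square displacement ⟨x²⟩ = 2Dt that a diffusion solver predicts, which is what makes a single diffusion-like hop a valid surrogate for many small transport steps in an optically thick cell.

```python
import random

def ddmc_walk_msd(n_particles=5000, n_steps=100, seed=7):
    """Toy discrete-diffusion random walk: each particle makes unit hops left
    or right with equal probability, as a stand-in for DDMC cell-to-cell hops
    in an optically thick region.  Returns the mean-square displacement, which
    approaches 2*D*t = n_steps for D = 1/2 in lattice units."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_particles):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        total += x * x
    return total / n_particles
```

With 5000 particles and 100 steps, the estimate lands close to the theoretical value of 100.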
Fischer, J
2005-05-06
This report summarizes the results of a research and development (R&D) program to design and optimize an active desiccant-vapor compression hybrid rooftop system. The primary objective was to combine the strengths of both technologies to produce a compact, high-performing, energy-efficient system that could accommodate any percentage of outdoor air and deliver essentially any required combination of temperature and humidity, or sensible heat ratio (SHR). In doing so, such a product would address the significant challenges imposed on the performance capabilities of conventional packaged rooftop equipment by standards 62 and 90.1 of the American Society of Heating, Refrigerating and Air-Conditioning Engineers. The body of work completed as part of this program built upon previous R&D efforts supported by the U.S. Department of Energy and summarized by the Phase 3b report ''Active Desiccant Dehumidification Module Integration with Rooftop Packaged HVAC Units'' (Fischer and Sand 2002), in addition to Fischer, Hallstrom, and Sand 2000; Fischer 2000; and Fischer and Sand 2004. All initial design objectives established for this development program were successfully achieved. The performance flexibility desired was accomplished by a down-sized active desiccant wheel that processes only a portion of the supply airflow, which is pre-conditioned by a novel vapor compression cycle. Variable-speed compressors are used to deliver the capacity control required by a system handling a high percentage of outdoor air. An integrated direct digital control system allows for control capabilities not generally offered by conventional packaged rooftop systems. A 3000-cfm prototype system was constructed and tested in the SEMCO engineering test laboratory in Columbia, MO, and was found to operate in an energy-efficient fashion relative to more conventional systems. 
Most important, the system offered the capability to independently control the supply air temperature and humidity content to provide individual sensible and latent loads required by an occupied space without over-cooling and reheating air. The product was developed using a housing construction similar to that of a conventional packaged rooftop unit. The resulting integrated active desiccant rooftop (IADR) is similar in size to a currently available conventional rooftop unit sized to provide an equivalent total cooling capacity. Unlike a conventional rooftop unit, the IADR can be operated as a dedicated outdoor air system processing 100% outdoor air, as well as a total conditioning system capable of handling any ratio of return air to outdoor air. As part of this R&D program, a detailed investigation compared the first cost and operating cost of the IADR with costs for a conventional packaged approach for an office building located in Jefferson City, MO. The results of this comparison suggest that the IADR approach, once commercialized, could be cost-competitive with existing technology--exhibiting a one-year to two-year payback period--while simultaneously offering improved humidity control, indoor air quality, and energy efficiency.
Energy Science and Technology Software Center (ESTSC)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
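The workflow the abstract lists (particle creation, tracking, tallying, destruction) can be sketched in a few lines. This is a hypothetical miniature, not the actual MCB code; the cross section, absorption probability, and slab length are invented for illustration.

```python
import math
import random

def mcb_sketch(n_particles=10000, sigma_t=1.0, absorb_prob=0.3,
               slab_len=5.0, seed=1):
    """Toy Monte Carlo transport: create particles, track them through a 1-D
    slab, tally absorptions and leakage, then destroy them.  Flight distances
    are sampled from the exponential free-path distribution; at each collision
    the particle is absorbed with probability absorb_prob, else it scatters
    forward and continues."""
    rng = random.Random(seed)
    tally = {"absorbed": 0, "leaked": 0}
    for _ in range(n_particles):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / sigma_t  # distance to collision
            if x >= slab_len:
                tally["leaked"] += 1
                break
            if rng.random() < absorb_prob:
                tally["absorbed"] += 1
                break
    return tally
```

For these parameters the leakage fraction should sit near exp(-absorb_prob·sigma_t·slab_len) ≈ 0.22, since absorption thins the collision process at rate absorb_prob·sigma_t.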
NASA Astrophysics Data System (ADS)
Manipon, G. J. M.; Hua, H.; Owen, S. E.; Sacco, G. F.; Agram, P. S.; Moore, A. W.; Yun, S. H.; Fielding, E. J.; Lundgren, P.; Rosen, P. A.; Webb, F.; Liu, Z.; Smith, A. T.; Wilson, B. D.; Simons, M.; Poland, M. P.; Cervelli, P. F.
2014-12-01
The Hybrid Science Data System (HySDS) scalably powers the ingestion, metadata extraction, cataloging, high-volume data processing, and publication of the geodetic data products for the Advanced Rapid Imaging & Analysis for Monitoring Hazards (ARIA-MH) project at JPL. HySDS uses a heterogeneous set of worker nodes from private & public clouds as well as virtual & bare-metal machines to perform every aspect of the traditional science data system. For our science data users, the forefront of HySDS is the facet search interface, FacetView, which allows them to browse, filter, and access the published products. Users are able to explore the collection of product metadata information and apply multiple filters to constrain the result set down to their particular interests. It allows them to download these faceted products for further analysis and generation of derived products. However, we have also employed a novel approach to faceting where it is also used to apply constraints for custom monitoring of products, system resources, and triggers for automated data processing. The power of the facet search interface is well documented across various domains, but its usefulness is rooted in the metadata that already exists in the data system. However, user needs usually extend beyond what is currently present in the data system. A user interested in synthetic aperture radar (SAR) data over Kilauea will download them from FacetView but would also want email notification of future incoming scenes. The user may even want that data pushed to a remote workstation for automated processing. Better still, these future products could trigger HySDS to run the user's analysis on its array of worker nodes, on behalf of the user, and ingest the resulting derived products.
We will present our findings in integrating an ancillary, user-defined, system-driven processing system for HySDS that allows users to define faceted rules based on facet constraints and triggers actions when new SAR data products arrive that match the constraints. We will discuss use cases where users have defined rules for the automated generation of InSAR derived products: interferograms for California and Kilauea, time-series analyses, and damage proxy maps. These findings are relevant for science data system development of the proposed NASA-ISRO SAR mission.
Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rate forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729
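The linear-plus-nonlinear decomposition the abstract describes can be sketched without the actual SVR/ARIMA/PSO machinery. Below, a least-squares slope stands in for the linear (ARIMA) stage and a k-nearest-neighbour regressor stands in for SVR on the residuals; the signal, grid, and parameters are all made up for illustration.

```python
import math

def fit_slope(xs, ys):
    """Least-squares slope through the origin: the linear stage
    (a stand-in for the ARIMA component)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def knn(train, x, k=2):
    """k-nearest-neighbour regression on (x, residual) pairs
    (a simple nonlinear stand-in for SVR)."""
    near = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in near) / k

# Synthetic signal with both a linear and a nonlinear component.
f = lambda x: 0.7 * x + 0.3 * math.sin(3.0 * x)
train_x = [i * 0.05 - 2.0 for i in range(81)]          # grid on [-2, 2]
slope = fit_slope(train_x, [f(x) for x in train_x])    # linear stage
resid = [(x, f(x) - slope * x) for x in train_x]       # what the linear stage missed

# Evaluate on held-out midpoints: linear-only versus hybrid.
test_x = [x + 0.025 for x in train_x[:-1]]
lin_err = sum(abs(f(x) - slope * x) for x in test_x) / len(test_x)
hyb_err = sum(abs(f(x) - (slope * x + knn(resid, x))) for x in test_x) / len(test_x)
```

The hybrid error is far below the linear-only error because the nonlinear stage absorbs exactly the structure the linear stage cannot represent, which is the premise of the SVR+ARIMA combination.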
Cinget, Benjamin; de Lafontaine, Guillaume; Gérardi, Sébastien; Bousquet, Jean
2015-06-01
Secondary contact between closely related taxa routinely occurs during postglacial migrations. After initial contact, the location of hybrid zones may shift geographically or remain spatially stable over time in response to various selective pressures or neutral processes. Studying the extent and direction of introgression using markers having contrasted levels of gene flow can help unravel the historical dynamics of hybrid zones. Thanks to their contrasted maternal and paternal inheritance, resulting in different levels of gene flow for mitochondrial and chloroplast DNA (mtDNA and cpDNA), the Pinaceae stand out as a relevant biological model for this purpose. The objective of the study was to assess whether the hybrid zone between Abies balsamea and Abies lasiocarpa (two largely distributed Pinaceae) has moved or remained stable over time by analysing the distribution of cytoplasmic DNA variation as well as published palaeobotanical data. Interspecific gene flow was higher for cpDNA than mtDNA markers; hence, the geographic distribution of mitotypes was more congruent with species distributions than chlorotypes. This genetic signature was contrary to expectations under a moving hybrid zone scenario, as well as empirical observations in other conifers. Genetic evidence for this rare instance of stable hybrid zone was corroborated by the colonization chronology derived from published fossil data, indicating that the two fir species initially came into contact in the area corresponding to the current sympatric zone 11 kyr ago. While an explanatory analysis suggested the putative influence of various environmental factors on the relative abundance of cytoplasmic genome combinations, further research appears necessary to assess the role of both demographic history and selective factors in driving the dynamics of hybrid zones. PMID:25865063
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
Onar, Omer C
2012-01-01
This manuscript focuses on a novel actively controlled hybrid magnetic battery/ultracapacitor based energy storage system (ESS) for vehicular propulsion systems. A stand-alone battery system might not be sufficient to satisfy peak power demand and transient load variations in hybrid and plug-in hybrid electric vehicles (HEV, PHEV). An active battery/ultracapacitor hybrid ESS provides a better solution in terms of efficient power management and control flexibility. Moreover, the voltage of the battery pack can be selected to be different than that of the ultracapacitor, which will result in flexibility of design as well as cost and size reduction of the battery pack. In addition, the ultracapacitor bank can supply or recapture a large burst of power and it can be used with high C-rates. Hence, the battery is not required to supply peak power and sharp power variations, so the stress on the battery will be reduced and the battery lifetime will be increased. Utilizing the ultracapacitor results in effective capturing of the braking energy, especially in sudden braking conditions.
Aquino Perez, Gildardo
2009-05-15
...The primary analysis of the karyotype and ideogram construction was based on banding and Fluorescence In Situ Hybridization (FISH) for rDNA detection. FISH confirmed two locations for the NOR on telomeric regions of chromosomes 6 and 12 plus an additional less...
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
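The idea behind the Fission Matrix Acceleration method, tallying a small fission matrix during the expensive transport cycles and then solving its cheap eigenproblem to replace the slowly converging source, can be sketched on a deterministic toy problem. The 2x2 matrix, iteration counts, and tolerances below are invented for illustration; in the real method the matrix would be estimated from Monte Carlo tallies rather than known exactly.

```python
def step(F, v):
    """One 'cycle': apply the fission operator to the source, return the
    renormalized new source and the Rayleigh-quotient k estimate."""
    w = [F[0][0] * v[0] + F[0][1] * v[1],
         F[1][0] * v[0] + F[1][1] * v[1]]
    k = (v[0] * w[0] + v[1] * w[1]) / (v[0] * v[0] + v[1] * v[1])
    n = (w[0] ** 2 + w[1] ** 2) ** 0.5
    return [w[0] / n, w[1] / n], k

# Toy fission matrix with eigenvalues 1.0 and 0.98 (dominance ratio 0.98),
# for which plain source iteration converges very slowly.
F = [[0.99, 0.01], [0.01, 0.99]]

v, k_plain = [1.0, 0.0], 0.0
for _ in range(10):            # 10 "expensive" transport cycles
    v, k_plain = step(F, v)

# Acceleration: the tallied coarse matrix (here F itself, known exactly)
# is iterated cheaply many more times to converge the source and k.
v_acc, k_acc = list(v), k_plain
for _ in range(400):           # cheap coarse-matrix iterations
    v_acc, k_acc = step(F, v_acc)
```

After ten expensive cycles the plain k estimate is still off by nearly 1%, while the cheaply converged coarse eigenproblem recovers the dominant eigenvalue k = 1 essentially exactly, which is the source-convergence gain the thesis exploits.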
Kim, Jaiseung
2011-04-01
We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f_NL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f_NL and cosmological parameters so that the uncertainties of cosmological parameters can properly propagate into the f_NL estimation. Investigating the parameter likelihoods deduced from MCMC samples, we find slight deviation from Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f_NL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.
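The abstract's point, reading the confidence interval off the sampled likelihood itself instead of a Fisher-matrix (Gaussian) approximation, can be illustrated with a minimal Metropolis sampler on a one-dimensional toy posterior; nothing here uses WMAP data, and the target and proposal scale are invented.

```python
import math
import random

def metropolis(logpost, x0, n, step=1.0, seed=3):
    """Plain Metropolis MCMC with a Gaussian random-walk proposal."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = logpost(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):  # accept/reject
            x, lp = y, lq
        chain.append(x)
    return chain

# Toy 1-D posterior: standard normal log-density.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
chain = chain[2000:]                       # discard burn-in
chain.sort()
ci_lo = chain[int(0.025 * len(chain))]     # 95% interval read directly
ci_hi = chain[int(0.975 * len(chain))]     # from the samples, no Fisher matrix
mean = sum(chain) / len(chain)
```

For a genuinely Gaussian likelihood the sample quantiles reproduce the Fisher interval (about ±1.96 here); when the sampled likelihood deviates from Gaussian shape, as in the abstract, the quantile interval is the one to trust.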
VERIFICATION OF THE SHIFT MONTE CARLO CODE
Sly, Nicholas; Mervin, Brenden; Mosher, Scott W; Evans, Thomas M; Wagner, John C; Maldonado, G. Ivan
2012-01-01
Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a fully-functional parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP5.
Multiscale Monte Carlo equilibration: pure Yang-Mills theory
Michael G. Endres; Richard C. Brower; William Detmold; Kostas Orginos; Andrew V. Pochinsky
2015-10-15
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
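The "hybrid Monte Carlo evolution" referred to here proposes updates by integrating fictitious Hamiltonian dynamics with a leapfrog scheme and Metropolis-accepting the endpoint. A minimal one-variable sketch follows; the step size, trajectory length, and standard-normal target are illustrative choices, not a gauge-theory implementation.

```python
import math
import random

def hmc_sample(grad_u, u, x0, n, eps=0.2, n_leap=10, seed=5):
    """Hybrid (Hamiltonian) Monte Carlo: draw a fresh Gaussian momentum,
    leapfrog-integrate the dynamics, then Metropolis-accept the endpoint
    using the change in total energy H = U + p^2/2."""
    rng = random.Random(seed)
    x, chain, accepted = x0, [], 0
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)
        h_old = u(x) + 0.5 * p * p
        xn, pn = x, p - 0.5 * eps * grad_u(x)    # initial half-step kick
        for i in range(n_leap):
            xn += eps * pn                        # full-step position drift
            if i < n_leap - 1:
                pn -= eps * grad_u(xn)            # full-step momentum kick
        pn -= 0.5 * eps * grad_u(xn)              # final half-step kick
        h_new = u(xn) + 0.5 * pn * pn
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x, accepted = xn, accepted + 1
        chain.append(x)
    return chain, accepted / n

# Toy target: standard normal, U(x) = x^2 / 2, grad U = x.
chain, acc = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, 0.0, 5000)
```

Because the leapfrog integrator nearly conserves H, the acceptance rate stays high even for long trajectories; this is the property that the multiscale rethermalization scheme relies on when evolving the refined gauge configurations.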
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Michael G. Endres; Richard C. Brower; William Detmold; Kostas Orginos; Andrew V. Pochinsky
2015-12-30
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Dong, Haiyan; Wu, Zai-Sheng; Xu, Jianguo; Ma, Ji; Zhang, Huijuan; Wang, Jie; Shen, Weiyu; Xie, Jingjing; Jia, Lee
2015-10-15
Molecular beacon (MB) is widely explored as a signaling probe in powerful biosensing systems, for example, enzyme-assisted strand displacement amplification (SDA)-based systems. Existing polymerization-based amplification systems are often composed of a recognition element, primer, template and fluorescence reporter. To develop a new MB sensing system and simplify the signal amplification design, we herein propose a multifunctional integrated MB (MI-MB) for the polymerization amplification detection of target DNA by introducing a G-rich fragment into the loop of the MB, without using any exogenous auxiliary oligonucleotide probe. Utilizing only one MI-MB probe, the p53 target gene could trigger cycles of hybridization/polymerization/displacement, resulting in amplification of the target hybridization event. Thus, the p53 gene can be detected down to 5 × 10(-10) M, with a linear response range from 5 × 10(-10) M to 4 × 10(-7) M. Using the MI-MB, we could readily discriminate the point mutation-containing p53 from the wild-type gene. As a proof-of-concept study, owing to its simplicity and multifunctionality, including recognition, replication, amplification and signaling, the MI-MB exhibits great potential for the development of biosensors for various biomedical applications, especially early cancer diagnosis. PMID:25982726
Mineralogy of Libya Montes, Mars
NASA Astrophysics Data System (ADS)
Perry, K. A.; Bishop, J. L.; McKeown, N. K.
2009-12-01
Observations by CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) have revealed a range of minerals in Libya Montes including olivine, pyroxene, and phyllosilicate [1]. Here we extend our spectral analyses of CRISM images in Libya Montes to identify carbonates. We have also performed detailed characterization of the spectral signature of the phyllosilicate- and carbonate-bearing outcrops in order to constrain the types of phyllosilicates and carbonates present. Phyllosilicate-bearing rocks in Libya Montes have spectral bands at 1.42, 2.30 and 2.39 µm, consistent with Fe- and Mg- bearing smectites. The mixture of Fe and Mg in Libya Montes may be within the clay mineral structure or within the CRISM pixel. Because the pixels have 18 meter/pixel spatial resolution, it is possible that the bands observed are due to the mixing of nontronite and saponite rather than a smectite with both Fe and Mg. Carbonates found in Libya Montes are similar to those found in Nili Fossae [2]. The carbonates have bands centered at 2.30 and 2.52 µm. Libya Montes carbonates most closely resemble the Mg-carbonate, magnesite. Olivine spectra are seen throughout Libya Montes, characterized by a positive slope from 1.2-1.8 µm. Large outcrops of olivine are relatively rare on Mars [3]. This implies that fresh bedrock has been recently exposed because olivine weathers readily compared to pyroxene and feldspar. Pyroxene in Libya Montes resembles an Fe-bearing orthopyroxene with a broad band centered at 1.82 µm. The lowermost unit identified in Libya Montes is a clay-bearing unit. Overlying this is a carbonate-bearing unit with a clear unit division visible in at least one CRISM image. An olivine-bearing unit unconformably overlies these two units and may represent a drape related to the Isidis impact, as suggested for Nili Fossae [2]. 
However, it appears that the carbonate in Libya Montes is an integral part of the rock underlying the olivine-bearing unit rather than an alteration product, contrasting with the proposed stratigraphy for Nili Fossae. The uppermost unit identified is a pyroxene-bearing unit. Some spectra of clays and carbonates in this region present a slope from 1.2 to 1.8 µm similar to olivine. Laboratory experiments were conducted in an attempt to understand the spectral behavior of mixtures of olivine, nontronite, and magnesite. The characteristic olivine slope is evident in the spectra of any mixture, even with as little as 10% olivine. In ternary mixtures, the magnesite signature is almost completely overshadowed by the nontronite and olivine characteristics. The discovery of clays and carbonates in Libya Montes indicates that there was an aqueous environment with neutral pH in the past. In addition, the water must be relatively still and deep for the small particles to settle out and form these minerals. On Earth this would be a still lake or deep ocean, and perhaps a similar environment was present in Mars’ past. References [1] Bishop, J. L., et al. (2007) 7th Int'l Mars Conf. [2] Ehlmann, B. L., et al. (2008) Science, 322, 1828. [3] Mustard, J. F., et al. (2008) Nature, 454, 07305.
NASA Astrophysics Data System (ADS)
Liu, Siqi; Weng, Bo; Tang, Zi-Rong; Xu, Yi-Jun
2014-12-01
A ternary hybrid structure of one-dimensional (1D) silver nanowire-doped reduced graphene oxide (RGO) integrated with a CdS nanowire (NW) network has been fabricated via a simple electrostatic self-assembly method followed by a hydrothermal reduction process. The electrical conductivity of RGO can be significantly enhanced by opening up new conduction channels by bridging the high resistance grain-boundaries (HGBs) with 1D Ag nanowires, which results in a prolonged lifetime of photo-generated charge carriers excited from the CdS NW network, thus making Ag NW-RGO an efficient co-catalyst with the CdS NW network toward artificial photosynthesis. Electronic supplementary information (ESI) available: Experimental details, photographs of the experimental setups for photocatalytic activity testing, SEM images of Ag NWs and CdS NWs, Zeta potential, Raman spectra, DRS spectra, PL spectra and PL decay time evolution, and photocatalytic performances of samples for reduction of 4-NA and recycling test. See DOI: 10.1039/c4nr04229h
NASA Astrophysics Data System (ADS)
Erbis, Vadim; Hegger, Christian; Güth, Dirk; Maas, Jürgen
2015-04-01
Drag losses in the powertrain are a serious deficiency for any energy-efficient application, especially for hybrid electric vehicles. A promising approach for fulfilling requirements such as efficiency, wear, safety and dynamics is an innovative clutch design for the transmission of power based on magnetorheological fluids (MRF). MRF are smart fluids whose apparent viscosity changes significantly under the influence of a magnetic field, offering fast switching times and smooth torque control in the powertrain. In this paper, a novel clutch concept is investigated that facilitates the controlled movement of the MRF from an active torque-transmitting region into an inactive region of the shear gap. This concept enables complete disengagement of the fluid engaging surfaces so that viscous drag torque can be eliminated. A simulation-based design approach for such MRF-based clutches is used to design the required magnetic excitation systems, enabling a well-defined safety behavior through fluid control. Based on this approach, an MRF-based clutch is developed in detail which provides a loss-reduced alternative to conventional disengagement devices in the powertrain. The presented MRF-based clutch enables the investigation of different systems in one design by changing the magnetic excitation. In particular, different possibilities for the fail-safe behavior of the MRF-based clutch are considered to ensure a well-defined condition in electric or hybrid powertrains in case of a system failure.
Grimes, Joshua; Celler, Anna
2014-09-15
Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%.
Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90) agreeing within ±3%, on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.
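The voxel S value technique referenced above amounts to a discrete convolution of the time-integrated activity map with a radionuclide-specific dose kernel. The following is a minimal 1-D sketch of that idea, with a hypothetical kernel and activity map chosen purely for illustration (the study itself convolves 3-D voxel maps with tabulated S values):

```python
def voxel_dose(tiac, s_kernel):
    """Voxel-S-value-style dose estimate: discrete convolution of a
    time-integrated activity (TIAC) map with a dose kernel,
    D[i] = sum_j TIAC[j] * S[i - j]. Shown in 1-D for brevity; the
    real method applies this to 3-D voxel maps."""
    half = len(s_kernel) // 2
    dose = [0.0] * len(tiac)
    for i in range(len(tiac)):
        for j, activity in enumerate(tiac):
            k = i - j + half
            if 0 <= k < len(s_kernel):
                dose[i] += activity * s_kernel[k]
    return dose

# hypothetical kernel: dominant self-irradiation, weak dose to neighbours
s = [0.05, 1.0, 0.05]
print(voxel_dose([0.0, 2.0, 0.0, 0.0], s))  # dose concentrates at the source voxel
```

This also illustrates why total organ doses agree across methods while cross-organ doses differ: most deposited dose comes from the large self-irradiation term at the kernel center.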
NASA Astrophysics Data System (ADS)
Li, Hongbo; Wu, Zai-Sheng; Shen, Zhifa; Shen, Guoli; Yu, Ruqin
2014-01-01
An interesting discovery is reported in that G-rich hairpin-based recognition probes can self-assemble into a nano-architecture based on the integration of an intermolecular G-quadruplex structure with the sticky-end pairing effect in the presence of target DNAs. Moreover, GNPs modified with partly complementary DNAs can intensively aggregate by hybridization-based intercalation between intermolecular G-quadruplexes, indicating an inspiring assembly mechanism and a powerful colorimetric DNA detection. The proposed intermolecular G-quadruplex-integrated sticky-end pairing assembly (called GISA)-based colorimetric system allows a specific and quantitative assay of p53 DNA with a linear range of more than two orders of magnitude and a detection limit of 0.2 nM, suggesting a considerably improved analytical performance. Moreover, the discrimination of single-base mismatched target DNAs can be easily conducted via visual observation. The successful development of the present colorimetric system, especially the GISA-based aggregation mechanism of GNPs, differs from traditional approaches, and offers a critical insight into the dependence of GNP aggregation on the structural properties of oligonucleotides, opening a good way to design colorimetric sensing probes and DNA nanostructures. Electronic supplementary information (ESI) available: Experimental section, supplementary Figures and perspectives. See DOI: 10.1039/c3nr03547f
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Densmore, Jeffrey D.; Thompson, Kelly G.; Urbatsch, Todd J.
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
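The core dispatch in the abstract above is a frequency threshold: photons below it get a cheap diffusion treatment, photons above it get standard Monte Carlo. A toy sketch of that regime split, with placeholder step models (a Gaussian walk and an exponential free path) that are illustrative only and not the discretization derived in the paper:

```python
import math
import random

def sample_step(freq, nu_threshold, rng=random):
    """Toy dispatcher in the spirit of frequency-dependent DDMC:
    particles below a threshold frequency take an aggregated
    diffusion (random-walk) step, while those above use standard
    Monte Carlo transport. Both step models are placeholders."""
    if freq < nu_threshold:
        # diffusion regime: many collisions aggregated into one Gaussian step
        return "diffusion", rng.gauss(0.0, 1.0)
    # transport regime: sample an exponential free path (unit opacity)
    return "transport", -math.log(1.0 - rng.random())

regime, _ = sample_step(0.5, nu_threshold=1.0)
print(regime)  # frequencies below the threshold take the diffusion branch
```

The efficiency gain comes from the diffusion branch replacing many short transport steps in optically thick, low-frequency groups with a single aggregated step.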
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
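One of the random-sampling fundamentals such notes cover is inverse-transform sampling of the distance to the next collision. A minimal sketch: for the exponential density p(x) = Σₜ·exp(−Σₜ·x), setting the CDF equal to a uniform variate u and inverting gives x = −ln(1 − u)/Σₜ.

```python
import math
import random

def sample_free_path(sigma_t, u=None, rng=random):
    """Inverse-transform sampling of the distance to the next collision:
    for p(x) = sigma_t * exp(-sigma_t * x), the CDF is 1 - exp(-sigma_t * x),
    so a uniform variate u maps to x = -ln(1 - u) / sigma_t."""
    if u is None:
        u = rng.random()
    return -math.log(1.0 - u) / sigma_t

# sanity check: u = 1 - 1/e maps to exactly one mean free path, x = 1/sigma_t
print(sample_free_path(2.0, u=1.0 - 1.0 / math.e))
```

This single-line inversion is the building block behind the collision-physics and tally machinery the course then develops.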
Hybrid solar-fossil fuel power generation
Sheu, Elysia J. (Elysia Ja-Zeng)
2012-01-01
In this thesis, a literature review of hybrid solar-fossil fuel power generation is first given with an emphasis on system integration and evaluation. Hybrid systems are defined as those which use solar energy and fuel ...
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
NASA Astrophysics Data System (ADS)
Xia, Weiwei; Shen, Lianfeng
We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.
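The multi-dimensional Markov chains used above are too large for a short example, but the simplest member of this model family, the one-dimensional Erlang loss system, shows how blocking probabilities fall out of such chains. A minimal sketch using the standard numerically stable Erlang-B recursion; the mapping to the paper's voice/data, two-network model is illustrative only:

```python
def erlang_b(offered_load, channels):
    """Blocking probability of an M/M/c/c loss system via the stable
    Erlang-B recursion: B_0 = 1, B_k = a*B_{k-1} / (k + a*B_{k-1}),
    where a is the offered load in Erlangs. A 1-D special case of the
    Markov-chain loss models used for admission and handoff analysis."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# e.g. 2 Erlangs of traffic offered to a 2-channel cell
print(round(erlang_b(2.0, 2), 4))  # → 0.4
```

Guard channels for handoff calls and queueing (as in ISHQ) generalize this chain with extra states and thresholds, which is what makes the optimization in the paper multi-dimensional.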
ERIC Educational Resources Information Center
Kalyn, Brenda
2006-01-01
Integrated learning is an exciting adventure for both teachers and students. It is not uncommon to observe the integration of academic subjects such as math, science, and language arts. However, educators need to recognize that movement experiences in physical education also can be linked to academic curricula and, may even lead the…
NASA Technical Reports Server (NTRS)
Sproles, Darrell W.; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.
NASA Astrophysics Data System (ADS)
Chen, Jeng-Tzong; Lee, Jia-Wei; Shyu, Wen-Shinn
2012-01-01
Following the success of seismic analysis of a semi-circular hill, the problem of SH-wave scattering by a semi-elliptical hill is revisited using the null-field boundary integral equation method (BIEM). To fully exploit the analytical properties of the null-field approach in conjunction with degenerate kernels, the problem is decomposed into two regions with elliptical boundaries by taking free bodies. One is a half-plane problem containing a semi-elliptical boundary; this semi-infinite problem is embedded in an infinite plane with an artificial elliptical boundary so that degenerate kernels can be fully applied. The other is an interior problem bounded by an elliptical boundary. The degenerate kernel in elliptic coordinates is used to expand the closed-form fundamental solution in the two subdomains. The semi-analytical formulation, combined with matching of boundary conditions, yields six constraint equations. Instead of finding admissible wave-expansion bases, the null-field BIEM with degenerate kernels has five advantages over the conventional BIEM/BEM: (1) no principal values to calculate, (2) exponential convergence, (3) elimination of the boundary-layer effect, (4) a meshless formulation and (5) a well-posed system. All numerical results compare well with those of the hybrid method, which is also described in this paper. Interestingly, a focusing phenomenon is also observed in this study.
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
Kumar, Ramesh; Pal, Parimal
2015-02-01
Experimental investigations were carried out on continuous and direct production of poly-(γ-glutamic acid) in a hybrid reactor system that integrated a conventional fermentative production step with membrane-based downstream separation and purification. The novelty of the integrated system lies in the high degree of purity, conversion, yield and productivity of poly-(γ-glutamic acid), achieved through elimination of the substrate-product inhibitions of the traditional batch production system. This new system is compact, flexible, eco-friendly and largely fouling-free, ensuring steady and continuous production of poly-(γ-glutamic acid) directly from a renewable carbon source at a rate of 0.91 g/L/h. Cross-flow microfiltration membrane modules ensured almost complete separation and recycling of cells without significant fouling. A well-screened ultrafiltration membrane module helped to concentrate poly-(γ-glutamic acid) while ensuring recovery and recycling of 96% of the unconverted carbon source, resulting in a yield of 0.6 g/g along with high product purity. PMID:25484125
ERIC Educational Resources Information Center
Reisslein, Jana; Seeling, Patrick; Reisslein, Martin
2005-01-01
An important challenge in the introductory communication networks course in electrical and computer engineering curricula is to integrate emerging topics, such as wireless Internet access and network security, into the already content-intensive course. At the same time it is essential to provide students with experiences in online collaboration,…
ERIC Educational Resources Information Center
Zitter, Ilya; Hoeve, Aimee
2012-01-01
This paper deals with the problematic nature of the transition between education and the workplace. A smooth transition between education and the workplace requires learners to develop an integrated knowledge base, but this is problematic as most educational programmes offer knowledge and experiences in a fragmented manner, scattered over a…
Romand, Raymond; Ripp, Raymond; Poidevin, Laetitia; Boeglin, Marcel; Geffers, Lars; Dollé, Pascal; Poch, Olivier
2015-01-01
An in situ hybridization (ISH) study was performed on 2000 murine genes representing around 10% of the protein-coding genes present in the mouse genome, using data generated by the EURExpress consortium. This study was carried out in 25 tissues of late-gestation embryos (E14.5), with a special emphasis on the developing ear and on five distinct developing sensory organs: the cochlea, the vestibular receptors, the sensory retina, the olfactory organ, and the vibrissae follicles. The results obtained from an analysis of more than 11,000 micrographs have been integrated in a newly developed knowledgebase, called ImAnno. In addition to managing the multilevel micrograph annotations performed by human experts, ImAnno provides public access to various integrated databases and tools. Thus, it facilitates the analysis of complex ISH gene expression patterns, as well as functional annotation and interaction of gene sets. It also provides direct links to human pathways and diseases. Hierarchical clustering of expression patterns in the 25 tissues revealed three main branches corresponding to tissues with common functions and/or embryonic origins. To illustrate the integrative power of ImAnno, we explored the expression, function and disease traits of the sensory epithelia of the five presumptive sensory organs. The study identified 623 genes (out of 2000) concomitantly expressed in the five embryonic epithelia, among which many (~12%) were involved in human disorders. Finally, various multilevel interaction networks were characterized, highlighting differential functional enrichments of directly or indirectly interacting genes. These analyses reveal an under-representation of "sensory" functions in the sensory gene set, suggesting that E14.5 is a pivotal stage between development and the functional phase that will be fully reached only after birth. PMID:25706271
Colaprico, Antonio; Cava, Claudia; Bertoli, Gloria; Bontempi, Gianluca; Castiglioni, Isabella
2015-01-01
In this work an integrated approach was used to identify functional miRNAs regulating gene pathway cross-talk in breast cancer (BC). We first integrated gene expression profiles and biological pathway information to explore the underlying associations between genes differently expressed among normal and BC samples and pathways enriched from these genes. For each pair of pathways, a score was derived from the distribution of gene expression levels by quantifying their pathway cross-talk. Random forest classification allowed the identification of pairs of pathways with high cross-talk. We assessed miRNAs regulating the identified gene pathways by a mutual information analysis. A Fisher test was applied to demonstrate their significance in the regulated pathways. Our results suggest interesting networks of pathways that could be key regulatory of target genes in BC, including stem cell pluripotency, coagulation, and hypoxia pathways and miRNAs that control these networks could be potential biomarkers for diagnostic, prognostic, and therapeutic development in BC. This work shows that standard methods of predicting normal and tumor classes such as differentially expressed miRNAs or transcription factors could lose intrinsic features; instead our approach revealed the responsible molecules of the disease. PMID:26240829
Aphale, Ashish; Maisuria, Krushangi; Mahapatra, Manoj K.; Santiago, Angela; Singh, Prabhakar; Patra, Prabir
2015-01-01
Supercapacitors, also known as electrochemical capacitors, store energy via either Faradaic or non-Faradaic processes and have recently grown in popularity mainly because they complement, and can even replace, conventional energy storage systems in a variety of applications. Supercapacitor performance can be improved significantly by developing new nanocomposite electrodes that utilize both energy storage processes simultaneously. Here we report fabrication of freestanding hybrid electrodes by incorporating graphene and carbon nanotubes (CNT) in pyrrole monomer via its in-situ polymerization. At a scan rate of 5 mV s-1, the specific capacitance of the polypyrrole-CNT-graphene (PCG) electrode film was 453 F g-1, with ultrahigh energy and power densities of 62.96 W h kg-1 and 566.66 W kg-1 respectively, as shown in the Ragone plot. A nanofibrous membrane was electrospun and effectively used as a separator in the supercapacitor. Four supercapacitors were assembled in series to demonstrate the device performance by lighting a 2.2 V LED. PMID:26395922
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform often suffers from low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI for reusing CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, without requiring JNI bridging code. PMID:25110745
Aphale, Ashish; Maisuria, Krushangi; Mahapatra, Manoj K; Santiago, Angela; Singh, Prabhakar; Patra, Prabir
2015-01-01
Supercapacitors, also known as electrochemical capacitors, store energy via either Faradaic or non-Faradaic processes. They have recently grown in popularity mainly because they complement, and can even replace, conventional energy storage systems in a variety of applications. Supercapacitor performance can be improved significantly by developing new nanocomposite electrodes that utilize both energy storage processes simultaneously. Here we report the fabrication of freestanding hybrid electrodes, made by incorporating graphene and carbon nanotubes (CNT) in pyrrole monomer via its in-situ polymerization. At a scan rate of 5 mV s(-1), the specific capacitance of the polypyrrole-CNT-graphene (PCG) electrode film was 453 F g(-1), with ultrahigh energy and power densities of 62.96 W h kg(-1) and 566.66 W kg(-1), respectively, as shown in the Ragone plot. A nanofibrous membrane was electrospun and effectively used as a separator in the supercapacitor. Four supercapacitors were assembled in series to demonstrate device performance by lighting a 2.2 V LED. PMID:26395922
NASA Astrophysics Data System (ADS)
Aphale, Ashish; Maisuria, Krushangi; Mahapatra, Manoj K.; Santiago, Angela; Singh, Prabhakar; Patra, Prabir
2015-09-01
Supercapacitors, also known as electrochemical capacitors, store energy via either Faradaic or non-Faradaic processes. They have recently grown in popularity mainly because they complement, and can even replace, conventional energy storage systems in a variety of applications. Supercapacitor performance can be improved significantly by developing new nanocomposite electrodes that utilize both energy storage processes simultaneously. Here we report the fabrication of freestanding hybrid electrodes, made by incorporating graphene and carbon nanotubes (CNT) in pyrrole monomer via its in-situ polymerization. At a scan rate of 5 mV s(-1), the specific capacitance of the polypyrrole-CNT-graphene (PCG) electrode film was 453 F g(-1), with ultrahigh energy and power densities of 62.96 W h kg(-1) and 566.66 W kg(-1), respectively, as shown in the Ragone plot. A nanofibrous membrane was electrospun and effectively used as a separator in the supercapacitor. Four supercapacitors were assembled in series to demonstrate device performance by lighting a 2.2 V LED.
Luo, Ye; Chamanzar, Maysamreza; Apuzzo, Aniello; Salas-Montiel, Rafael; Nguyen, Kim Ngoc; Blaize, Sylvain; Adibi, Ali
2015-02-11
The enhancement and confinement of electromagnetic radiation at the nanometer scale have improved the performance and decreased the dimensions of optical sources and detectors for several applications including spectroscopy, medical applications, and quantum information. Realization of on-chip nanofocusing devices compatible with the silicon photonics platform adds a key functionality and provides opportunities for sensing, trapping, on-chip signal processing, and communications. Here, we discuss the design, fabrication, and experimental demonstration of light nanofocusing in a hybrid plasmonic-photonic nanotaper structure. We discuss the physical mechanisms behind the operation of this device, the coupling mechanisms, and how to engineer the energy transfer from a propagating guided mode to a trapped plasmonic mode at the apex of the plasmonic nanotaper with minimal radiation loss. Optical near-field measurements and Fourier modal analysis carried out using a near-field scanning optical microscope (NSOM) show tight nanofocusing of light in this structure to an extremely small spot of 0.00563 (λ/(2n_max))³ confined in 3D and an input power conversion of 92%. Our experiments also verify the mode selectivity of the device (low transmission of a TM-like input mode and high transmission of a TE-like input mode). A large field concentration factor (FCF) of about 4.9 is estimated from our NSOM measurement with a radius of curvature of about 20 nm at the apex of the nanotaper. The agreement between our theory and experimental results reveals helpful insights about the operation mechanism of the device, the interplay of the modes, and the gradual power transfer to the nanotaper apex. PMID:25562706
Rosca, Florin
2012-06-15
Purpose: To present a mixed electron and photon IMRT planning technique using electron beams with an energy range of 6-22 MeV and standard hardware that minimizes integral dose to patients for targets as deep as 7.5 cm. Methods: Ten brain cases, two lung cases, a thyroid, an abdominal, and a parotid case were planned using two planning techniques: a photon-only IMRT plan (IMRT) versus a mixed-modality treatment (E + IMRT) that includes an en-face electron beam and a photon IMRT portion that ensures uniform target coverage. The electron beam is delivered using a regular cutout placed in an electron cone. The electron energy was chosen to provide a good trade-off between minimizing integral dose and generating a uniform, deliverable plan; the authors chose electron energies that cover the deepest part of the PTV with the 65%-70% isodose line. The normal tissue integral dose, the dose for ring structures around the PTV, and the volumes of the 75%, 50%, and 25% isosurfaces were used to compare the dose distributions generated by the two planning techniques. Results: The normal tissue integral dose was lowered by about 20% by the E + IMRT plans compared to the photon-only IMRT ones for most studied cases. With the exception of lungs, the dose reduction associated with the E + IMRT plans was more pronounced further away from the target. The average dose ratios delivered to the 0-2 cm and the 2-4 cm ring structures for brain patients for the two planning techniques were 89.6% and 70.8%, respectively. The enhanced dose sparing away from the target for the brain patients can also be observed in the ratio of the 75%, 50%, and 25% isodose line volumes for the two techniques, which decreases from 85.5% to 72.6% and further to 65.1%, respectively. For lungs, the lateral electron beams used in the E + IMRT plans were perpendicular to the mostly anterior/posterior photon beams, generating much more conformal plans.
Conclusions: The authors showed that, even using existing electron delivery hardware, a mixed electron/photon planning technique (E + IMRT) can decrease the normal tissue integral dose compared to a photon-only IMRT plan. Different planning approaches are enabled by the use of an electron beam directed toward organs at risk distal to the target, which are still spared due to the rapid dose fall-off of the electron beam. Examples of such cases are lateral electron beams in the thoracic region that do not irradiate the heart and contralateral lung, electron beams pointed toward the kidneys in the abdominal region, or beams treating brain lesions pointed toward the brainstem or optical apparatus. For the brain, electron vertex beams can also be used without irradiating the whole body. Since radiation retreatments are becoming more and more common, minimizing the normal tissue integral dose and the dose delivered to tissues surrounding the target, as enabled by E + IMRT-type techniques, should receive more attention.
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Powers, J J
2011-11-28
This study focused on creating a new tristructural isotropic (TRISO) coated particle fuel performance model and demonstrating the integration of this model into an existing system of neutronics and heat transfer codes, creating a user-friendly option for including fuel performance analysis within system design optimization and system-level trade-off studies. The end product enables both a deeper understanding and better overall system performance of nuclear energy systems limited or greatly impacted by TRISO fuel performance. A thorium-fueled hybrid fusion-fission Laser Inertial Fusion Energy (LIFE) blanket design was used for illustrating the application of this new capability and demonstrated both the importance of integrating fuel performance calculations into mainstream design studies and the impact that this new integrated analysis had on system-level design decisions. A new TRISO fuel performance model named TRIUNE was developed and verified and validated during this work with a novel methodology established for simulating the actual lifetime of a TRISO particle during repeated passes through a pebble bed. In addition, integrated self-consistent calculations were performed for neutronics depletion analysis, heat transfer calculations, and then fuel performance modeling for a full parametric study that encompassed over 80 different design options that went through all three phases of analysis. Lastly, side studies were performed that included a comparison of thorium and depleted uranium (DU) LIFE blankets as well as some uncertainty quantification work to help guide future experimental work by assessing what material properties in TRISO fuel performance modeling are most in need of improvement. A recommended thorium-fueled hybrid LIFE engine design was identified with an initial fuel load of 20 MT of thorium, 15% TRISO packing within the graphite fuel pebbles, and a 20 cm neutron multiplier layer with beryllium pebbles in flibe molten salt coolant. 
It operated at a system power level of 2000 MWth, took about 3.5 years to reach full plateau power, and was capable of an End of Plateau burnup of 38.7 %FIMA if considering just the neutronic constraints in the system design; however, fuel performance constraints led to a maximum credible burnup of 12.1 %FIMA due to a combination of internal gas pressure and irradiation effects on the TRISO materials (especially PyC) leading to SiC pressure vessel failures. The optimal neutron spectrum for the thorium-fueled blanket options evaluated seemed to favor a hard spectrum (low but non-zero neutron multiplier thicknesses and high TRISO packing fractions) in terms of neutronic performance, but the fuel performance constraints demonstrated that a significantly softer spectrum would be needed to decrease the rate of accumulation of fast neutron fluence in order to improve the maximum credible burnup the system could achieve.
NASA Astrophysics Data System (ADS)
Garcia, A.
2001-08-01
In the past decade a number of hybrid methods, which combine Direct Simulation Monte Carlo (DSMC) with computational fluid dynamics (CFD) algorithms, have been proposed. The motivation for developing such hybrids is primarily the computational expense of DSMC relative to continuum CFD schemes. After expanding on this motivation, specifically as it applies to microscopic flows, this talk will present a brief, chronological review of the various existing schemes. Elements common to all schemes (e.g., the choice of CFD algorithm, the creation of DSMC particles, the selection of the interface location, the numerical accuracy and stability of the coupling) will be discussed. Adaptive Mesh and Algorithm Refinement, a hybrid built using the framework of adaptive mesh refinement, will be presented in greater detail. Finally, future directions and challenges for particle/continuum hybrids will be outlined.
Dwyer, Heather E; Jasieniuk, Marie; Okada, Miki; Shapiro, Arthur M
2015-01-01
Gene flow and hybridization among species dramatically affect our understanding of the species as a biological unit, species relationships, and species adaptations. In North American Colias eurytheme and Colias eriphyle, there has been historical debate over the extent of hybridization occurring and the identity of phenotypically intermediate individuals as genetic hybrids. This study assesses the population structure of these two species to measure the extent of hybridization and the genetic identity of phenotypic intermediates as hybrids. Amplified fragment length polymorphism (AFLP) marker analysis was performed on 378 specimens collected from northern California and Nevada. Population structure was inferred using a Bayesian/Markov chain Monte Carlo method, which probabilistically assigns individuals to genetic clusters. Three genetic clusters provided the best fit for the data. C. eurytheme individuals were primarily assigned to two closely related clusters, and C. eriphyle individuals were mostly assigned to a third, more distantly related cluster. There appeared to be significant hybridization between the two species. Individuals of intermediate phenotype (putative hybrids) were found to be genetically indistinguishable from C. eriphyle, indicating that previous work based on the assumption that these intermediate forms are hybrids may warrant reconsideration. PMID:26306172
NASA Astrophysics Data System (ADS)
Li, L.; Wang, K.; Li, H.; Eibert, T. F.
2014-11-01
A hybrid higher-order finite element boundary integral (FE-BI) technique is discussed where the higher-order FE matrix elements are computed by a fully analytical procedure and where the global matrix assembly is organized by a self-identifying procedure of the local-to-global transformation. This assembly procedure applies to both the FE part and the BI part of the algorithm. The geometry is meshed into three-dimensional tetrahedra as finite elements, and nearly orthogonal hierarchical basis functions are employed. The boundary conditions are implemented in a strong sense such that the boundary values of the volume basis functions are directly utilized within the BI, either for the tangential electric and magnetic fields or for the associated equivalent surface current densities obtained by applying a cross product with the unit surface normals. The self-identifying method for the global matrix assembly automatically discerns the global order of the basis functions for generating the matrix elements. Higher-order basis functions need more unknowns per finite element; however, fewer FEs are needed to achieve the same accuracy. This improvement provides considerably more flexibility for meshing and allows the mesh size to grow up to λ/3. The performance of the implemented system is evaluated in terms of computation time, accuracy, and memory occupation, with excellent results with respect to precision and computation time for large-scale simulations.
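The local-to-global assembly described in this abstract can be illustrated with a generic finite-element assembly loop. This is a minimal sketch, not the authors' actual procedure: the `dof_maps` table stands in for the self-identifying local-to-global transformation, and the two-element 1D stiffness example is purely illustrative.

```python
import numpy as np

def assemble_global(n_global, element_matrices, dof_maps):
    """Assemble a global matrix from per-element dense matrices.

    element_matrices[e] is the local matrix of element e; dof_maps[e]
    maps each local basis-function index to its global index, playing
    the role of the local-to-global transformation table.
    """
    K = np.zeros((n_global, n_global))
    for Ke, dofs in zip(element_matrices, dof_maps):
        for i_loc, i_glob in enumerate(dofs):
            for j_loc, j_glob in enumerate(dofs):
                K[i_glob, j_glob] += Ke[i_loc, j_loc]
    return K

# Two 1D linear elements sharing node 1: the classic stiffness assembly.
Ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
K = assemble_global(3, [Ke, Ke], [[0, 1], [1, 2]])
print(K)   # shared node 1 accumulates contributions from both elements
```

The shared degree of freedom (global index 1) receives contributions from both elements, which is the essence of any local-to-global assembly, whatever the element order.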
Yoon, Ki-Hong; Oh, Su Hwan; Kim, Ki Soo; Kwon, O-Kyun; Oh, Dae Kon; Noh, Young-Ouk; Lee, Hyung-Jong
2010-03-15
We present a hybridly integrated tunable external-cavity laser with 16 channels at 0.8 nm mode spacing, operating under direct modulation at 2.5 Gbps, as a low-cost source for a WDM-PON system. The tunable laser was fabricated using a superluminescent diode (SLD) and a polymer Bragg reflector. The maximum output power and the power slope efficiency of the tunable laser were 10.3 mW and 0.132 mW/mA, respectively, at an SLD current of 100 mA and a temperature of 25 degrees C. The directly modulated tunable laser successfully provided 2.5-Gbps transmission through 20 km of standard single-mode fiber. The power penalty of the tunable laser was less than 0.8 dB for all 16 channels after a 20-km transmission. The power penalty variation was less than 1.4 dB during blue-shifted wavelength tuning. PMID:20389571
NASA Astrophysics Data System (ADS)
Michel, R.; Peiffer, F.; Stück, R.
1985-08-01
Integral excitation functions for p-induced reactions on V, Mn and Co were measured for proton energies between 45 and 200 MeV. Thirty excitation functions for the production of radionuclides with 43 ≤ A ≤ 60 were determined. For most of the reactions, ranging from (p, n) to (p, 7p9n), scarcely any experimental data existed before now in this energy region. Together with earlier measurements, the new data provide a consistent set of excitation functions of p-induced reactions from 0 to 200 MeV for these elements. A detailed hybrid model analysis was performed using the recent code ALICE LIVERMORE 82, which in contrast to earlier formulations of the model takes into account experimental mass data, multiple preequilibrium nucleon emission and non-integer initial excitation numbers. In general, the a priori calculations satisfactorily reproduce the experimental excitation functions. Some discrepancies have to be attributed to the neglect of direct processes and of preequilibrium emission of α-particles. The reasons for a complete failure of the calculations in some exceptional cases are still not clear.
High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms
Santis, Christos Theodoros; Steger, Scott T.; Vilenchik, Yaakov; Vasilyev, Arseny; Yariv, Amnon
2014-01-01
The semiconductor laser (SCL) is the principal light source powering the worldwide optical fiber network. The ever-increasing demand for data is causing the network to migrate to phase-coherent modulation formats, which place strict requirements on the temporal coherence of the light source that can no longer be met by current SCLs. This failure can be traced directly to the canonical laser design, in which photons are both generated and stored in the same, optically lossy, III-V material. This leads to an excessive amount of noisy spontaneous emission commingling with the laser mode, thereby degrading its coherence. High losses also decrease the amount of stored optical energy in the laser cavity, magnifying the effect of each individual spontaneous emission event on the phase of the laser field. Here, we propose a new design paradigm for the SCL. The keys to this paradigm are the deliberate removal of stored optical energy from the lossy III-V material by concentrating it in a passive, low-loss material and the incorporation of a very high-Q resonator as an integral (i.e., not externally coupled) part of the laser cavity. We demonstrate an SCL with a spectral linewidth of 18 kHz in the telecom band around 1.55 μm, achieved using a single-mode silicon resonator with a Q of 10^6. PMID:24516134
Nuclear hybrid energy infrastructure
Agarwal, Vivek; Tawfik, Magdy S.
2015-02-01
The nuclear hybrid energy concept is becoming a reality for the US energy infrastructure, where combinations of the various potential energy sources (nuclear, wind, solar, biomass, and so on) are integrated in a hybrid energy system. This paper focuses on challenges facing a hybrid system with a Small Modular Reactor at its core. The core of the paper discusses the efforts required to develop a supervisory control center that collects data, supports decision-making, and serves as an information hub. Such a center will also be a model for integrating future technologies and controls. In addition, advanced operations research, thermal cycle analysis, energy conversion analysis, control engineering, and human factors engineering will be part of the supervisory control center. A nuclear hybrid energy infrastructure would allow operators to optimize the cost of energy production by providing appropriate means of integrating different energy sources. The data need to be stored, processed, analyzed, trended, and projected at the right time to the right operator to integrate different energy sources.
NASA Astrophysics Data System (ADS)
Robinson, Patrick J.
Gasification has been used in industry on a relatively limited scale for many years, but it is emerging as the premier unit operation in the energy and chemical industries. The switch from expensive and insecure petroleum to solid hydrocarbon sources (coal and biomass) is occurring due to the vast amount of domestic solid resources, national security and global warming issues. Gasification (or partial oxidation) is a vital component of "clean coal" technology. Sulfur and nitrogen emissions can be reduced, overall energy efficiency is increased and carbon dioxide recovery and sequestration are facilitated. Gasification units in an electric power generation plant produce a fuel gas for driving combustion turbines. Gasification units in a chemical plant generate synthesis gas, which can be used to produce a wide spectrum of chemical products. Future plants are predicted to be hybrid power/chemical plants with gasification as the key unit operation. The coupling of an Integrated Gasification Combined Cycle (IGCC) with a methanol plant can handle swings in power demand by diverting hydrogen gas from a combustion turbine and synthesis gas from the gasifier to a methanol plant for the production of an easily stored, hydrogen-consuming liquid product. An additional control degree of freedom is provided with this hybrid plant, fundamentally improving the controllability of the process. The idea is to base-load the gasifier and use the more responsive gas-phase units to handle disturbances. On summer days, power demand can fluctuate up to 50% over a 12-hour period. The winter presents a different problem, where spikes of power demand can rise by 15% within an hour. The following dissertation develops a hybrid IGCC / methanol plant model, validates the steady-state results with a National Energy Technology Laboratory study, and tests a proposed control structure to handle these significant disturbances. 
All modeling was performed in the widely used chemical process simulators Aspen Plus and Aspen Dynamics. This dissertation first presents a simple approximate method for achieving the objective of having a gasifier model that can be exported into Aspen Dynamics. Limitations in the software dealing with solids make this a necessary task. The basic idea is to use a high-molecular-weight hydrocarbon that is present in the Aspen library as a pseudo fuel. For many plantwide dynamic studies, a rigorous high-fidelity dynamic model of the gasifier is not needed because its dynamics are very fast and the gasifier gas volume is a relatively small fraction of the total volume of the entire plant. The proposed approximate model captures the essential macro-scale thermal, flow, composition and pressure dynamics. This work does not attempt to optimize the design or control of gasifiers, but merely presents an idea of how to dynamically simulate coal gasification in an approximate way. This dissertation also presents models of the downstream units of a typical IGCC. Dynamic simulations of the H2S absorption/stripping unit, Water-gas Shift (WGS) reactors, and CO2 absorption/stripping unit are essential for the development of stable and agile plantwide control structures of this hybrid power/chemical plant. Due to the high pressure of the system, hydrogen sulfide is removed by means of physical absorption. SELEXOL(TM) (a mixture of the dimethyl ethers of polyethylene glycol) is used to achieve a gas purity of less than 5 ppm H2S. This desulfurized synthesis gas is sent to two water gas shift reactors that convert a total of 99% of carbon monoxide to hydrogen. Physical absorption of carbon dioxide with Selexol produces a hydrogen-rich stream (90 mol% H2) to be fed into combustion turbines or to a methanol plant. Steady-state economic designs and plantwide control structures are developed in this dissertation. 
A steady-state economic design, control structure, and successful turndown of the methanol plant are shown in this dissertation. The plantwide control structure and interaction among units are also shown. The methanol plant was si
Improved geometry representations for Monte Carlo radiation transport.
Martin, Matthew Ryan
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
Optimally Combining Sampling Techniques for Monte Carlo Rendering
Kazhdan, Michael
Computer Science Department, Stanford University. Abstract: Monte Carlo integration is a powerful numerical technique, but one often needs more than one sampling technique to estimate an integral with low variance. Normally this is accomplished by combining samples from several techniques. We do not construct new sampling methods; all the samples we use come from one of the given techniques.
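The idea of combining samples from several sampling techniques can be sketched with the balance heuristic of multiple importance sampling. The integrand, the two densities, and the sample counts below are illustrative assumptions, not taken from this record:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # integrand; exact integral on [0, 1] is 1/3
    return x**2

def p1(x):           # uniform density on [0, 1]
    return np.ones_like(x)

def p2(x):           # linear density 2x on [0, 1]
    return 2.0 * x

n1 = n2 = 20_000
x1 = rng.random(n1)             # samples from p1
x2 = np.sqrt(rng.random(n2))    # samples from p2 via inverse CDF

def balance_weight(x, n_self, p_self, others):
    """Balance-heuristic weight n_i p_i(x) / sum_k n_k p_k(x)."""
    denom = n_self * p_self(x) + sum(n_o * p_o(x) for n_o, p_o in others)
    return n_self * p_self(x) / denom

w1 = balance_weight(x1, n1, p1, [(n2, p2)])
w2 = balance_weight(x2, n2, p2, [(n1, p1)])

estimate = (np.sum(w1 * f(x1) / p1(x1)) / n1
            + np.sum(w2 * f(x2) / p2(x2)) / n2)
print(estimate)   # close to 1/3
```

The weighted combination stays unbiased for any valid weighting, and the balance heuristic keeps the variance close to the best achievable by any single technique.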
Quantum Monte Carlo study of porphyrin transition metal complexes
NASA Astrophysics Data System (ADS)
Koseki, Jun; Maezono, Ryo; Tachikawa, Masanori; Towler, M. D.; Needs, R. J.
2008-08-01
Diffusion quantum Monte Carlo (DMC) calculations for transition metal (M) porphyrin complexes (MPo, M=Ni,Cu,Zn) are reported. We calculate the binding energies of the transition metal atoms to the porphin molecule. Our DMC results are in reasonable agreement with those obtained from density functional theory calculations using the B3LYP hybrid exchange-correlation functional. Our study shows that such calculations are feasible with the DMC method.
Tan, Li; Jiang, Hongbo; Wang, Ying; Wei, Sheng; Nie, Shaofa
2014-01-01
Background: Outbreaks of hand-foot-mouth disease (HFMD) have been reported many times in Asia during the last decades. This emerging disease has drawn worldwide attention and vigilance. Nowadays, the prevention and control of HFMD has become an imperative issue in China. Early detection and response, supported by modern information technology, can be helpful before an epidemic takes hold. Method: In this paper, a hybrid model combining a seasonal auto-regressive integrated moving average (ARIMA) model and a nonlinear auto-regressive neural network (NARNN) is proposed to predict the expected incidence cases from December 2012 to May 2013, using retrospective observations obtained from the China Information System for Disease Control and Prevention from January 2008 to November 2012. Results: The best-fitted hybrid model combined a seasonal ARIMA with a NARNN with 15 hidden units and 5 delays. The hybrid model achieves good forecasting performance; the expected incidence cases from December 2012 to May 2013 are 965.03, 1879.58, 4138.26, 1858.17, 4061.86 and 6163.16, respectively, with an obviously increasing trend. Conclusion: The model proposed in this paper can predict the incidence trend of HFMD effectively, which could be helpful to policy makers. The expected case counts are useful not only for detecting outbreaks or providing probability statements, but also for providing decision makers with a probable trend of the variability of future observations that contains both historical and recent information. PMID:24893000
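The two-stage structure of such a hybrid forecaster, a linear seasonal model followed by a nonlinear model fitted to its residuals, can be sketched as follows. This is a toy stand-in, not the paper's model: it assumes a synthetic seasonal series, an AR(12) least-squares fit in place of seasonal ARIMA, and a quadratic autoregression in place of the NARNN.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly-like series: trend + seasonality + noise,
# standing in for the HFMD incidence counts.
t = np.arange(120)
y = 50 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

def lagged(x, lags):
    """Design matrix [1, x[t-1], ..., x[t-lags]] aligned with x[lags:]."""
    cols = [np.ones(x.size - lags)]
    cols += [x[lags - k : x.size - k] for k in range(1, lags + 1)]
    return np.column_stack(cols), x[lags:]

# Stage 1: linear AR(12), a stand-in for the seasonal ARIMA component.
lags = 12
X, target = lagged(y, lags)
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
linear_fit = X @ beta
resid = target - linear_fit

# Stage 2: nonlinear autoregression on the residuals (quadratic terms
# here, standing in for the NARNN stage of the hybrid).
p = 5
R, r_target = lagged(resid, p)
R2 = np.column_stack([R, R[:, 1:] ** 2])   # add squared lag features
gamma, *_ = np.linalg.lstsq(R2, r_target, rcond=None)
nonlin_fit = R2 @ gamma

hybrid_fit = linear_fit[p:] + nonlin_fit
rmse_linear = np.sqrt(np.mean((target[p:] - linear_fit[p:]) ** 2))
rmse_hybrid = np.sqrt(np.mean((target[p:] - hybrid_fit) ** 2))
print(rmse_linear, rmse_hybrid)   # hybrid in-sample error is no worse
```

Because the residual model is fitted by least squares (with the zero model in its span), the hybrid's in-sample error can never exceed the linear model's; the real test of such hybrids is out-of-sample forecasting, as in the paper.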
January 30, 2003. Center for Satellite and Hybrid Communication Networks
Gligor, Virgil D.
Integrated Security Services for Dynamic… Topics: joint administration of shared applications; Joint Administration Services; joint administration requirements.
Quantum Monte Carlo Calculations for Carbon Nanotubes
Thomas Luu; Timo A. Lähde
2015-11-16
We show how lattice Quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.
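The two-site Hubbard benchmark mentioned above has a closed-form ground-state energy at half filling, which makes it a convenient sanity check for any Monte Carlo code. The sketch below (basis ordering and sign conventions are our own assumptions, not the paper's) diagonalizes the Sz = 0 sector and compares against the analytic result E0 = (U - sqrt(U^2 + 16 t^2)) / 2:

```python
import numpy as np

def two_site_hubbard_ground_energy(t, U):
    """Exact ground-state energy of the half-filled two-site Hubbard
    model in the Sz = 0 sector.  Basis ordering (a common convention):
    |up,down on site 1>, |up,down on site 2>, |up1,down2>, |down1,up2>.
    """
    H = np.array([
        [U,   0.0, -t,  -t ],
        [0.0, U,   -t,  -t ],
        [-t,  -t,  0.0, 0.0],
        [-t,  -t,  0.0, 0.0],
    ])
    return np.linalg.eigvalsh(H)[0]   # eigenvalues sorted ascending

t, U = 1.0, 4.0
e_numeric = two_site_hubbard_ground_energy(t, U)
e_analytic = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))
print(e_numeric, e_analytic)   # both about -0.8284 for t=1, U=4
```

A lattice Monte Carlo code reproducing this energy within statistical error is the kind of benchmark the abstract refers to; the four-site model provides an analogous, slightly larger check.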
ERIC Educational Resources Information Center
Beckwith, E. George; Cunniff, Daniel T.
2009-01-01
Online course enrollment has increased dramatically over the past few years. The authors cite the reasons for this rapid growth and the opportunities open for enhancing teaching/learning techniques such as video conferencing and hybrid class combinations. The authors outlined an example of an accelerated learning, eight-class session course…
NASA Astrophysics Data System (ADS)
Meyer, C. A.; Swanson, E. S.
2015-05-01
A review of the theoretical and experimental status of hybrid hadrons is presented. The states π1(1400), π1(1600), and π1(2015) are thoroughly reviewed, along with experimental results from GAMS, VES, Obelix, COMPASS, KEK, CLEO, Crystal Barrel, CLAS, and BNL. Theoretical lattice results on the gluelump spectrum, adiabatic potentials, heavy and light hybrids, and transition matrix elements are discussed. These are compared with bag, string, flux tube, and constituent gluon models. Strong and electromagnetic decay models are described and compared to lattice gauge theory results. We conclude that while good evidence for the existence of a light isovector exotic meson exists, its confirmation as a hybrid meson awaits discovery of its iso-partners. We also conclude that lattice gauge theory rules out a number of hybrid models and provides a reference to judge the success of others.
Furth, H.P.; Ludescher, C.
1984-08-01
The present paper briefly reviews the subject of tokamak-stellarator and pinch-stellarator hybrids, and points to two interesting new possibilities: compact-torus-stellarators and mirror-stellarators.
Quantum Monte Carlo Simulations of Tunneling in Quantum Adiabatic Optimization
Lucas T. Brady; Wim van Dam
2015-09-17
We explore to what extent path-integral quantum Monte Carlo methods can efficiently simulate the tunneling behavior of quantum adiabatic optimization algorithms. Specifically we look at symmetric cost functions defined over n bits with a single potential barrier that a successful optimization algorithm will have to tunnel through. The height and width of this barrier depend on n, and by tuning these dependencies, we can make the optimization algorithm succeed or fail in polynomial time. In this article we compare the strength of quantum adiabatic tunneling with that of path-integral quantum Monte Carlo methods. We find numerical evidence that quantum Monte Carlo algorithms will succeed in the same regimes where quantum adiabatic optimization succeeds.
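The path-integral Monte Carlo machinery referred to here can be illustrated on a solvable case. The sketch below is not the authors' setup: it applies single-bead Metropolis updates to a discretized imaginary-time path for a 1D harmonic oscillator (all parameters are illustrative) and checks the thermal <x^2> against the exact result.

```python
import math
import random

random.seed(1)

# Minimal path-integral Monte Carlo for a 1D harmonic oscillator
# (hbar = m = omega = 1).  Exact thermal <x^2> = coth(beta/2) / 2.
beta, n_beads = 4.0, 20
dtau = beta / n_beads
path = [0.0] * n_beads            # periodic imaginary-time path
step = 0.5                        # Metropolis proposal half-width

def local_action(j, xj):
    """Part of the discretized (primitive) action involving bead j."""
    left = path[(j - 1) % n_beads]
    right = path[(j + 1) % n_beads]
    kinetic = ((xj - left) ** 2 + (right - xj) ** 2) / (2 * dtau)
    return kinetic + dtau * 0.5 * xj ** 2

x2_sum, n_meas = 0.0, 0
for sweep in range(30_000):
    for j in range(n_beads):
        trial = path[j] + random.uniform(-step, step)
        dS = local_action(j, trial) - local_action(j, path[j])
        if dS < 0 or random.random() < math.exp(-dS):
            path[j] = trial
    if sweep >= 5_000:            # discard burn-in
        x2_sum += sum(x * x for x in path) / n_beads
        n_meas += 1

x2 = x2_sum / n_meas
exact = 0.5 / math.tanh(beta / 2)   # about 0.519 for beta = 4
print(x2, exact)
```

Studying tunneling through a barrier, as in the article, replaces the harmonic potential with a double well and asks how the Monte Carlo autocorrelation time scales with the barrier parameters.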
Cooling/grounding mount for hybrid circuits
NASA Technical Reports Server (NTRS)
Bagstad, B.; Estrada, R.; Mandel, H.
1981-01-01
Extremely short input and output connections, adequate grounding, and efficient heat removal for hybrid integrated circuits are possible with this mounting. A rectangular clamp holds the hybrid on a printed-circuit board, in contact with a heat-conductive ground plate. The clamp is attached to the ground plane by bolts.
Monte Carlo Generation of Bohmian Trajectories
T. M. Coffey; R. E. Wyatt; W. C. Schieve
2008-07-01
We report on a Monte Carlo method that generates one-dimensional trajectories for Bohm's formulation of quantum mechanics without differentiation or integration of any equations of motion. At each time t = nδt (n = 1, 2, 3, ...), N particle positions are randomly sampled from the quantum probability density. Trajectories are built from the sorted N sampled positions at each time. These trajectories become the exact Bohm solutions in the limits N → ∞ and δt → 0. Higher-dimensional problems can be solved by this method for separable wave functions. Several examples are given, including the two-slit experiment.
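The sample-and-sort construction can be reproduced for a case with known Bohmian trajectories. The sketch below assumes a freely spreading Gaussian packet whose width grows as s(t) = sqrt(1 + t^2) (hbar = m = 1, initial width 1); the exact Bohm trajectories then scale as x(t) = x(0) * s(t), which the sorted-sample trajectories should match.

```python
import numpy as np

rng = np.random.default_rng(7)

# |psi(x, t)|^2 for the spreading packet is normal with std s(t).
def width(t):
    return np.sqrt(1.0 + t**2)

n_particles = 200_000
times = np.linspace(0.0, 1.0, 11)

# At each time, draw fresh samples from |psi|^2 and sort them; the k-th
# sorted position at successive times forms the k-th trajectory.
trajectories = np.stack(
    [np.sort(rng.normal(0.0, width(t), n_particles)) for t in times]
)

k = int(0.9 * n_particles)        # follow the 90th-percentile particle
ratio = trajectories[-1, k] / trajectories[0, k]
print(ratio, width(1.0))          # both close to sqrt(2)
```

Sorting works because the Bohmian flow never lets trajectories cross, so each trajectory carries a fixed quantile of the probability density; connecting equal ranks across times recovers exactly that structure.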
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.
1991-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.
Imaginary time correlations within quantum Monte Carlo methods
Holzmann, Markus (LPMMC, CNRS-UJF)
Path-integral calculations also give access to imaginary time correlations, which contain important information about the real-time evolution of the quantum system in the linear response regime.
Tempo Tracking and Rhythm Quantization by Sequential Monte Carlo
Cemgil, A. Taylan; Kappen, Bert
Two well-known music recognition problems, tempo tracking and automatic transcription (rhythm quantization), are formulated in a model in which the continuous variables are integrated out; the resulting model is suitable for real-time tempo tracking and transcription.
Cao, Shaomei; Feng, Xin; Song, Yuanyuan; Xue, Xin; Liu, Hongjiang; Miao, Miao; Fang, Jianhui; Shi, Liyi
2015-05-27
A free-standing lithium titanate (Li4Ti5O12)/carbon nanotube/cellulose nanofiber hybrid network film is successfully assembled by using a pressure-controlled aqueous extrusion process, which is highly efficient and easy to scale up from the perspective of disposable and recyclable device production. This hybrid network film used as a lithium-ion battery (LIB) electrode has a dual-layer structure consisting of Li4Ti5O12/carbon nanotube/cellulose nanofiber composites (hereinafter referred to as LTO/CNT/CNF), and carbon nanotube/cellulose nanofiber composites (hereinafter referred to as CNT/CNF). In the heterogeneous fibrous network of the hybrid film, CNF serves simultaneously as the building skeleton and a biosourced binder, which substitutes for traditional toxic solvents and synthetic polymer binders. Of importance here is that the CNT/CNF layer is used as a lightweight current collector to replace traditional heavy metal foils, which therefore reduces the total mass of the electrode while keeping the same areal loading of active materials. The free-standing network film with high flexibility is easy to handle, and has extremely good conductivity, up to 15.0 S cm(-1). The flexible paper-electrode for LIBs shows very good high rate cycling performance, and the specific charge/discharge capacity values are up to 142 mAh g(-1) even at a current rate of 10 C. On the basis of the mild conditions and fast assembly process, a CNF template fulfills multiple functions in the fabrication of paper-electrodes for LIBs, which offers ever-increasing potential for high energy density, low cost, and environmentally friendly flexible electronics. PMID:25938940
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-05-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{∞} x² f(x) dx) to exist.
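A minimal version of the standard CLT-based diagnostics that the paper argues are insufficient on their own — the sample mean, a relative error that should shrink like 1/√N, and a crude look at the largest single-history score as a probe of the empirical PDF tail — can be sketched as follows (illustrative data, not an actual transport tally):

```python
import numpy as np

def mc_tally_diagnostics(scores):
    """CLT-based diagnostics for a vector of per-history scores:
    sample mean, estimated relative error of the mean (which should
    shrink like 1/sqrt(N)), and the largest single-history score
    (a crude probe of the empirical PDF tail)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    mean = scores.mean()
    # standard error of the mean; only meaningful if the second moment exists
    sem = scores.std(ddof=1) / np.sqrt(n)
    rel_err = sem / abs(mean)
    return mean, rel_err, scores.max()

rng = np.random.default_rng(2)
r_small = mc_tally_diagnostics(rng.exponential(1.0, 1_000))[1]
r_large = mc_tally_diagnostics(rng.exponential(1.0, 100_000))[1]
```

With 100 times more histories the relative error drops roughly tenfold here, because the exponential has all moments; the paper's point is that for a heavy-tailed f(x) this apparent 1/√N behavior can be misleading before the tail has been sampled.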
A Local Approach to Hybrid Data Assimilation
NASA Astrophysics Data System (ADS)
Ide, K.; Kleist, D. T.
2014-12-01
A hybrid system with the local formulation is developed where the prior probability density function (pdf) consists of both statistical and dynamic information about the uncertainty. The dynamic information is provided by the Monte Carlo approach. The local formulation is solved at every grid point, as in the local ensemble transform Kalman filter. The formulation is flexible in that it allows not only Gaussian but also other stable pdfs.
Kim, Daeil; Yun, Junyeong; Lee, Geumbee; Ha, Jeong Sook
2014-10-21
We report on the on-chip fabrication of high performance flexible micro-supercapacitor (MSC) arrays with hybrid electrodes of multi-walled carbon nanotube (MWNT)/V2O5 nanowire (NW) composites and a solid electrolyte, which could power the SnO2 NW UV sensor integrated on the same flexible substrate. The patterned MSC using hybrid electrodes of MWNT/V2O5 NW composites with 10 vol% of V2O5 NWs exhibited excellent electrochemical performance with a high volume capacitance of 80 F cm(-3) at a scan rate of 10 mV s(-1) in a PVA-LiCl electrolyte and good cycle performance to maintain 82% of the capacitance after 10,000 cycles at a current density of 11.6 A cm(-3). The patterned MSC also showed an excellent energy density of 6.8 mW h cm(-3), comparable to that of a Li-thin film battery (1-10 mW h cm(-3)), and a power density of 80.8 W cm(-3) comparable to that of state-of-the-art MSCs. In addition, the flexible MSC array on a PET substrate showed mechanical stability over bending with a bending radius down to 1.5 mm under both compressive and tensile stress. Even after 1000 bending cycles at a bending radius of 7 mm, 94% of the initial capacitance was maintained. Furthermore, we have shown the operation of a SnO2 NW UV sensor using such a fabricated MSC array integrated into the same circuit on the PET substrate. PMID:25184811
Monte Carlo Capabilities of the SCALE Code System
NASA Astrophysics Data System (ADS)
Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.
2014-06-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Kinetic Monte Carlo Studies of Hydrogen Abstraction from Graphite
H. M. Cuppen; L. Hornekaer
2008-07-01
We present Monte Carlo simulations on Eley-Rideal abstraction reactions of atomic hydrogen chemisorbed on graphite. The results are obtained via a hybrid approach where energy barriers derived from density functional theory calculations are used as input to Monte Carlo simulations. By comparing with experimental data, we discriminate between contributions from different Eley-Rideal mechanisms. A combination of two different mechanisms yields good quantitative and qualitative agreement between the experimentally derived and the simulated Eley-Rideal abstraction cross sections and surface configurations. These two mechanisms include a direct Eley-Rideal reaction with fast diffusing H atoms and a dimer mediated Eley-Rideal mechanism with increased cross section at low coverage. Such a dimer mediated Eley-Rideal mechanism has not previously been proposed and serves as an alternative explanation to the steering behavior often given as the cause of the coverage dependence observed in Eley-Rideal reaction cross sections.
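The Monte Carlo side of such a hybrid scheme is typically a rejection-free kinetic Monte Carlo loop: an event is selected with probability proportional to its rate, and the clock advances by an exponentially distributed waiting time. A generic sketch with made-up rates (not the DFT-derived barriers of the paper):

```python
import numpy as np

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step: pick an event with
    probability proportional to its rate and advance the clock by an
    exponential waiting time with mean 1/total_rate."""
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total
    return event, dt

rng = np.random.default_rng(3)
# illustrative rates only, e.g. diffusion, abstraction, desorption
rates = np.array([1.0, 10.0, 0.1])
counts = np.zeros(3)
for _ in range(5000):
    e, _ = kmc_step(rates, rng)
    counts[e] += 1
```

Over many steps the event frequencies approach the rate ratios, which is what lets barrier heights (here, the rates they imply) be compared directly against measured cross sections.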
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to redis-covering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
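For contrast with the wormhole construction, plain Hamiltonian Monte Carlo on a Euclidean metric — the baseline that the paper modifies — can be sketched as follows. The Riemannian/wormhole machinery is omitted, and the target, step size, and trajectory length are all illustrative:

```python
import numpy as np

def hmc_sample(logp_grad, x0, n_samples=2000, eps=0.2, n_leap=10, seed=4):
    """Plain 1-D Hamiltonian Monte Carlo with leapfrog integration.
    logp_grad(x) -> (log p(x), d log p / dx)."""
    rng = np.random.default_rng(seed)
    x = x0
    logp, grad = logp_grad(x)
    samples = []
    for _ in range(n_samples):
        p = rng.normal()                       # resample momentum
        x_new, grad_new = x, grad
        p_new = p + 0.5 * eps * grad_new       # leapfrog: half momentum step
        for i in range(n_leap):
            x_new = x_new + eps * p_new        # full position step
            logp_new, grad_new = logp_grad(x_new)
            if i != n_leap - 1:
                p_new = p_new + eps * grad_new
        p_new = p_new + 0.5 * eps * grad_new   # final half momentum step
        h_old = -logp + 0.5 * p * p
        h_new = -logp_new + 0.5 * p_new * p_new
        if rng.random() < np.exp(h_old - h_new):   # Metropolis correction
            x, logp, grad = x_new, logp_new, grad_new
        samples.append(x)
    return np.array(samples)

std_normal = lambda x: (-0.5 * x * x, -x)   # target: standard normal
draws = hmc_sample(std_normal, 0.0)
```

On a unimodal target this works well; the paper's point is that on well-separated modes these trajectories almost never cross the low-density region between them, which is what the wormhole metric is designed to fix.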
Zhang, Zhenbin; Sun, Liangliang; Zhu, Guijie; Yan, Xiaojing; Dovichi, Norman J
2015-06-01
A sulfonate-silica hybrid strong cation-exchange (SCX) monolith was synthesized at the proximal end of a capillary zone electrophoresis column and used for on-line solid-phase extraction (SPE) sample preconcentration. Sample was prepared in an acidic buffer and deposited onto the SCX-SPE monolith and eluted using a basic buffer. Electrophoresis was performed in an acidic buffer. This combination of buffers results in formation of a dynamic pH junction, which allows use of relatively large elution buffer volume while maintaining peak efficiency and resolution. All experiments were performed with a 50 µm ID capillary, a 1 cm long SCX-SPE monolith, a 60 cm long separation capillary, and an electrokinetically pumped nanospray interface. The volume of the capillary is 1.1 µL. By loading 21 µL of a 1×10(-7) M angiotensin II solution, an enrichment factor of 3000 compared to standard electrokinetic injection was achieved on this platform while retaining efficient electrophoretic performance (N=44,000 plates). The loading capacity of the sulfonate SCX hybrid monolith was determined to be ~15 pmol by frontal analysis with 10(-5) M angiotensin II. The system was also applied to the analysis of a 10(-4) mg/mL bovine serum albumin tryptic digest; the protein coverage was 12% and 11 peptides were identified. Finally, by loading 5.5 µL of a 10(-3) mg/mL E. coli digest, 109 proteins and 271 peptides were identified in a 20 min separation; the median separation efficiency generated by these peptides was 25,000 theoretical plates. PMID:25863379
Isotropic Monte Carlo Grain Growth
Energy Science and Technology Software Center (ESTSC)
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
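A Potts-model grain-growth step of this kind is easy to sketch. The sketch below uses a periodic square grid with four neighbours rather than the hexagonal grid of IMCGG, and a plain isotropic boundary energy (one unit per unlike bond); at kT = 0 only moves that do not increase boundary energy are accepted:

```python
import numpy as np

def boundary_length(grid):
    """Number of unlike nearest-neighbour bonds on a periodic square grid."""
    return int(np.sum(grid != np.roll(grid, 1, 0)) +
               np.sum(grid != np.roll(grid, 1, 1)))

def grain_growth_sweep(grid, rng, kT=0.0):
    """One Monte Carlo sweep of isotropic Potts-model grain growth:
    each trial lets a random site adopt a random neighbour's orientation."""
    n = grid.shape[0]
    nbrs = ((-1, 0), (1, 0), (0, -1), (0, 1))
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        di, dj = nbrs[rng.integers(0, 4)]
        new, old = grid[(i + di) % n, (j + dj) % n], grid[i, j]
        if new == old:
            continue
        # change in the number of unlike bonds around site (i, j)
        dE = sum(int(new != grid[(i + a) % n, (j + b) % n]) -
                 int(old != grid[(i + a) % n, (j + b) % n]) for a, b in nbrs)
        if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
            grid[i, j] = new
    return grid

rng = np.random.default_rng(10)
grid = rng.integers(0, 20, size=(32, 32))   # 20 random initial orientations
b0 = boundary_length(grid)
for _ in range(30):
    grain_growth_sweep(grid, rng)
b1 = boundary_length(grid)
```

Total boundary length can only decrease at kT = 0, which is the Monte Carlo analogue of curvature-driven coarsening; the misorientation- and inclination-dependent variant replaces the unit bond energy with an orientation-dependent one.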
Ozhikandathil, J.; Packirisamy, M.
2012-01-01
Integration of nano-materials in optical microfluidic devices facilitates the realization of miniaturized analytical systems with enhanced sensing abilities for biological and chemical substances. In this work, a novel method of integration of gold nano-islands in a silica-on-silicon-polydimethylsiloxane microfluidic device is reported. The device works based on the nano-enhanced evanescence technique achieved by interacting the evanescent tail of propagating wave with the gold nano-islands integrated on the core of the waveguide resulting in the modification of the propagating UV-visible spectrum. The biosensing ability of the device is investigated by finite-difference time-domain simulation with a simplified model of the device. The performance of the proposed device is demonstrated for the detection of recombinant growth hormone based on antibody-antigen interaction. PMID:24106526
Markov Chain Monte Carlo Usher's Algorithm
Universität Bremen
Lecture slides (Holger Schultheis, 04.11.2014) covering Markov chain Monte Carlo: basic concepts, the Metropolis algorithm and simulated annealing, Usher's algorithm, and MCMC for parameter optimization.
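The Metropolis algorithm covered in such a lecture can be stated in a few lines; simulated annealing is the same acceptance rule with the log-density scaled by a decreasing temperature. A random-walk sketch targeting a standard normal, with illustrative settings:

```python
import numpy as np

def metropolis(logp, x0, n_samples=20000, step=1.0, seed=5):
    """Random-walk Metropolis sampling of an unnormalized log-density.
    Simulated annealing uses the same move rule with logp divided by a
    temperature that is gradually lowered toward zero."""
    rng = np.random.default_rng(seed)
    x, lx = x0, logp(x0)
    out = np.empty(n_samples)
    for k in range(n_samples):
        y = x + rng.normal(0.0, step)          # symmetric proposal
        ly = logp(y)
        # accept with probability min(1, p(y)/p(x))
        if np.log(rng.random()) < ly - lx:
            x, lx = y, ly
        out[k] = x
    return out

rw_draws = metropolis(lambda x: -0.5 * x * x, 0.0)   # target: standard normal
```

Only ratios p(y)/p(x) appear, so the normalizing constant of the target is never needed, which is the property that makes the method usable for parameter optimization and Bayesian inference alike.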
Hybrid computer optimization of systems with random parameters
NASA Technical Reports Server (NTRS)
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
NASA Astrophysics Data System (ADS)
Karioja, Pentti; Mäkinen, Jukka-Tapani; Keränen, Kimmo; Aikio, Janne; Alajoki, Teemu; Jaakola, Tuomo; Koponen, Matti; Keränen, Antti; Heikkinen, Mikko; Tuomikoski, Markus; Suhonen, Riikka; Hakalahti, Leena; Kopola, Pälvi; Hast, Jukka; Liedert, Ralf; Hiltunen, Jussi; Masuda, Noriyuki; Kemppainen, Antti; Rönkä, Kari; Korhonen, Raimo
2012-04-01
This paper presents research activities carried out at VTT Technical Research Centre of Finland in the field of hybrid integration of optics, electronics and mechanics. The main focus area of our research is the manufacturing of electronic modules and product structures with printed electronics, film-over-molding and polymer sheet lamination technologies, with the goal of enabling the next generation of smart systems utilizing monolithic polymer packages. The combination of manufacturing technologies such as roll-to-roll printing, injection molding and traditional component assembly is called Printed Hybrid Systems (PHS). Several demonstrator structures have been made, which show the potential of polymer packaging technology. One demonstrator example is a laminated structure with embedded LED chips. Element thickness is only 0.3 mm, and the flexible stack of foils can be bent in two directions after the assembly process and was shaped into a curved form using heat and pressure. The combination of printed flexible circuit boards and injection molding has also been demonstrated with several functional modules. The demonstrators illustrate the potential of origami electronics, which can be cut and folded into 3D shapes. They show that several manufacturing process steps can be eliminated by Printed Hybrid Systems technology. The main benefits of this combination are small size, ruggedness and conformality. The devices are ideally suited for medical applications as the sensitive electronic components are well protected inside the plastic and the structures can be cleaned easily due to the fact that they have no joints or seams that can accumulate dirt or bacteria.
SPQR: a Monte Carlo reactor kinetics code. [LMFBR
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
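The lectures' running example, Metropolis-Hastings updates of a real scalar field on a lattice, reduces in one dimension to the following sketch (lattice units; the quartic coupling is optional and set to zero here):

```python
import numpy as np

def metropolis_phi4_1d(n_sites=32, m2=1.0, lam=0.0, n_sweeps=500, seed=6):
    """Metropolis-Hastings updates for a real scalar field on a 1-D
    periodic lattice with Euclidean action
    S = sum_x [ 0.5*(phi[x+1]-phi[x])^2 + 0.5*m2*phi[x]^2 + lam*phi[x]^4 ].
    Returns the average of <phi^2> over the second half of the sweeps
    and the acceptance rate."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_sites)

    def dS(i, new):
        # change in action from the terms touching site i only
        old = phi[i]
        left, right = phi[(i - 1) % n_sites], phi[(i + 1) % n_sites]
        s_old = 0.5 * ((old - left) ** 2 + (right - old) ** 2) \
                + 0.5 * m2 * old ** 2 + lam * old ** 4
        s_new = 0.5 * ((new - left) ** 2 + (right - new) ** 2) \
                + 0.5 * m2 * new ** 2 + lam * new ** 4
        return s_new - s_old

    acc = 0
    phi2 = []
    for _ in range(n_sweeps):
        for i in range(n_sites):
            new = phi[i] + rng.uniform(-1.0, 1.0)
            if rng.random() < np.exp(-dS(i, new)):
                phi[i] = new
                acc += 1
        phi2.append(np.mean(phi * phi))
    return np.mean(phi2[n_sweeps // 2:]), acc / (n_sweeps * n_sites)

phi2_mean, acc_rate = metropolis_phi4_1d()
```

For the free field (lam = 0) the exact large-volume value is ⟨φ²⟩ = 1/√(m²(m² + 4)), about 0.447 at m² = 1, which gives a useful correctness check on the sampler.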
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
Cemgil, A T; 10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulations suggest better performance with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
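The sequential Monte Carlo (particle filter) side of the comparison can be illustrated with a bootstrap filter on a deliberately simplified tempo model — a random-walk tempo observed through noisy inter-onset intervals — rather than the paper's full switching state-space model. All parameters are illustrative:

```python
import numpy as np

def bootstrap_filter(obs, n_particles=1000, q=0.001, r=0.01, seed=7):
    """Bootstrap particle filter for a toy tempo model: the tempo follows
    a random walk (variance q per step) and each observed inter-onset
    interval equals the tempo plus Gaussian noise (variance r)."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(1.0, 0.2, n_particles)   # initial tempo guesses (s/beat)
    means = []
    for y in obs:
        parts = parts + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (y - parts) ** 2 / r)                   # weight
        w /= w.sum()
        means.append(np.dot(w, parts))                            # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)           # resample
        parts = parts[idx]
    return np.array(means)

true_tempo = 0.5
obs = np.full(50, true_tempo) + np.random.default_rng(8).normal(0, 0.1, 50)
est = bootstrap_filter(obs)
```

Starting from a deliberately wrong prior centered at 1.0 s/beat, the filtered mean converges toward the true tempo as observations accumulate; the paper's model adds discrete switch variables for the score positions on top of this continuous state.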
QED multiphoton corrections to Bhabha scattering at low angles: Monte Carlo solution
NASA Astrophysics Data System (ADS)
Jadach, S.; Richter-Wąs, E.; Ward, B. F. L.; Wąs, Z.
1991-10-01
We calculate the QED-corrected integrated cross-section for small-angle Bhabha scattering with kinematical cuts very close to the real experimental cuts at the LEP/SLC experiments - no distinction between electrons and photons is made. We effectively compare up to four independent Monte Carlo calculations and one semi-analytical calculation. The aim of the exercise is to establish the precision of the O(α) Monte Carlo calculation with the exclusive exponentiation of the Yennie-Frautschi-Suura type. It provides the integrated cross-section for the low-angle Bhabha scattering process (θ < 10°), necessary for luminosity measurement, with an overall precision of 0.25%. The corresponding computer program BHLUMI 2.00 is in the form of a stand-alone Monte Carlo event generator. The complete and explicit definition of the multiphoton matrix element is given. Examples of numerical results from Monte Carlo phase space integration are demonstrated and discussed.
Continuous-time quantum Monte Carlo using worm sampling
NASA Astrophysics Data System (ADS)
Gunacker, P.; Wallerberger, M.; Gull, E.; Hausoel, A.; Sangiovanni, G.; Held, K.
2015-10-01
We present a worm sampling method for calculating one- and two-particle Green's functions using continuous-time quantum Monte Carlo simulations in the hybridization expansion (CT-HYB). Instead of measuring Green's functions by removing hybridization lines from partition function configurations, as in conventional CT-HYB, the worm algorithm directly samples the Green's function. We show that worm sampling is necessary to obtain general two-particle Green's functions which are not of density-density type and that it improves the sampling efficiency when approaching the atomic limit. Such two-particle Green's functions are needed to compute off-diagonal elements of susceptibilities and occur in diagrammatic extensions of the dynamical mean-field theory and in efficient estimators for the single-particle self-energy.
A general method for debiasing a Monte Carlo estimator
McLeish, Don
2010-01-01
Consider a process, stochastic or deterministic, obtained by using a numerical integration scheme, or from Monte-Carlo methods involving an approximation to an integral, or a Newton-Raphson iteration to approximate the root of an equation. We will assume that we can sample from the distribution of the process from time 0 to finite time n. We propose a scheme for unbiased estimation of the limiting value of the process, together with estimates of standard error, and apply this to examples including numerical integrals, root-finding and option pricing in a Heston Stochastic Volatility model. This results in unbiased estimators in place of biased ones in many potential applications.
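The core idea — turning a convergent sequence of biased estimates into a single unbiased one — can be sketched with a single-term randomized-truncation estimator: draw a random truncation level N and reweight the N-th increment by its probability. This is a minimal sketch of the general debiasing idea under illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def debiased_limit(delta, p, rng):
    """Single-term randomized-truncation estimator: draw a geometric level
    N (P(N = n) = p*(1-p)^(n-1), n >= 1) and return delta(N)/P(N = n),
    which is unbiased for sum_n delta(n), i.e. the limit of the sequence
    a_n = a_0 + sum_{k<=n} delta(k)."""
    n = rng.geometric(p)
    return delta(n) / (p * (1.0 - p) ** (n - 1))

# toy sequence a_n = 1 - 2^{-n} (a_0 = 0) converges to 1;
# its increments are delta(n) = a_n - a_{n-1} = 2^{-n}
rng = np.random.default_rng(9)
delta = lambda n: 2.0 ** (-n)
est = np.mean([debiased_limit(delta, 0.6, rng) for _ in range(20000)])
```

Unbiasedness holds because E[delta(N)/P(N)] telescopes to the limit of a_n; the choice of p trades off the expected amount of work per draw against the variance of the estimator (p must decay slowly enough relative to the increments for the variance to stay finite).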
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code required. PMID:25110745
Geissler, Matthias; Clime, Liviu; Hoa, Xuyen D; Morton, Keith J; Hébert, Harold; Poncelet, Lucas; Mounier, Maxence; Deschênes, Mylène; Gauthier, Martine E; Huszczynski, George; Corneau, Nathalie; Blais, Burton W; Veres, Teodor
2015-10-20
We describe the translation of a cloth-based hybridization array system (CHAS), a colorimetric DNA detection method that is used by food inspection laboratories for colony screening of pathogenic agents, onto a microfluidic chip format. We also introduce an articulated centrifugal platform with a novel fluid manipulation concept based on changes in the orientation of the chip with respect to the centrifugal force field to time the passage of multiple components required for the process. The platform features two movable and motorized carriers that can be reoriented on demand between 0 and 360° during stage rotation. Articulation of the chip can be used to trigger on-the-fly fluid dispensing through independently addressable siphon structures or to relocate solutions against the centrifugal force field, making them newly accessible for downstream transfer. With the microfluidic CHAS, we achieved significant reduction in the size of the cloth substrate as well as the volume of reagents and wash solutions. Both the chip design and the operational protocol were optimized to perform the entire process in a reliable, fully automated fashion. A demonstration with PCR-amplified genomic DNA confirms on-chip detection and identification of Escherichia coli O157:H7 from colony isolates in a colorimetric multiplex assay using rfbO157, fliCH7, vt1, and vt2 genes. PMID:26416260
Enhanced Neoclassical Polarization: Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Xiao, Yong; Molvig, Kim; Ernst, Darin; Hallatschek, Klaus
2003-10-01
The theoretical prediction of enhanced neoclassical polarization (K. Molvig, Yong Xiao, D. R. Ernst, K. Hallatschek, Sherwood Fusion Theory Conference, 2003) in a tokamak plasma is investigated numerically using a Monte Carlo approach to combine the effects of collisions with guiding center tokamak orbits. The collisionless, kinematic contribution to the polarization first calculated by Rosenbluth and Hinton (M.N. Rosenbluth and F.L. Hinton, Phys. Rev. Lett. 80, 724 (1998)) is reproduced from the orbits directly. A fifth-order Runge-Kutta orbit integrator is used to give extremely high orbit accuracy. The cancellation of opposite trapped and circulating particle radial flows is verified explicitly in this simulation. The Monte Carlo representation of pitch angle scattering collisions (X.Q. Xu and M.N. Rosenbluth, Phys. Fluids B 3, 627 (1991)) is used to compute the collisional processes. The numerical simulation determines the generalized Fokker-Planck coefficients used as the basis for transport in the Lagrangian formulation (I.B. Bernstein and K. Molvig, Phys. Fluids 26, 1488 (1983)) of transport theory. The computation generates the banana diffusion coefficient and the correlated cross coefficient responsible for the enhanced polarization. The numerical procedure generates smooth coefficients and resolves the analytic singularity that occurs at the trapped-circulating boundary.
Genomic Networks of Hybrid Sterility
Turner, Leslie M.; White, Michael A.; Tautz, Diethard; Payseur, Bret A.
2014-01-01
Hybrid dysfunction, a common feature of reproductive barriers between species, is often caused by negative epistasis between loci (“Dobzhansky-Muller incompatibilities”). The nature and complexity of hybrid incompatibilities remain poorly understood because identifying interacting loci that affect complex phenotypes is difficult. With subspecies in the early stages of speciation, an array of genetic tools, and detailed knowledge of reproductive biology, house mice (Mus musculus) provide a model system for dissecting hybrid incompatibilities. Male hybrids between M. musculus subspecies often show reduced fertility. Previous studies identified loci and several X chromosome-autosome interactions that contribute to sterility. To characterize the genetic basis of hybrid sterility in detail, we used a systems genetics approach, integrating mapping of gene expression traits with sterility phenotypes and QTL. We measured genome-wide testis expression in 305 male F2s from a cross between wild-derived inbred strains of M. musculus musculus and M. m. domesticus. We identified several thousand cis- and trans-acting QTL contributing to expression variation (eQTL). Many trans eQTL cluster into eleven ‘hotspots,’ seven of which co-localize with QTL for sterility phenotypes identified in the cross. The number and clustering of trans eQTL—but not cis eQTL—were substantially lower when mapping was restricted to a ‘fertile’ subset of mice, providing evidence that trans eQTL hotspots are related to sterility. Functional annotation of transcripts with eQTL provides insights into the biological processes disrupted by sterility loci and guides prioritization of candidate genes. Using a conditional mapping approach, we identified eQTL dependent on interactions between loci, revealing a complex system of epistasis. Our results illuminate established patterns, including the role of the X chromosome in hybrid sterility. 
The integrated mapping approach we employed is applicable in a broad range of organisms and we advocate for widespread adoption of a network-centered approach in speciation genetics. PMID:24586194
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summaryProgram title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator) Catalogue identifier: AEFJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 69 634 No. of bytes in distributed program, including test data, etc.: 3 980 776 Distribution format: tar.gz Programming language: C Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed Operating system: 32 bit and 64 bit Linux RAM: Typically a few MBs Classification: 11.1 External routines: GLoBES [1,2] and routines/libraries used by GLoBES Subprograms used:Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439 Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. 
In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods, such as the multi-dimensional projections and minimizations used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space, with a complexity that does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experiment definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how thoroughly the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References: P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
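The engine behind this kind of parameter-space sampling is a random-walk Metropolis update. The sketch below is a minimal, self-contained Python illustration of that idea, not MonteCUBES code: the toy log_posterior (a standard normal) stands in for a GLoBES-style likelihood, and all names are illustrative.

```python
import math
import random

def log_posterior(theta):
    # Toy target: standard-normal log-density, standing in for a
    # GLoBES-style likelihood (illustrative assumption, not the real API).
    return -0.5 * theta * theta

def metropolis(n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampler for a single parameter."""
    rng = random.Random(seed)
    theta, lp = 0.0, log_posterior(0.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop)
        # Accept with probability min(1, p(prop) / p(theta)).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

chain = metropolis(50000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

Each step costs one likelihood evaluation regardless of dimension, which is why MCMC complexity does not grow exponentially with the number of parameters.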
Improved Evolutionary Hybrids for Flexible Ligand Docking in Autodock
Belew, R.K.; Hart, W.E.; Morris, G.M.; Rosin, C.
1999-01-27
In this paper we evaluate the design of the hybrid evolutionary algorithms (EAs) that are currently used to perform flexible ligand binding in the Autodock docking software. Hybrid EAs incorporate specialized operators that exploit domain-specific features to accelerate an EA's search. We consider hybrid EAs that use an integrated local search operator to refine individuals within each iteration of the search. We evaluate several factors that impact the efficacy of a hybrid EA, and we propose new hybrid EAs that provide more robust convergence to low-energy docking configurations than the methods currently available in Autodock.
Parametric Learning and Monte Carlo Optimization
Wolpert, David H
2007-01-01
This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only th...
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Energy Science and Technology Software Center (ESTSC)
2005-10-15
HybSim (short for Hybrid Simulator) is a flexible, easy-to-use screening tool that allows the user to quantify the technical and economic benefits of installing a village hybrid generating system. It simulates systems with any combination of diesel generator sets, photovoltaic arrays, wind turbines, and battery energy storage systems. Most village systems (or small population sites such as villages, remote military bases, small communities, and independent or isolated buildings or centers) depend on diesel generation systems for their source of energy. HybSim allows the user to identify other sources of energy that can greatly reduce the dollar-per-kilowatt-hour ratio. Supported by the DOE Energy Storage Program, HybSim was initially developed to help analyze the benefits of energy storage systems in Alaskan villages. Soon after its development, other sources of energy were added, providing the user with a greater range of analysis opportunities and providing the village with potentially added savings. In addition to village systems, HybSim has generated interest from military institutions in energy provisions and from USAID for international village analysis.
Multivariate normal integration
NASA Technical Reports Server (NTRS)
Falls, L. W.; Carter, M. C.
1976-01-01
Monte Carlo program evaluates integrals over rectangular regions for dimensions less than six and over elliptical regions in bivariate case. Program gives positive definite symmetric variance/covariance matrix factorization and calculates reciprocal of lower triangular matrix and product of diagonal elements of triangular matrix.
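The factorization step described above can be sketched in a bivariate setting: hand-code the Cholesky (lower-triangular) factor of the 2x2 covariance matrix, draw correlated normals as L times independent normals, and count hits in the rectangular region. This is a generic illustration, not the original program; all names are mine.

```python
import math
import random

def chol2(s11, s12, s22):
    """Lower-triangular Cholesky factor of a 2x2 covariance matrix."""
    l11 = math.sqrt(s11)
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21 * l21)
    return l11, l21, l22

def mvn_rect_prob(lo, hi, cov, n=200000, seed=2):
    """Monte Carlo estimate of P(lo <= (X, Y) <= hi) for a zero-mean
    bivariate normal with covariance cov = (s11, s12, s22)."""
    l11, l21, l22 = chol2(*cov)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = l11 * z1                 # correlated draw (x, y) = L z
        y = l21 * z1 + l22 * z2
        if lo[0] <= x <= hi[0] and lo[1] <= y <= hi[1]:
            hits += 1
    return hits / n

p = mvn_rect_prob((-1.0, -1.0), (1.0, 1.0), (1.0, 0.5, 1.0))
```

In the independent case (zero covariance) the answer factorizes into two one-dimensional normal probabilities, which gives a quick sanity check on the sampler.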
HOPSPACK: Hybrid Optimization Parallel Search Package.
Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica
2008-12-01
In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-07-01
In this study, we report the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (the multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized for computation, the present approach has the potential to be one of the most efficient procedures to treat chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl- + CH3Cl → ClCH3 + Cl-) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
NASA Astrophysics Data System (ADS)
Congote, J.; Moreno, A.; Kabongo, L.; Pérez, J.-L.; San-José, R.; Ruiz, O.
2012-10-01
Visualisation of city models (buildings, structures, and volumetric information) is an important task in Computer Graphics and Urban Planning. The different formats and data sources involved make the development of visualisation applications a big challenge. We present a homogeneous web visualisation framework using X3DOM and MEDX3DOM for these urban objects. We present an integration of different declarative data sources, enabling the use of advanced visualisation algorithms to render the models. The framework has been tested with a city model composed of buildings from the Madrid University Campus, volumetric datasets coming from air quality models, and 2D wind-layer datasets. Results show that visualisation of all the urban models can be performed in real time on the Web. An HTML5 web interface is presented to the users, enabling real-time modification of visualisation parameters.
NASA Astrophysics Data System (ADS)
Berger, C. E.; Anderson, E. R.; Drut, J. E.
2015-05-01
We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. Complementing those results, we show the corresponding density profiles. The calculations were performed with a lattice Monte Carlo approach based on a nonuniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and can be generalized to traps of other shapes. In all cases, it yields a position-dependent coupling and a corresponding nonuniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions. We present results for N = 4, ..., 20 particles (although the method can be extended beyond that) to cover the range from few- to many-particle systems. This method is exact up to statistical and systematic uncertainties, which we account for, and thus also represents an ab initio calculation of this system, providing a benchmark for other methods and a prediction for ultracold-atom experiments.
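The Gauss-Hermite points and weights on which the nonuniform lattice is built can be illustrated with the hardcoded three-point rule below. This is a generic quadrature sketch, not the authors' code; the rule integrates exp(-x^2) times any polynomial of degree up to five exactly.

```python
import math

# Three-point Gauss-Hermite rule: nodes 0 and +/- sqrt(3/2), with
# weights 2*sqrt(pi)/3 and sqrt(pi)/6 respectively.
nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
weights = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3, math.sqrt(math.pi) / 6]

def gh_integral(f):
    """Approximate the integral of exp(-x^2) * f(x) over the real line."""
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Second moment of the Gaussian weight: the exact value is sqrt(pi)/2.
second_moment = gh_integral(lambda x: x * x)
```

The weights sum to sqrt(pi), the integral of the bare Gaussian weight, which is a convenient consistency check on any tabulated rule.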
C. E. Berger; E. R. Anderson; J. E. Drut
2015-06-08
We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. Complementing those results, we show the corresponding density profiles. The calculations were performed with a new lattice Monte Carlo approach based on a non-uniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and can be generalized to traps of other shapes. In all cases, it yields a position-dependent coupling and a corresponding non-uniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions. We present results for N=4,...,20 particles (although the method can be extended beyond that) to cover the range from few- to many-particle systems. This method is also exact up to statistical and systematic uncertainties, which we account for -- and thus also represents the first ab initio calculation of this system, providing a benchmark for other methods and a prediction for ultracold-atom experiments.
Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040
NASA Technical Reports Server (NTRS)
Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.
2012-01-01
Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating-to-mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs.
Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration), where it has been appraised at maturity level 3.
Hybrid inflation along waterfall trajectories
Sebastien Clesse
2011-02-28
We identify a new inflationary regime in which more than 60 e-folds are generated classically during the waterfall phase occurring after the usual hybrid inflation. By performing a Bayesian Markov chain Monte Carlo analysis, this scenario is shown to take place in a large part of the parameter space of the model. When this occurs, the observable perturbation modes leave the Hubble radius during waterfall inflation. The power spectrum of adiabatic perturbations is red, possibly in agreement with CMB constraints. Particular attention has been given to studying only the regions for which quantum backreactions do not affect the classical dynamics. Implications concerning preheating and the absence of topological defects in our universe are discussed.
Michael H. Seymour
2010-08-17
I review the status of the general-purpose Monte Carlo event generators for the LHC, with emphasis on areas of recent physics developments. There has been great progress, especially in multi-jet simulation, but I mention some question marks that have recently arisen.
Monte Carlo calculations of nuclei
Pieper, S.C.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
Is Monte Carlo embarrassingly parallel?
Hoogenboom, J. E.
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
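The loss of speedup can be illustrated with a toy cost model (my own illustrative numbers, not the paper's measurements): if each cycle pays a rendezvous cost that grows with the number of processors p, total time eventually rises with p and speedup peaks at a finite processor count.

```python
def speedup(p, serial_frac=0.01, sync_cost=0.002):
    """Toy Amdahl-style speedup model: a serial fraction, parallel work
    shrinking as 1/p, plus a per-cycle rendezvous (synchronization) term
    assumed to grow linearly with the number of processors p."""
    tp = serial_frac + (1.0 - serial_frac) / p + sync_cost * p
    return 1.0 / tp

curve = [(p, speedup(p)) for p in (1, 2, 4, 8, 16, 32, 64, 128)]
best_p = max(curve, key=lambda pair: pair[1])[0]
```

With these illustrative constants the speedup peaks near p = 16 and then declines, mirroring the observation that execution time can increase beyond a certain number of processors.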
Markov Chain Monte Carlo and Gibbs Sampling
Walsh, Bruce
Appendix 3, Markov Chain Monte Carlo and Gibbs Sampling: Markov chain Monte Carlo (MCMC) methods use the previous sample value to randomly generate the next sample value, generating a Markov chain.
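A minimal Gibbs sampler of the kind the appendix describes: each coordinate of a bivariate normal is drawn in turn from its conditional distribution given the other. The target and parameter values are illustrative.

```python
import math
import random

def gibbs_bivariate_normal(n, rho=0.8, seed=3):
    """Gibbs sampler for a unit-variance bivariate normal with
    correlation rho: alternate draws from the two conditionals."""
    rng = random.Random(seed)
    x = y = 0.0
    s = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    samples = []
    for _ in range(n):
        x = rng.gauss(rho * y, s)    # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, s)    # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

pts = gibbs_bivariate_normal(50000)
corr = sum(x * y for x, y in pts) / len(pts)
```

For unit-variance marginals, E[xy] equals rho, so the sample average of x*y should settle near 0.8.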
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
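The Riemann-sum application mentioned above amounts to averaging the integrand at uniform random points and scaling by the interval length; a short sketch with illustrative names:

```python
import random

def mc_integral(f, a, b, n=100000, seed=4):
    """Monte Carlo estimate of the integral of f over [a, b]: the mean
    of f at n uniform random points times the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(lambda x: x * x, 0.0, 1.0)   # exact value is 1/3
```

This is also a natural classroom vehicle for testing conjectures: the error of the estimate shrinks like 1/sqrt(n) as the sample size grows.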
Novel Quantum Monte Carlo Approaches for Quantum Liquids
NASA Astrophysics Data System (ADS)
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures.
While the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167,169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166,168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J. E. Gubernatis, and J. D. Doll. Comparative Monte Carlo efficiency by Monte Carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: a cellular Potts model approach. Biophysical Journal, 95(12):5661-5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum Monte Carlo for Bose-Fermi mixtures. Physical Review A, 86(5):053606, November 2012.
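The power iteration at the heart of the Monte Carlo Power Method can be sketched deterministically; in the stochastic variant described in the thesis, the matrix-vector product would be estimated by sampling rather than computed exactly. The toy 2x2 matrix below is illustrative.

```python
import random

def power_method(matvec, dim, iters=200, seed=5):
    """Power iteration: repeatedly apply the matrix and renormalize.
    The growth ratio of the iterates converges to the dominant
    eigenvalue; a Monte Carlo variant replaces matvec with a
    stochastic estimate so the matrix never has to be stored."""
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(dim)]
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        lam = max(abs(x) for x in w) / max(abs(x) for x in v)
        m = max(abs(x) for x in w)
        v = [x / m for x in w]       # renormalize to avoid overflow
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]         # eigenvalues 3 and 1
lam, vec = power_method(
    lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)], 2)
```

For a Markov transition matrix the dominant eigenvalue is 1, and deflating it exposes the second eigenvalue, whose magnitude controls the convergence rate the thesis discusses.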
Erickson, Lori
1995-01-01
Monte Carlo modeling techniques using mean information fields (MIF), developed by Torsten Hagerstrand in the 1950s, were integrated with a geographic information system (GIS) to simulate lost person behavior in wilderness areas. Big Bend Ranch State...
Efficient pseudo-random number generation for monte-carlo simulations using graphic processors
NASA Astrophysics Data System (ADS)
Mohanty, Siddhant; Mohanty, A. K.; Carminati, F.
2012-06-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudo-random number generation for GPU programming, as suggested in the NVIDIA CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using the quick-and-dirty algorithm, where the mod operation is not performed explicitly due to unsigned integer overflow. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both CUDA and OpenCL languages.
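A pure-Python transcription of this kind of hybrid generator: three Tausworthe steps combined with one LCG step by XOR, with an explicit 32-bit mask playing the role of the unsigned-integer overflow the abstract mentions. The shift and modulus constants follow the commonly published CUDA recipe; the seeds are illustrative.

```python
M32 = 0xFFFFFFFF  # 32-bit mask, standing in for unsigned overflow in C

def taus_step(z, s1, s2, s3, m):
    """One step of a 32-bit Tausworthe generator."""
    b = ((((z << s1) & M32) ^ z) >> s2)
    return (((z & m) << s3) & M32) ^ b

def lcg_step(z):
    """Quick-and-dirty linear congruential step (mod 2^32 via masking)."""
    return (1664525 * z + 1013904223) & M32

def hybrid_taus(state):
    """Combine three Tausworthe streams and one LCG by XOR; return a
    float in [0, 1) and the updated state tuple."""
    z1, z2, z3, z4 = state
    z1 = taus_step(z1, 13, 19, 12, 4294967294)
    z2 = taus_step(z2, 2, 25, 4, 4294967288)
    z3 = taus_step(z3, 3, 11, 17, 4294967280)
    z4 = lcg_step(z4)
    u = (z1 ^ z2 ^ z3 ^ z4) * 2.3283064365386963e-10   # 1 / 2^32
    return u, (z1, z2, z3, z4)

state = (129, 130, 131, 132)   # seeds must exceed small per-stream minima
vals = []
for _ in range(1000):
    u, state = hybrid_taus(state)
    vals.append(u)
```

On a GPU each thread would carry its own four-word state; here the loop simply checks that the combined stream stays in [0, 1) and looks uniform on average.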
Microwave Engineering: Microwave Integrated Circuits
Iqbal, Sheikh Sharif
EE 407 Microwave Engineering: Microwave Integrated Circuits. Dr. Sheikh Sharif Iqbal, Lectures 27-28. Microwave Integrated Circuits (MICs): there are three types of circuit elements that are used, including Hybrid Microwave Integrated Circuits (HMICs), where solid-state devices and passive elements (both lumped
Monte Carlo Experiments: Design and Implementation.
ERIC Educational Resources Information Center
Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian
2001-01-01
Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)
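A miniature version of the kind of designed Monte Carlo experiment the article walks through: simulate data repeatedly under known truth, apply an estimator, and summarize its performance, here the coverage of a nominal 95% confidence interval. All parameter values are illustrative.

```python
import math
import random

def coverage_study(n_reps=2000, n=30, mu=5.0, sigma=2.0, seed=6):
    """Monte Carlo experiment: for each replication, simulate a sample,
    form a normal-approximation 95% interval for the mean, and record
    whether it covers the true value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = sum(xs) / n
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        half = 1.96 * s / math.sqrt(n)
        if m - half <= mu <= m + half:
            hits += 1
    return hits / n_reps

cov = coverage_study()
```

Because the interval uses the normal critical value instead of the t distribution at n = 30, the empirical coverage comes out slightly below the nominal 95%, exactly the kind of finding such a simulation is designed to surface.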
MARKOV CHAIN MONTE CARLO MATTHEW JOSEPH
May, J. Peter
Markov chain Monte Carlo is an umbrella term for algorithms that use Markov chains to sample from a given probability distribution. This paper is a brief examination of Markov chain Monte Carlo and its usage. We begin by discussing Markov chains and ergodicity.
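Ergodicity, in the sense relevant here, means the chain's long-run occupation frequencies converge to the stationary distribution regardless of the starting state. A two-state sketch (illustrative transition probabilities):

```python
import random

# Two-state chain with P(0 -> 1) = 0.3 and P(1 -> 0) = 0.4.
# Balance gives the stationary distribution pi = (4/7, 3/7).
FLIP = {0: 0.3, 1: 0.4}

def occupation_frequency(steps, start=0, seed=7):
    """Simulate the chain and return the fraction of time in state 0."""
    rng = random.Random(seed)
    state, count0 = start, 0
    for _ in range(steps):
        if rng.random() < FLIP[state]:
            state = 1 - state
        count0 += (state == 0)
    return count0 / steps

freq0 = occupation_frequency(200000)   # should approach 4/7
```

Starting the chain in state 1 instead of state 0 yields the same long-run frequency, which is the ergodic property MCMC relies on.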
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.
NASA Astrophysics Data System (ADS)
Weichsel, T.; Hartung, U.; Kopte, T.; Zschornack, G.; Kreller, M.; Philipp, A.
2015-09-01
A metal ion source prototype has been developed: a combination of magnetron sputter technology with 2.45 GHz electron cyclotron resonance (ECR) ion source technology, a so-called magnetron ECR ion source (MECRIS). An integrated ring-shaped sputter magnetron with an Al target is acting as a powerful metal atom supply in order to produce an intense current of singly charged metal ions. Preliminary experiments show that an Al+ ion current with a density of 167 μA/cm² is extracted from the source at an acceleration voltage of 27 kV. Spatially resolved double Langmuir probe measurements and optical emission spectroscopy were used to study the plasma states of the ion source: sputter magnetron, ECR, and MECRIS plasma. Electron density and temperature as well as Al atom density were determined as a function of microwave and sputter magnetron power. The effect of ECR heating is strongly pronounced in the center of the source. There the electron density is increased by one order of magnitude from 6 × 10⁹ cm⁻³ to 6 × 10¹⁰ cm⁻³ and the electron temperature is enhanced from about 5 eV to 12 eV when the ECR plasma is ignited in the magnetron plasma. Operating the magnetron at constant power, it was observed that its discharge current is raised from 1.8 A to 4.8 A when the ECR discharge was superimposed with a microwave power of 2 kW. At the same time, the discharge voltage decreased from about 560 V to 210 V, clearly indicating a higher plasma density in the MECRIS mode. The optical emission spectrum of the MECRIS plasma is dominated by lines of excited Al atoms and shows a significant contribution of lines arising from singly ionized Al. Plasma emission photography with a CCD camera was used to support the probe measurements and to identify separate plasma emission zones originating from the ECR and magnetron discharges.
Weichsel, T; Hartung, U; Kopte, T; Zschornack, G; Kreller, M; Philipp, A
2015-09-01
A metal ion source prototype has been developed: a combination of magnetron sputter technology with 2.45 GHz electron cyclotron resonance (ECR) ion source technology, a so-called magnetron ECR ion source (MECRIS). An integrated ring-shaped sputter magnetron with an Al target is acting as a powerful metal atom supply in order to produce an intense current of singly charged metal ions. Preliminary experiments show that an Al(+) ion current with a density of 167 μA/cm(2) is extracted from the source at an acceleration voltage of 27 kV. Spatially resolved double Langmuir probe measurements and optical emission spectroscopy were used to study the plasma states of the ion source: sputter magnetron, ECR, and MECRIS plasma. Electron density and temperature as well as Al atom density were determined as a function of microwave and sputter magnetron power. The effect of ECR heating is strongly pronounced in the center of the source. There the electron density is increased by one order of magnitude from 6 × 10(9) cm(-3) to 6 × 10(10) cm(-3) and the electron temperature is enhanced from about 5 eV to 12 eV when the ECR plasma is ignited in the magnetron plasma. Operating the magnetron at constant power, it was observed that its discharge current is raised from 1.8 A to 4.8 A when the ECR discharge was superimposed with a microwave power of 2 kW. At the same time, the discharge voltage decreased from about 560 V to 210 V, clearly indicating a higher plasma density in the MECRIS mode. The optical emission spectrum of the MECRIS plasma is dominated by lines of excited Al atoms and shows a significant contribution of lines arising from singly ionized Al. Plasma emission photography with a CCD camera was used to support the probe measurements and to identify separate plasma emission zones originating from the ECR and magnetron discharges. PMID:26429434
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-01-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{−∞}^{∞} f(x) dx = 1.
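The 1/√N behavior cited above can be checked empirically by tracking the error of a running mean of history scores. The sketch below uses exponential scores with known mean 1 as a stand-in for tally scores; it illustrates the scaling argument, not the paper's PDF-shape diagnostic.

```python
import random

def running_error(n_max, seed=8):
    """Accumulate a running mean of simulated history scores and record
    its absolute error at a few checkpoints; with finite variance the
    error should shrink roughly like 1/sqrt(N)."""
    rng = random.Random(seed)
    total, checkpoints = 0.0, {}
    for i in range(1, n_max + 1):
        total += rng.expovariate(1.0)    # history "score" with mean 1
        if i in (100, 10000, 1000000):
            checkpoints[i] = abs(total / i - 1.0)
    return checkpoints

errs = running_error(1000000)
```

The point of the paper is precisely that such sampled-value checks say nothing about unsampled regions of f(x), which motivates examining the empirical PDF's tail instead.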
Hybrid community energy systems.
Jody, B. J.; Daniels, E. J.; Karvelas, D. E.; Energy Systems
2000-01-01
The availability of efficient, economical, and reliable energy supplies can help attract industry and commercial businesses to a municipality or a region. Efficient use of energy can also improve the air quality and reduce pollution. Therefore, municipalities should explore and encourage the development and implementation of efficient energy systems. Integrated hybrid energy systems can be designed to meet the total energy requirements of large and small communities. These systems can yield significant energy and cost savings when compared with independent systems serving individual units or when compared with the conventional practice of buying power from a utility and producing thermal energy on-site. To maximize energy and cost savings, the design engineer should look beyond the conventional when designing such systems.
Genetic analyses reveal hybridization but no hybrid swarm in one of the world's rarest birds.
Steeves, Tammy E; Maloney, Richard F; Hale, Marie L; Tylianakis, Jason M; Gemmell, Neil J
2010-12-01
Hybridization facilitated by human activities has dramatically altered the evolutionary trajectories of threatened taxa around the globe. Whereas introduced mammalian predators and widespread habitat loss and degradation clearly imperil the recovery and survival of the New Zealand endemic black stilt or kakī (Himantopus novaezelandiae), the risk associated with hybridization between this critically endangered endemic and its self-introduced congener, the pied stilt or poaka (Himantopus himantopus leucocephalus), is less clear. Here, we combine Bayesian admixture analyses of microsatellite data with mitochondrial DNA sequence data to assess the levels of hybridization and introgression between kakī and poaka. We show that birds classified as hybrids on the basis of adult plumage are indeed of hybrid origin and that hybridization between kakī and poaka is both extensive and bidirectional. Despite this, we found almost no evidence for introgression from poaka to kakī, thus negating the popular belief that kakī represent a hybrid swarm. To our knowledge, ours represents the first comprehensive study to document a lack of widespread introgression for a species at risk despite a recent history of extensive bidirectional human-induced hybridization. We attribute this rather surprising result, in part, to reduced reproductive success in female hybrids combined with a transient male-biased kakī sex ratio. To maximize the evolutionary potential of kakī, we use these data to recommend conservation management activities aimed at maintaining the genetic integrity and maximizing the genetic diversity of this iconic rare bird. PMID:21050294
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
Verification of the shift Monte Carlo code with the C5G7 reactor benchmark
Sly, N. C.; Mervin, B. T.; Mosher, S. W.; Evans, T. M.; Wagner, J. C.; Maldonado, G. I.
2012-07-01
Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5-1.60 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP. (authors)
Han, Eun Young; Lee, Choonsik; Mcguire, Lynn; Brown, Tracy L. Y.; Bolch, Wesley E.
2013-08-15
Purpose: To calculate organ S values (mGy/Bq-s) and effective doses per time-integrated activity (mSv/Bq-s) for pediatric and adult family members exposed to an adult male or female patient treated with I-131, using a series of hybrid computational phantoms coupled with a Monte Carlo radiation transport technique. Methods: A series of pediatric and adult hybrid computational phantoms were employed in the study. Three different exposure scenarios were considered: (1) standing face-to-face exposures between an adult patient and pediatric or adult family phantoms at five different separation distances; (2) an adult female patient holding her newborn child; and (3) a 1-yr-old child standing on the lap of an adult female patient. For the adult patient model, two different thyroid-related diseases were considered: hyperthyroidism and differentiated thyroid cancer (DTC), with corresponding internal distributions of 131I. A general-purpose Monte Carlo code, MCNPX v2.7, was used to perform the Monte Carlo radiation transport. Results: The S values show a strong dependency on age and organ location within the family phantoms at short distances. The S values and effective dose per time-integrated activity from the adult female patient phantom are relatively high at shorter distances and to younger family phantoms. At a distance of 1 m, effective doses per time-integrated activity are lower than those values based on the NRC (Nuclear Regulatory Commission) by a factor of 2 for both adult male and female patient phantoms. The S values to target organs from the hyperthyroid-patient source distribution strongly depend on the height of the exposed family phantom, so that their values rapidly decrease with decreasing height of the family phantom. Active marrow of the 10-yr-old phantom shows the highest S values among family phantoms for the DTC-patient source distribution. 
In the exposure scenario of mother and baby, S values and effective doses per time-integrated activity to the newborn and 1-yr-old phantoms for a hyperthyroid-patient source are higher than values for a DTC-patient source. Conclusions: The authors performed realistic assessments of 131I organ S values and effective dose per time-integrated activity from adult patients treated for hyperthyroidism and DTC to family members. In addition, the authors' studies consider Monte Carlo simulated "mother and baby/child" exposure scenarios for the first time. Based on these results, the authors reconfirm the strong conservatism underlying the point-source method recommended by the US NRC. The authors recommend that various factors such as the type of the patient's disease, the age of family members, and the distance/posture between the patient and family members must be carefully considered to provide realistic dose estimates for patient-to-family exposures.
NASA Astrophysics Data System (ADS)
Rozov, V.; Alekseev, A.
2015-08-01
A necessity to address a wide spectrum of engineering problems in ITER determined the need for efficient tools for modeling of the magnetic environment and force interactions between the main components of the magnet system. The assessment of the operating window for the machine, determined by the electro-magnetic (EM) forces, and the check of feasibility of particular scenarios play an important role for ensuring the safety of exploitation. Such analysis-powered prevention of damages forms an element of the Machine Operations and Investment Protection strategy. The corresponding analysis is a necessary step in preparation of the commissioning, which finalizes the construction phase. It shall be supported by the development of the efficient and robust simulators and multi-physics/multi-system integration of models. The developed numerical model of interactions in the ITER magnetic system, based on the use of pre-computed influence matrices, facilitated immediate and complete assessment and systematic specification of EM loads on magnets in all foreseen operating regimes, their maximum values, envelopes and the most critical scenarios. The common principles of interaction in typical bilateral configurations have been generalized for asymmetry conditions, inspired by the plasma and by the hardware, including asymmetric plasma event and magnetic system fault cases. The specification of loads is supported by the technology of functional approximation of nodal and distributed data by continuous patterns/analytical interpolants. The global model of interactions together with the mesh-independent analytical format of output provides the source of self-consistent and transferable data on the spatial distribution of the system of forces for assessments of structural performance of the components, assemblies and supporting structures. 
The numerical model used is fully parametrized, which makes it very suitable for multi-variant and sensitivity studies (positioning, off-normal events, asymmetry, etc). The obtained results and matrices form a basis for a relatively simple and robust force processor as a specialized module of a global simulator for diagnostic, operational instrumentation, monitoring and control, as well as a scenario assessment tool. This paper gives an overview of the model, applied technique, assessed problems and obtained qualitative and quantitative results.
Hybrid mimics and hybrid vigor in Arabidopsis
Wang, Li; Greaves, Ian K.; Groszmann, Michael; Wu, Li Min; Dennis, Elizabeth S.; Peacock, W. James
2015-01-01
F1 hybrids can outperform their parents in yield and vegetative biomass, features of hybrid vigor that form the basis of the hybrid seed industry. The yield advantage of the F1 is lost in the F2 and subsequent generations. In Arabidopsis, from F2 plants that have a F1-like phenotype, we have by recurrent selection produced pure breeding F5/F6 lines, hybrid mimics, in which the characteristics of the F1 hybrid are stabilized. These hybrid mimic lines, like the F1 hybrid, have larger leaves than the parent plant, and the leaves have increased photosynthetic cell numbers, and in some lines, increased size of cells, suggesting an increased supply of photosynthate. A comparison of the differentially expressed genes in the F1 hybrid with those of eight hybrid mimic lines identified metabolic pathways altered in both; these pathways include down-regulation of defense response pathways and altered abiotic response pathways. F6 hybrid mimic lines are mostly homozygous at each locus in the genome and yet retain the large F1-like phenotype. Many alleles in the F6 plants, when they are homozygous, have expression levels different to the level in the parent. We consider this altered expression to be a consequence of transregulation of genes from one parent by genes from the other parent. Transregulation could also arise from epigenetic modifications in the F1. The pure breeding hybrid mimics have been valuable in probing the mechanisms of hybrid vigor and may also prove to be useful hybrid vigor equivalents in agriculture. PMID:26283378
Comparative Monte Carlo efficiency by Monte Carlo analysis.
Rubenstein, B M; Gubernatis, J E; Doll, J D
2010-09-01
We propose a modified power method for computing the subdominant eigenvalue λ2 of a matrix or continuous operator. While useful both deterministically and stochastically, we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing λ2 of various Markov chain transition matrices. As |λ2| of this matrix controls the rate at which Monte Carlo sampling relaxes to a stationary condition, its computation also enabled us to compare efficiencies of several Monte Carlo algorithms as applied to two quite different types of problems. We first computed λ2 for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as functions of temperature and applied magnetic field. Next, we computed λ2 for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis algorithm as a function of temperature and the maximum allowable step size Δ. Based on the λ2 criterion, we found for the Ising models that small lattices appear to give an adequate picture of comparative efficiency and that the heat-bath algorithm is more efficient than the Metropolis algorithm only at low temperatures where both algorithms are inefficient. For the harmonic trap problem, we found that the traditional rule of thumb of adjusting Δ so that the Metropolis acceptance rate is around 50% is often suboptimal. In general, as a function of temperature or Δ, λ2 for this model displayed trends defining optimal efficiency that the acceptance ratio does not. 
The cases studied also suggested that Monte Carlo simulations for a continuum model are likely more efficient than those for a discretized version of the model. PMID:21230207
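As a minimal deterministic sketch of the power-method idea behind this abstract (an analogue for small matrices, not the authors' signed-walker Monte Carlo; the function name and test matrices are illustrative assumptions): for a row-stochastic transition matrix P, the dominant right eigenvector is the all-ones vector, and a left iterate started orthogonal to it stays orthogonal, since ones·(Pᵀx) = (P·ones)·x = ones·x, so the iteration converges to λ2.

```python
import numpy as np

def subdominant_eigenvalue(P, iters=500, seed=0):
    # Deterministic sketch of a modified power method: iterate left vectors
    # x <- P^T x starting orthogonal to the all-ones vector (the dominant
    # right eigenvector of a row-stochastic P); the Rayleigh quotient then
    # converges to the subdominant eigenvalue lambda_2.
    n = P.shape[0]
    ones = np.ones(n) / np.sqrt(n)
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    lam = 0.0
    for _ in range(iters):
        x -= ones * (ones @ x)       # deflate the dominant component
        y = P.T @ x
        lam = (x @ y) / (x @ x)      # Rayleigh-quotient estimate of lambda_2
        x = y / np.linalg.norm(y)
    return lam
```

For a two-state symmetric chain with stay-probability 0.9, the eigenvalues are 1 and 0.8, and the sketch recovers 0.8.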
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.
Monte Carlo approaches to effective field theories
Carlson, J. ); Schmidt, K.E. . Dept. of Physics)
1991-01-01
In this paper, we explore the application of continuum Monte Carlo methods to effective field theory models. Effective field theories, in this context, are those in which a Fock space decomposition of the state is useful. These problems arise both in nuclear and condensed matter physics. In nuclear physics, much work has been done on effective field theories of mesons and baryons. While the theories are not fundamental, they should be able to describe nuclear properties at low energy and momentum scales. After describing the methods, we solve two simple scalar field theory problems; the polaron and two nucleons interacting through scalar meson exchange. The methods presented here are rather straightforward extensions of methods used to solve quantum mechanics problems. Monte Carlo methods are used to avoid the truncation inherent in a Tamm-Dancoff approach and its associated difficulties. Nevertheless, the methods will be most valuable when the Fock space decomposition of the states is useful. Hence, while they are not intended for ab initio studies of QCD, they may prove valuable in studies of light nuclei, or for systems of interacting electrons and phonons. In these problems a Fock space decomposition can be used to reduce the number of degrees of freedom and to retain the rotational symmetries exactly. The problems we address here are comparatively simple, but offer useful initial tests of the method. We present results for the polaron and two non-relativistic nucleons interacting through scalar meson exchange. In each case, it is possible to integrate out the boson degrees of freedom exactly, and obtain a retarded form of the action that depends only upon the fermion paths. Here we keep the explicit bosons, though, since we would like to retain information about the boson components of the states and it will be necessary to keep these components in order to treat non-scalar or interacting bosonic fields.
Womersley, J. . Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Parallel Monte Carlo reactor neutronics
Blomquist, R.N.; Brown, F.B.
1994-03-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.
Computer Techniques for Evaluating the Double Integral.
ERIC Educational Resources Information Center
Walton, Karen D.; Walton, Zachary D.
1992-01-01
Examines the use of the computer to approximate the value of the definite integral normally calculated by mathematical means. Presents four examples using BASIC programs to approximate single and double integrals by numerical integration and the Monte Carlo method. Programs are provided. (MDH)
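The BASIC programs themselves are not reproduced here; a present-day sketch of the Monte Carlo approach to a double integral described in this abstract (function and parameter names are illustrative) might read:

```python
import random

def mc_double_integral(f, ax, bx, ay, by, n=100_000, seed=1):
    # Monte Carlo estimate of the double integral of f over [ax,bx]x[ay,by]:
    # average f at uniformly random points, then scale by the box area.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.uniform(ax, bx)
        y = rng.uniform(ay, by)
        total += f(x, y)
    return (bx - ax) * (by - ay) * total / n
```

For example, the integral of xy over the unit square is exactly 0.25, and the estimate converges to it at the usual 1/sqrt(n) Monte Carlo rate.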
Mobile Hybrid Virtual Reality and Telepresence for Planning and
Bowden, Richard
Keywords: Augmented Reality, Hybrid Virtual Reality, Telepresence, Mobile Communications, Construction. The paper presents an EPSRC and industry funded research project which aims to integrate Virtual
Comparison of Early-stage Design Methods for a Two-mode Hybrid Electric Vehicle
Papalambros, Panos
Kukhyun Ahn et al. The propulsion system of a hybrid electric vehicle (HEV) comprises an engine, transmission, motor, battery, and power electronics. In this paper, two design optimization methods for a two-mode hybrid vehicle are examined: the first integrates
JOURNAL OF LATEX CLASS FILES, CDC 2006 1 A Maximum Principle for Hybrid Optimal Control
Maume-Deschamps, VÃ©ronique
A Maximum Principle for Hybrid Optimal Control Problems (CDC 2006). Considers a class of hybrid optimal control problems, in which the dynamics of the constituent processes take ... the problem of integrating discrete and continuous decision making in this context. Hybrid optimal control
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-09-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated; the process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, one first selects a neutron from the source distribution, projects it through the instrument using either deterministic or probabilistic algorithms to describe its interactions whenever it hits something, and then (if it hits the detector) tallies it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
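A toy version of the history loop sketched in this abstract, reduced to exponential free flights through a purely absorbing slab (the geometry and names are illustrative assumptions, not MCLIB code):

```python
import math
import random

def transmit_fraction(thickness, sigma_t, n=200_000, seed=2):
    # Toy neutron history loop: draw a free-flight distance from an
    # exponential with mean free path 1/sigma_t; a neutron that flies
    # farther than the slab thickness reaches the "detector" and is tallied.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        flight = -math.log(1.0 - rng.random()) / sigma_t  # sampled path length
        if flight > thickness:
            hits += 1
    return hits / n
```

The tallied fraction converges to the analytic attenuation exp(-sigma_t * thickness), which is a handy sanity check for this kind of simulation.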
Recent developments in quantum Monte Carlo simulations with applications for cold gases
NASA Astrophysics Data System (ADS)
Pollet, Lode
2012-09-01
This is a review of recent developments in Monte Carlo methods in the field of ultracold gases. For bosonic atoms in an optical lattice we discuss path-integral Monte Carlo simulations with worm updates and show the excellent agreement with cold atom experiments. We also review recent progress in simulating bosonic systems with long-range interactions, disordered bosons, mixtures of bosons and spinful bosonic systems. For repulsive fermionic systems, determinantal methods at half filling are sign free, but in general no sign-free method exists. We review the developments in diagrammatic Monte Carlo for the Fermi polaron problem and the Hubbard model, and show the connection with dynamical mean-field theory. We end the review with diffusion Monte Carlo for the Stoner problem in cold gases.
Gupta, Avinash
2001-01-01
Hybrid networks are networks that have wired as well as wireless components. Several routing protocols exist for traditional wired networks and mobile ad-hoc networks. However, there are very few routing protocols designed for hybrid networks...
Synchronous parallel kinetic Monte Carlo Diffusion in Heterogeneous Systems
Martinez Saez, Enrique; Hetherly, Jeffery; Caro, Jose A
2010-12-06
A new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm has been developed in order to study the basic mechanisms taking place in diffusion in concentrated alloys under the action of chemical and stress fields. Parallel implementation of the k-MC part is based on a recently developed synchronous algorithm [J. Comput. Phys. 227 (2008) 3804-3823] relying on the introduction of a set of null events that synchronize the time for the different subdomains; added to the parallel efficiency of MD, this provides the computer power required to evaluate jump rates on the fly, incorporating in this way the actual driving force emerging from chemical potential gradients and the actual environment-dependent jump rates. The time gain has been analyzed and the parallel performance reported. The algorithm is tested on simple diffusion problems to verify its accuracy.
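A minimal serial sketch of the null-event synchronization idea (not the actual parallel MD-kMC implementation; the function name, rates, and step counts are illustrative assumptions): every subdomain draws events at the same padded rate, and a subdomain with a smaller real rate fires a do-nothing null event the rest of the time, so all clocks advance in lockstep.

```python
import math
import random

def synchronous_kmc_step_counts(rates, r_max, steps=100_000, seed=3):
    # Null-event synchronization: all subdomains share one total rate r_max.
    # A subdomain with real rate r_i fires a real event with probability
    # r_i / r_max and a null (do-nothing) event otherwise, so a single
    # common exponential time increment serves every subdomain.
    rng = random.Random(seed)
    real = [0] * len(rates)
    t = 0.0
    for _ in range(steps):
        t += -math.log(1.0 - rng.random()) / r_max  # common time increment
        for i, r in enumerate(rates):
            if rng.random() < r / r_max:
                real[i] += 1                         # real event fired
    return t, real
```

A subdomain with rate r_i then performs real events in a fraction r_i / r_max of the synchronized steps, which is the price paid for keeping the subdomain clocks identical.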
Mesoscale hybrid calibration artifact
Tran, Hy D. (Albuquerque, NM); Claudet, Andre A. (Albuquerque, NM); Oliver, Andrew D. (Waltham, MA)
2010-09-07
A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.
From hybrid swarms to swarms of hybrids
Technology Transfer Automated Retrieval System (TEKTRAN)
The introgression of modern humans (Homo sapiens) with Neanderthals 40,000 YBP after a half-million years of separation, may have led to the best example of a hybrid swarm on earth. Modern trade and transportation in support of the human hybrids has continued to introduce additional species, genotyp...
Hybrid cc, Hybrid Automata and Program Verification
Jagadeesan, Radha
Hybrid cc, Hybrid Automata and Program Verification (Extended Abstract). Vineet Gupta, Radha Jagadeesan, Vijay Saraswat. ... the theory of differential equations and real analysis on the one hand, and the theory of programming languages on the other. 1 Introduction. Synchronous programming. Discrete event driven systems [HP85
Present status of vectorized Monte Carlo
Brown, F.B.
1987-01-01
Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.
Use of commercially available radiation hybrid panels.
Dechairo, B M; Carey, A H
2001-05-01
Several panels are available for purchase, and this unit provides updated information on the use of the three commercially available panels and on the interpretation of mapping results using the Internet. Radiation hybrid panels continue to serve as integral biological reagents in physical mapping projects and positional cloning. PMID:18428278
Hawke, R.S.; Asay, J.R.; Hall, C.A.; Konrad, C.H.; Sauve, G.L.; Shahinpoor, M.; Susoeff, A.R.
1993-03-02
A projectile for a railgun that uses a hybrid armature and provides a seed block around part of the outer surface of the projectile to seed the hybrid plasma brush. In addition, the hybrid armature is continuously vaporized to replenish plasma in a plasma armature to provide a tandem armature and provides a unique ridge and groove to reduce plasma blowby.
Hawke, Ronald S. (Livermore, CA); Asay, James R. (Los Lunas, NM); Hall, Clint A. (Albuquerque, NM); Konrad, Carl H. (Albuquerque, NM); Sauve, Gerald L. (Berthoud, CO); Shahinpoor, Mohsen (Albuquerque, NM); Susoeff, Allan R. (Pleasanton, CA)
1993-01-01
A projectile for a railgun that uses a hybrid armature and provides a seed block around part of the outer surface of the projectile to seed the hybrid plasma brush. In addition, the hybrid armature is continuously vaporized to replenish plasma in a plasma armature to provide a tandem armature and provides a unique ridge and groove to reduce plasma blowby.
Hybrid quantum information processing
Furusawa, Akira
2014-12-04
I will briefly explain the definition and advantages of hybrid quantum information processing, which is the hybridization of qubit and continuous-variable technologies. The final goal would be the realization of universal gate sets both for qubit and continuous-variable quantum information processing with the hybrid technologies. For that purpose, qubit teleportation with a continuous-variable teleporter is one of the most important ingredients.
Path Integral representation of quantum particles in fluids: Convergence of observables
NASA Astrophysics Data System (ADS)
Reese, Terrence; Miller, Bruce
2015-03-01
In previous work the Path Integral Monte Carlo (PIMC) technique was used to simulate a low mass quantum particle (qp) in a dense Lennard-Jones 6-12 fluid having the thermodynamic properties of Xenon. Because of the difference in thermal wavelengths between the qp and the fluid molecules, the fluid molecules can be treated classically. This combination of using quantum mechanics for the qp and classical mechanics for the fluid molecules is known as a hybrid model. In the path integral formulation the qp is represented as a closed chain of P classical particles where the quantum uncertainty in the position of the qp is manifested by the finite spread of the polymer chain. The PIMC technique allows standard classical Monte Carlo techniques to be used to compute quantum mechanical equilibrium values like the ortho-Positronium pick-off decay rate. Here we compare the convergence of PIMC for different thermodynamic states, including one near the liquid-vapor critical point of the fluid. We employ the correlation function of the iterated quantum observables to estimate the number of statistically independent configurations in a run and provide an estimate of the standard error.
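A common way to carry out the independent-configuration estimate mentioned in this abstract is through the integrated autocorrelation time: N_eff = N / (1 + 2 Σ_k ρ(k)). The sketch below is a generic estimator of that quantity (not the authors' code; the truncation-at-first-negative-lag rule is one standard convention among several):

```python
import numpy as np

def effective_samples(series, window=None):
    # Estimate the number of statistically independent configurations in a
    # correlated Monte Carlo time series via the integrated autocorrelation
    # time tau = 1 + 2 * sum_k rho(k), truncating the sum at the first
    # negative autocorrelation estimate to control noise.
    x = np.asarray(series, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = np.dot(x, x) / n
    if window is None:
        window = n // 4
    tau = 1.0
    for k in range(1, window):
        rho = np.dot(x[:-k], x[k:]) / ((n - k) * var)
        if rho < 0:                  # truncate at first negative estimate
            break
        tau += 2.0 * rho
    return n / tau
```

For uncorrelated data the estimate returns nearly the full sample count, while for a strongly autocorrelated series (e.g. an AR(1) process with coefficient 0.9, whose exact tau is 19) it is reduced by roughly that factor.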
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis
2005-01-01
An engineering discipline denoted as hybrid power management (HPM) has emerged from continuing efforts to increase energy efficiency and reliability of hybrid power systems. HPM is oriented toward integration of diverse electric energy-generating, energy-storing, and energy-consuming devices in optimal configurations for both terrestrial and outer-space applications. The basic concepts of HPM are potentially applicable at power levels ranging from nanowatts to megawatts. Potential applications include terrestrial power-generation, terrestrial transportation, biotechnology, and outer-space power systems. Instances of this discipline at prior stages of development were reported (though not explicitly labeled as HPM) in three prior NASA Tech Briefs articles: "Ultracapacitors Store Energy in a Hybrid Electric Vehicle"(LEW-16876), Vol. 24, No. 4 (April 2000), page 63; "Photovoltaic Power Station With Ultracapacitors for Storage" (LEW-17177), Vol. 27, No. 8 (August 2003), page 38; and "Flasher Powered by Photovoltaic Cells and Ultracapacitors" (LEW-17246), Vol. 24, No. 10 (October 2003), page 37. As the titles of the cited articles indicate, the use of ultracapacitors as energy-storage devices lies at the heart of HPM. An ultracapacitor is an electrochemical energy-storage device, but unlike in a conventional rechargeable electrochemical cell or battery, chemical reactions do not take place during operation. Instead, energy is stored electrostatically at an electrode/electrolyte interface. The capacitance per unit volume of an ultracapacitor is much greater than that of a conventional capacitor because its electrodes have much greater surface area per unit volume and the separation between the electrodes is much smaller. 
Power-control circuits for ultracapacitors can be simpler than those for batteries, for two reasons: (1) Because of the absence of chemical reactions, charge and discharge currents can be greater than those in batteries, limited only by the electrical resistances of conductors; and (2) whereas the charge level of a battery depends on voltage, temperature, age, and load condition, the charge level of an ultracapacitor, like that of a conventional capacitor, depends only on voltage.
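Since the charge state of an ultracapacitor depends only on voltage, the energy delivered between two operating voltages follows directly from E = (1/2) C V^2. A small illustrative helper (the function name and the capacitance/voltage values in the example are hypothetical, not from the article):

```python
def ultracap_energy_joules(capacitance_f, v_max, v_min=0.0):
    # Energy delivered by an ultracapacitor discharged from v_max to v_min.
    # Because state of charge is a function of voltage alone, this is just
    # the difference of (1/2) C V^2 at the two voltages.
    return 0.5 * capacitance_f * (v_max ** 2 - v_min ** 2)
```

For instance, a hypothetical 100 F cell rated at 2.7 V stores 364.5 J, and discharging it only down to half voltage still extracts 75% of that energy, which is why partial-depth discharge is common in ultracapacitor power management.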
A study of hybrid ring and branch line couplers utilizing uniplanar asymmetric coplanar stripline
Heimer, Brad Ryan
1997-01-01
This thesis introduces four new uniplanar 3-dB hybrid couplers using asymmetrical coplanar striplines (ACPS) for microwave integrated circuit (MIC) and monolithic microwave integrated circuit (MMIC) applications. The ACPS transmission line...
HYBRID TECHNOLOGY PLATFORMS AND INTEGRATED SYSTEMS
... to many applications are dynamically extracted from a rich data stream. This article surveys a series of projects by the Responsive Environments Group that explore such sensor infrastructures for creating new channels of ... over 300 networked multimodal percussion sensors (the Rhythm Tree) and a handheld baton controller
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
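The Bragg-additivity combination mentioned at the end of this abstract can be sketched as a mass-fraction-weighted sum of elemental mass stopping powers (the numeric values in the example are hypothetical placeholders, not EEDL data):

```python
def bragg_additivity(stopping_powers, mass_fractions):
    # Bragg additivity rule: the mass stopping power of a compound or
    # mixture is the mass-fraction-weighted sum of the mass stopping
    # powers of its constituent elements.
    assert abs(sum(mass_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(s * w for s, w in zip(stopping_powers, mass_fractions))
```

With hypothetical elemental values 10.0 and 2.0 (in arbitrary units) weighted 25%/75% by mass, the compound value is 4.0; in practice the elemental data would come from a library such as EEDL at the particle energy of interest.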
Monte Carlo surface flux tallies
Favorite, Jeffrey A
2010-11-19
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
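The standard practice this abstract revisits can be sketched as a toy per-crossing scorer (an illustrative sketch of the convention, not code from any particular Monte Carlo package; the two-thirds variant discussed above would substitute 1.5/cutoff for the grazing-band score):

```python
def surface_flux_score(mu, cutoff=0.1):
    # Surface-crossing flux estimator: the natural score is 1/|mu| per
    # crossing, where mu is the surface-crossing angle cosine, but for
    # grazing angles (|mu| < cutoff) a fixed score of 2/cutoff is
    # substituted, i.e. dividing by half of the cosine cutoff, so the
    # estimator's variance stays finite.
    amu = abs(mu)
    if amu < cutoff:
        return 2.0 / cutoff
    return 1.0 / amu
```

A normal-incidence crossing (mu = 1) scores 1, a 60-degree crossing (mu = 0.5) scores 2, and anything inside the grazing band scores the same capped value regardless of how shallow the angle is.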
Realistic Monte Carlo Simulation of PEN Apparatus
NASA Astrophysics Data System (ADS)
Glaser, Charles; PEN Collaboration
2015-04-01
The PEN collaboration undertook to measure the π+ → e+νe(γ) branching ratio with a relative uncertainty of 5 × 10^-4 or less at the Paul Scherrer Institute. This observable is highly susceptible to small non-(V-A) contributions, i.e., non-Standard-Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. A GEANT4 Monte Carlo simulation is integral to the analysis, as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed so as to match the pion beam profiles, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level, and proper construction of active-target waveforms and associated noise, enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and positional resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching-ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.
Adaptive multi-stage integrators for optimal energy conservation in molecular simulations
Fernández-Pendás, Mario; Sanz-Serna, J M
2015-01-01
We introduce a new Adaptive Integration Approach (AIA) to be used in a wide range of molecular simulations. Given a simulation problem and a step size, the method automatically chooses the optimal scheme out of an available family of numerical integrators. Although we focus on two-stage splitting integrators, the idea may be used with more general families. In each instance, the system-specific integrating scheme identified by our approach is optimal in the sense that it provides the best conservation of energy for harmonic forces. The AIA method has been implemented in the BCAM-modified GROMACS software package. Numerical tests in molecular dynamics and hybrid Monte Carlo simulations of constrained and unconstrained physical systems show that the method successfully realises the fail-safe strategy. In all experiments, and for each of the criteria employed, the AIA is at least as good as, and often significantly outperforms, the standard Verlet scheme as well as fixed-parameter, optimized two-stage integrators...
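The flavor of the two-stage splitting family can be sketched as follows (a generic kick-drift-kick-drift-kick composition with free parameter b, not the BCAM/GROMACS implementation; the parameter value and harmonic test problem are illustrative). With b = 1/4 the composition reduces to two velocity-Verlet half-steps; an adaptive approach in the spirit of AIA would choose b per system and step size:

```python
def two_stage_step(q, p, h, b, force, m=1.0):
    # One step of a two-stage splitting integrator:
    # B(b h) A(h/2) B((1-2b) h) A(h/2) B(b h), with B a momentum kick
    # and A a position drift.
    p += b * h * force(q)
    q += 0.5 * h * p / m
    p += (1.0 - 2.0 * b) * h * force(q)
    q += 0.5 * h * p / m
    p += b * h * force(q)
    return q, p

# Illustrative test problem: harmonic oscillator, H = p^2/2 + q^2/2
force = lambda q: -q
q, p, h, b = 1.0, 0.0, 0.1, 0.25   # b = 1/4: two velocity-Verlet half-steps
e0 = 0.5 * p * p + 0.5 * q * q
max_err = 0.0
for _ in range(10_000):
    q, p = two_stage_step(q, p, h, b, force)
    e = 0.5 * p * p + 0.5 * q * q
    max_err = max(max_err, abs(e - e0))
print(max_err)   # bounded energy error, no drift over 10^4 steps
```

Because the scheme is symplectic for any b, the harmonic energy error stays bounded; the criterion mentioned in the abstract amounts to minimizing that bound over b.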
NASA Astrophysics Data System (ADS)
Espel, Federico Puente
The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. 
Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods with detailed and accurate thermal-hydraulic models. The development of such a reference high-fidelity coupled multi-physics scheme is described in this dissertation on the basis of the MCNP5, NEM, NJOY and COBRA-TF (CTF) computer codes. This work presents results from studies performed and implemented at the Pennsylvania State University (PSU) on both accelerating Monte Carlo criticality calculations by using hybrid nodal diffusion Monte Carlo schemes and thermal-hydraulic feedback modeling in Monte Carlo core calculations. The hybrid MCNP5/CTF/NEM/NJOY coupled code system is proposed and developed in this dissertation work. The hybrid coupled code system contains a special interface developed to update the required MCNP5 input changes to account for dimension and density changes provided by the thermal-hydraulics feedback module. The interface has also been developed to extract the flux and reaction rates calculated by MCNP5 and to transform the data into the power feedback needed by CTF (axial and radial peaking factors). The interface is contained in a master program that controls the flow of the calculations. Both feedback modules (thermal-hydraulic and power subroutines) use a common internal interface to further accelerate the data exchange. One of the most important steps to correctly include the thermal-hydraulic feedback in MCNP5 calculations begins with temperature-dependent cross section libraries. If the cross sections used for the calculations are not at the correct temperature, the temperature feedback cannot be included in MCNP5 (this refers to the effect of temperature on cross sections: Doppler broadening of resolved and unresolved resonances, thermal scattering, and elastic scattering).
The only method of considering the temperature effects on cross sections is through the generation (or as introduced in this dissertation through a novel interpolation mechanism) of continuous energy temperature-dependent cross section libraries. An automated methodology for generation of continuous energy temperature-dependent cross section libraries has been developed
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. 
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
Building-Integrated Solar Energy Devices based on Wavelength Selective Films
NASA Astrophysics Data System (ADS)
Ulavi, Tejas
A potentially attractive option for building-integrated solar is to employ hybrid solar collectors which serve dual purposes, combining solar thermal technology with either thin-film photovoltaics or daylighting. In this study, two hybrid concepts, a hybrid photovoltaic/thermal (PV/T) collector and a hybrid 'solar window', are presented and analyzed to evaluate technical performance. In both concepts, a wavelength selective film is coupled with a compound parabolic concentrator (CPC) to reflect and concentrate the infrared portion of the solar spectrum onto a tubular absorber. The visible portion of the spectrum is transmitted through the concentrator to either a thin-film Cadmium Telluride (CdTe) solar panel for electricity generation or into the interior space for daylighting. Special attention is given to the design of the hybrid devices for aesthetic building integration. An adaptive concentrator design based on asymmetrical truncation of CPCs is presented for the hybrid solar window concept. The energetic and spectral split between the solar thermal module and the PV or daylighting module are functions of the optical properties of the wavelength selective film and the concentrator geometry, and are determined using a Monte Carlo Ray-Tracing (MCRT) model. Results obtained from the MCRT model can be used in conjunction with meteorological data for specific applications to study the impact of CPC design parameters, including the half-acceptance angle θc, absorber diameter D and truncation, on the annual thermal and PV/daylighting efficiencies. The hybrid PV/T system is analyzed for a rooftop application in Phoenix, AZ. Compared to a system of the same area with independent solar thermal and PV modules, the hybrid PV/T provides 20% more energy annually.
However, the increase in total delivered energy is due solely to the addition of the thermal module and is achieved at an expense of a decrease in the annual electrical efficiency from 8.8% to 5.8% due to shading by the absorber tubes. For this reason, the PV/T hybrid is not recommended over other options in new installations. The hybrid solar window is evaluated for a horizontal skylight and south and east facing vertical windows in Minneapolis, MN. The predicted visible transmittance for the solar window is 0.66 to 0.73 for single glazed systems and 0.61 to 0.67 for double glazed systems. The solar heat gain coefficient and the U-factor for the window are comparable to existing glazing technology. Annual thermal efficiencies of up to 24% and 26% are predicted for the vertical window and the horizontal skylight respectively. Experimental measurements of the solar thermal component of the window confirm the trends of the model. In conclusion, the hybrid solar window combines the functionality of an energy efficient fenestration system with hybrid thermal energy generation to provide a compelling solution towards sustainable design of the built environment.
Global variance reduction for Monte Carlo reactor physics calculations
Zhang, Q.; Abdel-Khalik, H. S.
2013-07-01
Over the past few decades, hybrid Monte-Carlo-Deterministic (MC-DT) techniques have been developed primarily with shielding applications in mind, i.e. problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations on the assembly level, where typically one needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method for preparing homogenized few-group cross section sets on the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that the SUBSPACE method achieves a significant speedup over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
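The premise that correlations among many responses can be capitalized on may be illustrated with a toy sketch (an assumption-laden illustration of low effective rank, not the SUBSPACE algorithm itself): if thousands of responses depend on the inputs through only a few effective degrees of freedom, a small number of randomized model executions already exposes that rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_responses, r = 200, 1000, 5    # illustrative sizes; true rank is r

# Hypothetical low-rank response model: y = B @ (C @ x), rank <= r
B = rng.standard_normal((n_responses, r))
C = rng.standard_normal((r, n_inputs))

# "Execute the model" for a handful of random input perturbations
n_runs = 20
X = rng.standard_normal((n_inputs, n_runs))
Y = B @ (C @ X)                            # each column: one run's responses

# Singular values of the collected responses expose the effective rank
s = np.linalg.svd(Y, compute_uv=False)
effective_rank = int(np.sum(s > 1e-10 * s[0]))
print(effective_rank)                      # 5: far fewer runs than responses
```

Once the dominant subspace is known, only of the order of r (not n_responses) expensive solver executions are needed to characterize all responses, which is the source of the speedup claimed above.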
Quantum Monte Carlo in the era of petascale computers
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Esler, Kenneth; McMinis, Jeremy; Morales, Miguel; Clark, Bryan; Shulenburger, Luke; Ceperley, David
2012-02-01
Continuum quantum Monte Carlo (QMC) methods are a leading contender for high accuracy calculations of the electronic structure of realistic systems, especially on massively parallel high-performance computing (HPC) systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores per SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per core have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem sizes it can handle. (OpenMP,CUDA)/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on multi-petaflop platforms characterized by various memory and communication hierarchies. Also presented are QMC calculations of bulk systems, including defects in semiconductors.
Hindrikson, Maris; Männil, Peep; Ozolins, Janis; Krzywinski, Andrzej; Saarma, Urmas
2012-01-01
Studies on hybridization have proved critical for understanding key evolutionary processes such as speciation and adaptation. However, from the perspective of conservation, hybridization poses a concern, as it can threaten the integrity and fitness of many wild species, including canids. As a result of habitat fragmentation and extensive hunting pressure, gray wolf (Canis lupus) populations have declined dramatically in Europe and elsewhere during recent centuries. Small and fragmented populations have persisted, but often only in the presence of large numbers of dogs, which increase the potential for hybridization and introgression to deleteriously affect wolf populations. Here, we demonstrate hybridization between wolf and dog populations in Estonia and Latvia, and the role of both genders in the hybridization process, using combined analysis of maternal, paternal and biparental genetic markers. Eight animals exhibiting unusual external characteristics for wolves - six from Estonia and two from Latvia - proved to be wolf-dog hybrids. However, one of the hybridization events was extraordinary. Previous field observations and genetic studies have indicated that mating between wolves and dogs is sexually asymmetrical, occurring predominantly between female wolves and male dogs. While this was also the case among the Estonian hybrids, our data revealed the existence of dog mitochondrial genomes in the Latvian hybrids and, together with Y chromosome and autosomal microsatellite data, thus provided the first evidence from Europe of mating between male wolves and female dogs. We discuss patterns of sexual asymmetry in wolf-dog hybridization. PMID:23056315
Summarizing Monte Carlo Results in Methodological Research.
ERIC Educational Resources Information Center
Harwell, Michael R.
Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide…
A hybrid parallel framework for the cellular Potts model simulations
Jiang, Yi; He, Kejing; Dong, Shoubin
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation ({approx}10{sup 8} sites) of the complex collective behavior of numerous cells ({approx}10{sup 6}).
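The Monte Carlo lattice update at the heart of such frameworks can be sketched sequentially (a minimal Potts-type Metropolis sweep over cell-ID labels with a toy boundary-energy Hamiltonian; the lattice size, energies, and temperature are illustrative stand-ins, and the parallel version distributes sweeps like this across OpenMP threads):

```python
import math
import random

def boundary_energy(lattice, n):
    # J = 1 for every unlike nearest-neighbor pair (toy CPM Hamiltonian)
    e = 0
    for i in range(n):
        for j in range(n):
            if lattice[i][j] != lattice[(i + 1) % n][j]:
                e += 1
            if lattice[i][j] != lattice[i][(j + 1) % n]:
                e += 1
    return e

def metropolis_sweep(lattice, n, temp, rng):
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        new = lattice[(i + di) % n][(j + dj) % n]   # copy a neighbor's cell ID
        old = lattice[i][j]
        if new == old:
            continue
        de = 0   # local energy change from relabeling site (i, j)
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            s = lattice[ni % n][nj % n]
            de += (new != s) - (old != s)
        if de <= 0 or (temp > 0 and rng.random() < math.exp(-de / temp)):
            lattice[i][j] = new

n, rng = 16, random.Random(1)
lattice = [[rng.randrange(4) for _ in range(n)] for _ in range(n)]
e_before = boundary_energy(lattice, n)
for _ in range(20):
    metropolis_sweep(lattice, n, 0.0, rng)   # temp = 0: only lowering flips
e_after = boundary_energy(lattice, n)
print(e_before, e_after)                     # energy non-increasing at temp 0
```

Each proposed flip touches only a site and its four neighbors, which is what makes domain-decomposed or checkerboard-style shared-memory parallelization of the sweep practical.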
Structural mapping of Maxwell Montes
NASA Technical Reports Server (NTRS)
Keep, Myra; Hansen, Vicki L.
1993-01-01
Four sets of structures were mapped in the western and southern portions of Maxwell Montes. An early north-trending set of penetrative lineaments is cut by dominant, spaced ridges and paired valleys that trend northwest. To the south the ridges and valleys splay and graben form in the valleys. The spaced ridges and graben are cut by northeast-trending graben. The northwest-trending graben formed synchronously with or slightly later than the spaced ridges. Formation of the northeast-trending graben may have overlapped with that of the northwest-trending graben, but occurred in a spatially distinct area (regions of 2 deg slope). Graben formation, with northwest-southeast extension, may be related to gravity-sliding. Individually and collectively these structures are too small to support the immense topography of Maxwell, and are interpreted as parasitic features above a larger mass that supports the mountain belt.
HYBRID2: A versatile model of the performance of hybrid power systems
NASA Astrophysics Data System (ADS)
Green, H. James; Manwell, James
1995-04-01
In 1993, the National Renewable Energy Laboratory (NREL) made an assessment of the available tools from the United States and Europe for predicting the long-term performance of hybrid power systems. By hybrid power the authors mean combinations of two or more power sources (wind turbines, photovoltaics (PV), diesel gensets, or other generators) into integrated systems for electric power generation in remote locations. Their conclusion was that there was no single, user-friendly tool capable of modeling the full range of hybrid power technologies being considered for the 1990s and beyond. The existing tools were, in particular, lacking flexibility in system configuration and in dispatch of components. As a result, NREL developed a specification for a model, called HYBRID2, for making comparisons of competing technology options on a level playing field. This specification was prepared with a range of potential users in mind, including not only the US Department of Energy (DOE) renewable energy programs, but also the US wind industry, technical consultants, international development institutions/banks, and rural electrification programs in developing countries. During 1994, NREL and its subcontractor, the University of Massachusetts (UMass), began development of HYBRID2 with funding from the DOE Wind Energy Program. It builds on the wind/diesel model HYBRID1, developed previously by UMass, and expands that model to accommodate the wider array of technologies used in hybrid power systems. This paper provides an overview of the model's features, functions, and status.
NASA Astrophysics Data System (ADS)
Dindarloo, Saeid R.; Bagherieh, Amirhossein; Hower, James C.; Calder, John H.; Wagner, Nicola J.
2015-11-01
Markov chain analysis was applied to the description of the megascopic lithologic transitions in Pennsylvanian-age eastern Kentucky coals. Coal lithology modeling can be problematic, as individual lithotypes can represent near-instantaneous events (vitrain), prolonged degradation (durain), or fire-induced loss of previously deposited lithologies (fusain). Each of the latter lithotypes, potentially representing vastly different amounts of time, could be of the same thickness. Therefore, equal thickness does not necessarily imply equal time. Three approaches were employed in the assessment of a section of the No. 5 Block coal (Pennsylvanian Breathitt Group, Martin County, Kentucky): probability transition matrices that employ uniform lithotype thicknesses, allowing transitions between like lithotypes; embedded Markov chains, which consider only transitions between different lithotypes; and continuous-time Markov chains. Embedded Markov chains could successfully simulate the lithologic transitions. A Monte Carlo random process was programmed to simulate thickness variations of lithotypes between the transitions. The proposed hybrid Monte Carlo-Markov chain model was able to predict the random pattern that underlies lithotype transitions and thicknesses. The hybrid Monte Carlo-Markov chain technique proved effective in the case study, simulating both the lithologic thickness variations and transitions.
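The hybrid scheme described above can be sketched in miniature (the transition probabilities and mean thicknesses below are hypothetical placeholders, not the study's fitted values): an embedded Markov chain with zero diagonal generates the lithotype sequence, and an independent Monte Carlo draw supplies each layer's thickness, so thickness and transition order are decoupled just as the abstract argues they should be.

```python
import random

# Hypothetical embedded-chain transition probabilities between lithotypes
# (zero diagonal: an embedded chain records only changes of state).
P = {
    "vitrain": {"clarain": 0.5, "durain": 0.4, "fusain": 0.1},
    "clarain": {"vitrain": 0.6, "durain": 0.3, "fusain": 0.1},
    "durain":  {"vitrain": 0.5, "clarain": 0.4, "fusain": 0.1},
    "fusain":  {"vitrain": 0.7, "clarain": 0.2, "durain": 0.1},
}
# Illustrative mean thicknesses (cm); thickness is sampled independently,
# reflecting the point that equal thickness does not imply equal time.
MEAN_THICKNESS = {"vitrain": 0.8, "clarain": 2.0, "durain": 3.5, "fusain": 0.5}

def simulate_section(n_layers, start, rng):
    state, section = start, []
    for _ in range(n_layers):
        thickness = rng.expovariate(1.0 / MEAN_THICKNESS[state])
        section.append((state, thickness))
        nxt, u, acc = None, rng.random(), 0.0
        for s, p in P[state].items():       # draw the next (different) state
            acc += p
            if u < acc:
                nxt = s
                break
        state = nxt if nxt is not None else s
    return section

section = simulate_section(10, "vitrain", rng=random.Random(7))
for lith, th in section:
    print(f"{lith:8s} {th:5.2f} cm")
```

Because the diagonal of the embedded matrix is zero, no two consecutive layers share a lithotype, which is the defining property of the embedded-chain representation.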
Hybrid Streamers for Polar Seismic
NASA Astrophysics Data System (ADS)
Gifford, C. M.; Agah, A.; Tsoflias, G. P.
2006-12-01
We propose a new hybrid streamer seismic approach for polar regions that incorporates insertion of spiked geophones, the land streamer method of transportation, and mobile robotics. Current land streamers do not plant the geophone spike at each node location on the streamer(s) nor use robotic control. This approach combines the two methods, and is therefore termed "Hybrid Streamers". Land seismic 3D surveying is costly and time consuming due to manual handling of geophones and cables. Multiple streamers make this process simpler by allowing efficient deployment of large numbers of geophones. Hybrid streamers go further to robotically insert the geophone spike at each node location to achieve higher frequency and better resolution seismic images. For deployment and retrieval, the geophone spikes are drilled into the ground, or inserted using heat. This can be accomplished by modifying the geophone spike to be similar to a threaded screw or similar to a soldering iron for polar environments. Heat could help melt the ice during deployment, which would refreeze around the geophone for firm coupling. Heat could also be used to make polar geophone retrieval easier. By ensuring that the towing robots are robust and effective, the problem of single point of failure can be less of an issue. Polar rovers have proven useful in harsh environments, and could be utilized in polar seismic applications. Towing geophone nodes in a tethered fashion not only provides all nodes with power to operate the onboard equipment, but also gives them a medium to transfer data to the towing rover. Hybrid streamers could be used in several ways. One or more hybrid streamers could be tethered and towed by a single robot. Several robots could be used to form a single grid, working in conjunction to image larger areas in three dimensions. Such an approach could speed up entire missions and make efficient use of seismic source ignitions. 
The reduction of human involvement by use of mobile robots makes the hybrid seismic approach attractive. It is an enhancement of the current seismic technology by integrating mobile robots, the streamer structure and mode of transportation, and higher resolution acquisition through spiked geophone nodes.
EXCALIBUR — a Monte Carlo program to evaluate all four-fermion processes at LEP 200 and beyond
NASA Astrophysics Data System (ADS)
Berends, F. A.; Pittau, R.; Kleiss, R.
1995-03-01
A Monte Carlo program is presented that computes all four-fermion processes in e+e- annihilation. QED initial-state corrections and QCD contributions are included. Fermions are taken to be massless, allowing a very fast evaluation of the matrix element. A systematic, modular and self-optimizing strategy has been adopted for the Monte Carlo integration, which also serves as an example for further event generators in high-energy particle physics.
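The self-optimizing multichannel strategy can be illustrated in miniature (a generic two-channel toy integral, not EXCALIBUR's phase-space generator; the channel densities and the variance-based weight update follow the standard multichannel recipe under illustrative assumptions):

```python
import math
import random

def multichannel(f, channels, alphas, n, rng):
    # channels: list of (sampler, density) pairs; mixture g = sum_i a_i g_i
    est, w2 = 0.0, [0.0] * len(channels)
    for _ in range(n):
        i = 0 if rng.random() < alphas[0] else 1      # pick a channel
        x = channels[i][0](rng)
        g = sum(a * dens(x) for a, (_, dens) in zip(alphas, channels))
        w = f(x) / g                                  # importance weight
        est += w
        for j, (_, dens) in enumerate(channels):
            w2[j] += dens(x) * w * w                  # per-channel variance proxy
    est /= n
    # Self-optimization: re-weight channels toward lower total variance
    new = [a * math.sqrt(v / n) for a, v in zip(alphas, w2)]
    s = sum(new)
    return est, [a / s for a in new]

f = lambda x: 3.0 * x * x                             # integral over [0,1] is 1
uniform = (lambda rng: rng.random(), lambda x: 1.0)
linear = (lambda rng: math.sqrt(rng.random()), lambda x: 2.0 * x)
rng = random.Random(3)
est, alphas = multichannel(f, [uniform, linear], [0.5, 0.5], 50_000, rng)
est2, _ = multichannel(f, [uniform, linear], alphas, 50_000, rng)
print(est, est2)   # both close to 1; the second run uses re-tuned weights
```

Iterating the weight update drives the channel mixture toward the combination that minimizes the variance of the estimate, which is what makes the strategy self-optimizing.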
Hybrid image recognition architecture
NASA Astrophysics Data System (ADS)
Delrieux, Claudio; Katz, Roman
2002-06-01
Current research on artificial vision and pattern recognition tends to concentrate either on numerical processing (filtering, morphological, spectral) or on symbolic or subsymbolic processing (neural networks, fuzzy logic, knowledge-based systems). In this work we combine both kinds of processing in a hybrid image processing architecture. The numerical processing part implements the most usual facilities (equalization, convolution filters, morphological filters, segmentation and description) in a way adequate to transform the input image into a polygonal outline. Recognition is then performed with a rule-based system implemented in Prolog. This allows a neat high-level representation of the patterns to recognize as a set of logical relations (predicates), and the recognition procedure is likewise represented as a set of logical rules. To integrate the numerical and logical components of our system, we embedded a Prolog interpreter as a software component within a visual programming language. Thus, our architecture features both the speed and versatility of a visual language application and the abstraction level and modularity of a logical description.
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2007-01-01
The NASA Glenn Research Center's Avionics, Power and Communications Branch of the Engineering and Systems Division initiated the Hybrid Power Management (HPM) Program for the GRC Technology Transfer and Partnership Office. HPM is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The advanced power devices include ultracapacitors and fuel cells. HPM has extremely wide potential. Applications include power generation, transportation systems, biotechnology systems, and space power systems. HPM has the potential to significantly alleviate global energy concerns, improve the environment, and stimulate the economy. One of the unique power devices utilized by HPM for energy storage is the ultracapacitor. An ultracapacitor is an electrochemical energy storage device with extremely high volumetric capacitance, owing to high-surface-area electrodes and very small electrode separation. Ultracapacitors are a reliable, long-life, maintenance-free energy storage system. This flexible operating system can be applied to all power systems to significantly improve system efficiency, reliability, and performance. There are many existing and conceptual applications of HPM.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC).
ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103% (SPECT-fMC). Furthermore, SPECT-fMC recovered whole-body activities were the most accurate (A{sup est} = 1.06 × A − 5.90 MBq, R{sup 2} = 0.97), and SPECT-fMC tumor absorbed doses were significantly higher than with SPECT-DSW (p = 0.031) and SPECT-ppMC+DSW (p = 0.031). Conclusions: The quantitative accuracy of {sup 166}Ho SPECT is improved by Monte Carlo-based modeling of the image-degrading factors. Consequently, the proposed reconstruction method enables accurate estimation of the radiation absorbed dose in clinical practice.
Modeling wall effects in a micro-scale shock tube using hybrid MD-DSMC algorithm
NASA Astrophysics Data System (ADS)
Watvisave, D. S.; Puranik, B. P.; Bhandarkar, U. V.
2015-07-01
Wall effects in a micro-scale shock tube are investigated using the Direct Simulation Monte Carlo method as well as a hybrid Molecular Dynamics-Direct Simulation Monte Carlo algorithm. In the Direct Simulation Monte Carlo simulations, the Cercignani-Lampis-Lord model of gas-surface interactions is employed to incorporate the wall effects, and it is shown that the shock attenuation is significantly affected by the choice of the value of the tangential momentum accommodation coefficient. A loosely coupled Molecular Dynamics-Direct Simulation Monte Carlo approach is then employed to demonstrate incomplete accommodation in micro-scale shock tube flows. This approach uses fixed values of the accommodation coefficients in the gas-surface interaction model, with their values determined from a separate dynamically similar Molecular Dynamics simulation. Finally, a completely coupled Molecular Dynamics-Direct Simulation Monte Carlo algorithm is used, wherein the bulk of the flow is modeled using Direct Simulation Monte Carlo, while the interaction of gas molecules with the shock tube walls is modeled using Molecular Dynamics. The two regions are separate and are coupled both ways using buffer zones and a bootstrap coupling algorithm that accounts for the mismatch of the number of molecules in the two regions. It is shown that the hybrid method captures the effect of local properties that cannot be captured using a single value of the accommodation coefficient for the entire domain.
Hybrid PIC acceleration schemes for PDPs
NASA Astrophysics Data System (ADS)
Christenson, P.; Cartwright, K.; Mardahl, P.; Verboncoeur, J.
1998-11-01
A 2d particle in cell code (PIC) with Monte Carlo collisions (XOOPIC (Verboncoeur et al., Comp. Phys. Comm., 87 (1995))) is used to model breakdown of an atmospheric pressure Ne/Xe gas in 3-electrode ac plasma display panels (PDP). Particle models are required in order to properly resolve ionization rates, electron distribution functions, ion fluxes, and the spatial distribution of charge deposited on the dielectric surfaces. Kinetic simulation of plasmas in which equilibrium occurs over ion time scales poses a computational challenge due to the disparate time scales of the electron and ion plasma frequencies, ion transit and ionization frequencies. In order to retain the spatial and temporal evolution from the early stages of breakdown through the end of glow discharge, it is necessary to employ various methods for accelerating the simulation. Among the methods explored in this paper are hybrid PIC-fluid models, such as a PIC/Boltzmann hybrid, and parallelization of the particle push. The PIC/Boltzmann hybrid electrostatic algorithms allow the electrons to reach thermodynamic equilibrium with the ions each time step, using the nonlinear Boltzmann relationship for the electrons with PIC ion and electron source terms.
Stabilization of Nonholonomic Integrators via Logic-Based Switching
Hespanha, João Pedro
Time-varying periodic controllers, stochastic control laws, and sliding-mode control laws have been proposed to stabilize systems such as the "nonholonomic integrator". Here, a hybrid control law employing switching and logic is proposed instead; the results explain how to stabilize a "nonholonomic integrator" using such a hybrid control law.
Measurement of the cosmic ray energy spectrum using hybrid events of the Pierre Auger Observatory
Mariangela Settimo; for the Pierre Auger Collaboration
2012-10-11
The energy spectrum of ultra-high energy cosmic rays above 10$^{18}$ eV is measured using the hybrid events collected by the Pierre Auger Observatory between November 2005 and September 2010. The large exposure of the Observatory allows the measurement of the main features of the energy spectrum with high statistics. Full Monte Carlo simulations of the extensive air showers (based on the CORSIKA code) and of the hybrid detector response are adopted here as an independent cross check of the standard analysis (Phys. Lett. B 685, 239 (2010)). The dependence on mass composition and other systematic uncertainties are discussed in detail and, in the full Monte Carlo approach, a region of confidence for flux measurements is defined when all the uncertainties are taken into account. An update is also reported of the energy spectrum obtained by combining the hybrid spectrum and that measured using the surface detector array.
Canid hybridization: contemporary evolution in human-modified landscapes.
Stronen, Astrid V; Tessier, Nathalie; Jolicoeur, Hélène; Paquet, Paul C; Hénault, Michel; Villemure, Mario; Patterson, Brent R; Sallows, Tim; Goulet, Gloria; Lapointe, François-Joseph
2012-09-01
Contemporary evolution through human-induced hybridization occurs throughout the taxonomic range. Formerly allopatric species appear especially susceptible to hybridization. Consequently, hybridization is expected to be more common in regions with recent sympatry owing to human activity than in areas of historical range overlap. Coyotes (Canis latrans) and gray wolves (C. lupus) are historically sympatric in western North America. Following European settlement, gray wolf range contracted, whereas coyote range expanded to include eastern North America. Furthermore, wolves with New World (NW) mitochondrial DNA (mtDNA) haplotypes now extend from Manitoba to Québec in Canada and hybridize with gray wolves and coyotes. Using mtDNA and 12 microsatellite markers, we evaluated levels of wolf-coyote hybridization in regions where coyotes were present (the Canadian Prairies, n = 109 samples) and absent historically (Québec, n = 154). Wolves with NW mtDNA extended from central Saskatchewan (51°N, 108°W) to northeastern Québec (54°N, 69°W). On the Prairies, 6.3% of coyotes and 9.2% of wolves had genetic profiles suggesting wolf-coyote hybridization. In contrast, 12.6% of coyotes and 37.4% of wolves in Québec had profiles indicating hybrid origin. Wolves with NW and Old World (C. lupus) mtDNA appear to form integrated populations in both regions. Our results suggest that hybridization is more frequent in historically allopatric populations. Range shifts, now expected across taxa following climate change and other human influence on the environment, might therefore promote contemporary evolution by hybridization. PMID:23139873
Locomotion control of hybrid cockroach robots.
Sanchez, Carlos J; Chiu, Chen-Wei; Zhou, Yan; González, Jorge M; Vinson, S Bradleigh; Liang, Hong
2015-04-01
Natural systems retain significant advantages over engineered systems in many aspects, including size and versatility. In this research, we develop a hybrid robotic system using American (Periplaneta americana) and discoid (Blaberus discoidalis) cockroaches that uses the natural locomotion and robustness of the insect. A tethered control system was first characterized using American cockroaches, wherein implanted electrodes were used to apply an electrical stimulus to the prothoracic ganglia. Using this approach, larger discoid cockroaches were engineered into a remotely controlled hybrid robotic system. Locomotion control was achieved through electrical stimulation of the prothoracic ganglia, via a remotely operated backpack system and implanted electrodes. The backpack consisted of a microcontroller with integrated transceiver protocol and a rechargeable battery. The hybrid discoid roach was able to walk and turn in response to an electrical stimulus to its nervous system with a repeatability of 60%. PMID:25740855
Importance iteration in MORSE Monte Carlo calculations
Kloosterman, J.L.; Hoogenboom, J.E. (Interfaculty Reactor Institute)
1994-05-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation.
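The biasing idea above can be illustrated with a toy importance-sampling estimator (a generic sketch, not the MORSE-SGC/S implementation; the integrand and biasing density below are invented for illustration). When samples are drawn from a density shaped like the expected-contribution ("point value") function, the weighted scores become nearly constant and the variance collapses:

```python
import random, math

def analog_estimate(n, seed=1):
    # Analog Monte Carlo: sample x uniformly on [0, 1) and average f(x).
    rng = random.Random(seed)
    f = lambda x: math.exp(-10.0 * x)      # sharply peaked "detector response"
    return sum(f(rng.random()) for _ in range(n)) / n

def biased_estimate(n, seed=1):
    # Importance-sampled Monte Carlo: draw x from p(x) = 10 e^{-10x}/(1 - e^{-10}),
    # a density proportional to the response, and weight each sample by f(x)/p(x).
    rng = random.Random(seed)
    norm = 1.0 - math.exp(-10.0)
    scores = []
    for _ in range(n):
        u = rng.random()
        x = -math.log(1.0 - u * norm) / 10.0   # inverse-CDF sampling of p
        w = norm / 10.0                        # f(x)/p(x) is constant here
        scores.append(w)
    return sum(scores) / n
```

Because the biasing density is exactly proportional to the response, every sample carries the same weight and the estimator has zero variance; in practice the smoothed point values only approximate this shape, and the variance reduction is partial.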
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
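The benefit of recasting the equations can be sketched with a control-variate-style transform (a hedged toy example, not the thermal-photon difference formulation itself): Monte Carlo solves only for the small difference between the quantity of interest and an analytically known approximation, so the sampled "entity" is non-physical but has far less variance.

```python
import random, math

def direct_mc(f, n, seed=2):
    # Direct Monte Carlo estimate of the integral of f over [0, 1).
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def transformed_mc(f, g, g_integral, n, seed=2):
    # Recast the problem: Monte Carlo samples the (small) difference field f - g,
    # and the known integral of g is added back analytically.
    rng = random.Random(seed)
    diff = sum(f(x) - g(x) for x in (rng.random() for _ in range(n))) / n
    return g_integral + diff
```

For example, with f(x) = e^x and the analytic approximation g(x) = 1 + x (integral 1.5), the transformed estimator samples only the Taylor remainder and converges with a much smaller statistical error than the direct estimator.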
FREYA-a new Monte Carlo code for improved modeling of fission chains
Hagmann, C A; Randrup, J; Vogt, R L
2012-06-12
A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses including correlations and the development of SNM detectors with increased sensitivity.
Monte carlo sampling of fission multiplicity.
Hendricks, J. S.
2004-01-01
Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly {sup 3}He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k{sub eff} of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
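The two sampling schemes discussed above can be sketched as follows (an illustrative reimplementation, not the production Monte Carlo code). The second function reproduces the negative-tail bias that the offset and corrected-zero-point methods are designed to remove:

```python
import random

def sample_multiplicity_traditional(nu_bar, rng):
    # Traditional scheme: split nu_bar between its two neighbouring integers.
    # For nu_bar = 2.7 this returns 3 with probability 0.7 and 2 with
    # probability 0.3, so the mean is preserved exactly.
    low = int(nu_bar)
    frac = nu_bar - low
    return low + 1 if rng.random() < frac else low

def sample_multiplicity_gaussian(nu_bar, sigma, rng):
    # Gaussian sampling with naive rejection of negative values.  Rejecting
    # the negative tail biases the mean upward, which is the effect the two
    # correction methods in the paper compensate for.
    while True:
        nu = rng.gauss(nu_bar, sigma)
        if nu >= 0.0:
            return nu
```

With a low average multiplicity (e.g. nu_bar = 1.0 and a width of 1.0, values chosen here only for illustration), the naive Gaussian sampler's mean comes out well above 1.0, demonstrating the bias described in the abstract.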
From hybrid swarms to swarms of hybrids
Stohlgren, Thomas J.; Szalanski, Allen L; Gaskin, John F.; Young, Nicholas E.; West, Amanda; Jarnevich, Catherine S.; Tripodi, Amber
2015-01-01
Science has shown that the introgression or hybridization of modern humans (Homo sapiens) with Neanderthals up to 40,000 YBP may have led to the swarm of modern humans on earth. However, there is little doubt that modern trade and transportation in support of humans has continued to introduce additional species, genotypes, and hybrids to every country on the globe. We assessed the utility of species distribution modeling of genotypes to assess the risk of current and future invaders. We evaluated 93 locations of the genus Tamarix for which genetic data were available. Maxent models showed that habitat suitability for the hybrid, T. ramosissima x T. chinensis, was slightly greater than for the parent taxa (AUCs > 0.83). General linear models of Africanized honey bees, a hybrid cross of Tanzanian Apis mellifera scutellata and a variety of European honey bees including A. m. ligustica, showed that the Africanized bees (AUC = 0.81) may be displacing European honey bees (AUC > 0.76) over large areas of the southwestern U.S. More importantly, sub-populations (A1 and A26 mitotypes based on mtDNA) could be accurately modeled with Maxent (AUC > 0.9), and they responded differently to environmental drivers. This suggests that rapid evolutionary change may be underway in the Africanized bees, allowing the bees to spread into new areas and extending their total range. Protecting native species and ecosystems may benefit from risk maps of harmful invasive species, hybrids, and genotypes.
A hybrid approach to the neutron transport K-eigenvalue problem using NDA-based algorithms
Willert, J. A.; Kelley, C. T.; Knoll, D. A.; Park, H.
2013-07-01
In order to provide more physically accurate solutions to the neutron transport equation, it has become increasingly popular to use Monte Carlo simulation to model nuclear reactor dynamics. These Monte Carlo methods can be extremely expensive, so we turn to a class of methods known as hybrid methods, which combine known deterministic and stochastic techniques to solve the transport equation. In our work, we show that we can simulate the action of a transport sweep using a Monte Carlo simulation in order to solve the k-eigenvalue problem. We accelerate the solution using nonlinear diffusion acceleration (NDA) as in [1,2]. Our work extends the results in [1] to use Monte Carlo simulation as the high-order solver.
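The outer iteration that NDA accelerates is, in its unaccelerated form, the classical power iteration for the dominant eigenvalue k. A minimal sketch on a small fission matrix (a toy stand-in; in the paper the high-order operator is a Monte Carlo transport sweep, not a stored matrix):

```python
def power_iteration(F, iters=200):
    # Unaccelerated power iteration for the dominant eigenvalue (k) and
    # eigenvector (flux shape) of a small matrix F standing in for the
    # fission/transport operator.
    n = len(F)
    phi = [1.0] * n
    k = 1.0
    for _ in range(iters):
        psi = [sum(F[i][j] * phi[j] for j in range(n)) for i in range(n)]
        k = sum(psi) / sum(phi)        # eigenvalue estimate from the ratio
        phi = [p / k for p in psi]     # renormalize the flux iterate
    return k, phi
```

Power iteration converges at a rate set by the dominance ratio; NDA-style low-order acceleration exists precisely because that ratio is close to one in realistic reactor problems.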
Page, P. R.
2002-01-01
The authors review the status of hybrid baryons. The only known way to study hybrids rigorously is via excited adiabatic potentials. Hybrids can be modeled by both the bag and flux tube models. The low-lying hybrid baryon is N 1/2{sup +} with a mass of 1.5 - 1.8 GeV. Hybrid baryons can be produced in the glue-rich processes of diffractive {gamma}N and {pi}N production, {Psi} decays and p{bar p} annihilation. We review the current status of research on three quarks with a gluonic excitation, called a hybrid baryon. The excitation is not an orbital or radial excitation between the quarks. Hybrid baryons have also been reviewed elsewhere. The Mercedes-Benz logo in Figure 1 indicates two possible views of the confining interaction of three quarks, an essential issue in the study of hybrid baryons. In the logo, the three points where the Y shape meets the boundary circle should be identified with the three quarks. There are two possibilities for the interaction of the quarks: (1) a pairwise interaction of the quarks, represented by the circle, or (2) a Y-shaped interaction between the quarks, represented by the Y-shape in the logo.
Seth Lloyd
2000-08-11
Necessary and sufficient conditions are given for the construction of a hybrid quantum computer that operates on both continuous and discrete quantum variables. Such hybrid computers are shown to be more efficient than conventional quantum computers for performing a variety of quantum algorithms, such as computing eigenvectors and eigenvalues.
Monte Carlo techniques in statistical physics
NASA Astrophysics Data System (ADS)
Murthy, K. P. N.
2006-11-01
In this paper we shall briefly review a few Markov Chain Monte Carlo methods for simulating closed systems described by canonical ensembles. We cover both Boltzmann and non-Boltzmann sampling techniques. The Metropolis algorithm is a typical example of a Boltzmann Monte Carlo method. We discuss the time-symmetry of the Markov chain generated by Metropolis-like algorithms that obey detailed balance. The non-Boltzmann Monte Carlo techniques reviewed include multicanonical and Wang-Landau sampling. We list what we consider as milestones in the historical development of Monte Carlo methods in statistical physics. We dedicate this article to Prof. Dr. G. Ananthakrishna and wish him the very best in the coming years.
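The Metropolis algorithm reviewed above can be sketched in a few lines for a single harmonic degree of freedom (an illustrative example; the target distribution, inverse temperature, and step size are chosen arbitrarily). The symmetric trial move plus the min(1, exp(-beta*dE)) acceptance rule is exactly the detailed-balance construction the paper discusses:

```python
import random, math

def metropolis_harmonic(beta, steps, step_size=1.0, seed=3):
    # Metropolis sampling of the canonical distribution P(x) ~ exp(-beta*x^2/2)
    # for one harmonic degree of freedom with energy E(x) = x^2/2.
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        dE = 0.5 * (trial * trial - x * x)
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            x = trial                 # accept; otherwise keep the old state
        samples.append(x)
    return samples
```

At beta = 1 the chain should reproduce zero mean and unit variance, which is an easy self-check on any Metropolis implementation.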
A non-Monte Carlo approach to analyzing 1D Anderson localization in dispersive metamaterials
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2015-09-01
Monte Carlo simulations have long been used to study Anderson localization in models of one-dimensional random stacks. Because such simulations use substantial computational resources and because the randomness of random number generators for such simulations has been called into question, a non-Monte Carlo approach is of interest. This paper uses a non-Monte Carlo methodology, limited to discrete random variables, to determine the Lyapunov exponent, or its reciprocal, known as the localization length, for a one-dimensional random stack model, proposed by Asatryan et al., consisting of various combinations of negative, imaginary and positive index materials that include the effects of dispersion and absorption, as well as off-axis incidence and polarization effects. Dielectric permittivity and magnetic permeability are the two variables randomized in the models. In the paper, Furstenberg's integral formula is used to calculate the Lyapunov exponent of an infinite product of random matrices modeling the one-dimensional stack. The integral formula requires integration with respect to the probability distribution of the randomized layer parameters, as well as integration with respect to the so-called invariant probability measure of the direction of the vector propagated by the long chain of random matrices. The non-Monte Carlo approach uses a numerical procedure of Froyland and Aihara which calculates the invariant measure as the left eigenvector of a certain sparse row-stochastic matrix, thus avoiding the use of any random number generator. The results show excellent agreement with the Monte Carlo generated simulations which make use of continuous random variables, while frequently providing reductions in computation time.
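The Monte Carlo baseline that the non-Monte Carlo method is checked against can be sketched as a vector-propagation estimate of the top Lyapunov exponent (a generic sketch: the 2x2 matrices stand in for the stack's transfer matrices, and the deterministic diagonal test case in the usage note is an assumption used only for validation):

```python
import math, random

def lyapunov_exponent(sample_matrix, steps=20000, seed=4):
    # Estimate the top Lyapunov exponent of a product of i.i.d. random 2x2
    # matrices by propagating a vector and accumulating the log growth per
    # step -- the Monte Carlo counterpart of Furstenberg's integral formula.
    rng = random.Random(seed)
    v = (1.0, 1.0)
    total = 0.0
    for _ in range(steps):
        m = sample_matrix(rng)
        w = (m[0][0] * v[0] + m[0][1] * v[1],
             m[1][0] * v[0] + m[1][1] * v[1])
        norm = math.hypot(w[0], w[1])
        total += math.log(norm / math.hypot(v[0], v[1]))
        v = (w[0] / norm, w[1] / norm)   # renormalize to avoid overflow
    return total / steps
```

As a sanity check, a distribution that always returns diag(2, 0.5) must give an exponent of log 2, the log of the spectral radius; a genuinely random layer distribution is then plugged in via `sample_matrix`.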
Hybrid Quantum Cloning Machine
Satyabrata Adhikari; A. K. Pati; Indranil Chakrabarty; B. S. Choudhury
2007-06-14
In this work, we introduce a special kind of quantum cloning machine called the hybrid quantum cloning machine. The introduced hybrid quantum cloning machine, or transformation, is nothing but a combination of pre-existing quantum cloning transformations. In this sense it creates its own identity in the field of quantum cloners. Hybrid quantum cloning machines can be of two types: (i) state dependent and (ii) state independent, or universal. We study here the above two types of hybrid quantum cloning machines. Later we show that the state-dependent hybrid quantum cloning machine can be applied to only four input states. We also find another asymmetric universal quantum cloning machine constructed from the combination of the optimal universal B-H quantum cloning machine and the universal anti-cloning machine. The fidelities of the two outputs are different and their values lie in the neighborhood of $5/6$.
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+}, N_{1/2^+}, N_{3/2^+}, N_{3/2^+}, N_{5/2^+}$, and $\Delta_{1/2^+}, \Delta_{3/2^+}$ at an energy scale above the first band of 'conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$ as previously reported in the hybrid meson sector and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
Hybrid propulsion technology program
NASA Technical Reports Server (NTRS)
1990-01-01
Technology was identified which will enable application of hybrid propulsion to manned and unmanned space launch vehicles. Two design concepts are proposed. The first is a hybrid propulsion system using the classical method of regression (classical hybrid) resulting from the flow of oxidizer across a fuel grain surface. The second system uses a self-sustaining gas generator (gas generator hybrid) to produce a fuel-rich exhaust that was mixed with oxidizer in a separate combustor. Both systems offer cost and reliability improvements over the existing solid rocket booster and proposed liquid boosters. The designs were evaluated using life cycle cost and reliability. The program consisted of: (1) identification and evaluation of candidate oxidizers and fuels; (2) preliminary evaluation of booster design concepts; (3) preparation of a detailed point design including life cycle cost and reliability analyses; (4) identification of those hybrid-specific technologies needing improvement; and (5) preparation of a technology acquisition plan and large scale demonstration plan.
NASA Technical Reports Server (NTRS)
Frederick, Robert A., Jr.
1992-01-01
A hybrid rocket is a system consisting of a solid fuel grain and a gaseous or liquid oxidizer. Figure 1 shows three popular hybrid propulsion cycles that are under current consideration. NASA MSFC has teamed with industry to test two hybrid propulsion systems that will allow scaling to motors of potential interest for Titan and Atlas systems, as well as encompassing the range of interest for SEI lunar ascent stages and National Launch System Cargo Transfer Vehicle (NLS CTV) and NLS deorbit systems. Hybrid systems also offer advantages as moderate-cost, environmentally acceptable propulsion systems. The objective of this work was to recommend a performance prediction methodology for hybrid rocket motors. The scope included completion of a literature review, a general methodology, and a simplified performance model.
Monte-Carlo Go Reinforcement Learning Experiments Bruno Bouzy
Bouzy, Bruno
Monte-Carlo Go Reinforcement Learning Experiments. Bruno Bouzy, Université René Descartes, UFR de ... Reinforcement learning is performed during simulations in a Monte-Carlo Go architecture. Currently, Monte-Carlo is a popular technique for computer Go. In a previous study, Monte-Carlo was associated with domain-dependent knowledge.
Monte-Carlo study of the phase transition in the AA-stacked bilayer graphene
A. A. Nikolaev; M. V. Ulybyshev
2014-12-04
A tight-binding model of the AA-stacked bilayer graphene with screened electron-electron interactions has been studied using Hybrid Monte Carlo simulations on the original double-layer hexagonal lattice. The instantaneous screened Coulomb potential is taken into account using the Hubbard-Stratonovich transformation. G-type antiferromagnetic ordering has been studied and a phase transition with spontaneous generation of the mass gap has been observed. The dependence of the antiferromagnetic condensate on the on-site electron-electron interaction is examined.
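The Hybrid Monte Carlo machinery used above (and in the generalized HMC methods discussed earlier) reduces, in its simplest form, to a momentum refresh, a leapfrog trajectory, and a Metropolis test on the integrator's energy error. A minimal sketch for a single Gaussian degree of freedom (illustrative only, not the lattice simulation; step size and trajectory length are arbitrary choices):

```python
import random, math

def hmc_gaussian(n_samples, n_steps=10, dt=0.2, seed=5):
    # Minimal Hybrid Monte Carlo sampler for P(x) ~ exp(-x^2/2): momentum is
    # refreshed from a Gaussian, a leapfrog trajectory proposes a move, and a
    # Metropolis test on H = p^2/2 + x^2/2 corrects the integration error.
    rng = random.Random(seed)
    x, samples = 0.0, []
    grad = lambda q: q                 # dU/dq for U(q) = q^2/2
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)        # momentum refresh
        q, h0 = x, 0.5 * (p * p + x * x)
        p -= 0.5 * dt * grad(q)        # leapfrog: initial half kick
        for _ in range(n_steps - 1):
            q += dt * p                # drift
            p -= dt * grad(q)          # full kick
        q += dt * p
        p -= 0.5 * dt * grad(q)        # final half kick
        h1 = 0.5 * (p * p + q * q)
        if h1 <= h0 or rng.random() < math.exp(h0 - h1):
            x = q                      # accept the whole trajectory
        samples.append(x)
    return samples
```

Because leapfrog is reversible and volume preserving, the acceptance test needs only the energy difference; this is the property the compressible generalized HMC framework cited at the top relaxes.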
Survivability design for a hybrid underwater vehicle
Wang, Biao; Wu, Chao; Li, Xiang; Zhao, Qingkai; Ge, Tong
2015-03-10
A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery-powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control connected to the surface vessel by a lightweight fiber-optic tether in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including the basic vehicle design of integrated navigation and integrated communication, the emergency recovery strategy, distributed architecture, redundant bus, dual battery package, emergency jettison system and self-repairing control system.
Induction as Knowledge Integration
NASA Technical Reports Server (NTRS)
Smith, Benjamin D.; Rosenbloom, Paul S.
1996-01-01
Two key issues for induction algorithms are the accuracy of the learned hypothesis and the computational resources consumed in inducing that hypothesis. One of the most promising ways to improve performance along both dimensions is to make use of additional knowledge. Multi-strategy learning algorithms tackle this problem by employing several strategies for handling different kinds of knowledge in different ways. However, integrating knowledge into an induction algorithm can be difficult when the new knowledge differs significantly from the knowledge the algorithm already uses. In many cases the algorithm must be rewritten. This paper presents the Knowledge Integration framework for Induction (KII), which provides a uniform mechanism for integrating knowledge into induction. In theory, arbitrary knowledge can be integrated with this mechanism, but in practice the knowledge representation language determines both the knowledge that can be integrated and the costs of integration and induction. By instantiating KII with various set representations, algorithms can be generated at different trade-off points along these dimensions. One instantiation of KII, called RS-KII, is presented that can implement hybrid induction algorithms, depending on which knowledge it utilizes. RS-KII is demonstrated to implement AQ-11, as well as a hybrid algorithm that utilizes a domain theory and noisy examples. Other algorithms are also possible.
Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities
Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina
2012-09-01
The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.
Dhiman, Isha
2015-01-01
This work is devoted to the development of a novel theoretical approach, named the hybrid approach, to handle a localized bottleneck in a symmetrically coupled two-channel totally asymmetric simple exclusion process with Langmuir kinetics. The hybrid approach is combined with a singular perturbation technique to obtain steady-state phase diagrams and density profiles. We have thoroughly examined the roles played by the strength of the bottleneck, the binding constant and the lane-changing rate in the system dynamics. The appearance of a bottleneck-induced shock, a bottleneck phase and a Meissner phase is explained. Further, the critical values of the bottleneck rate are identified, which signify changes in the topology of the phase diagram. It is also found that an increase in the lane-changing rate, as well as unequal attachment and detachment rates, weakens the bottleneck effect. Our theoretical arguments are in good agreement with extensively performed Monte Carlo simulations.
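The Monte Carlo simulations used to validate such mean-field theories are, at their core, random-sequential updates of an exclusion process. A minimal single-lane sketch with open boundaries (the bottleneck, Langmuir kinetics, and two-channel coupling of the paper are omitted, and all rates here are illustrative):

```python
import random

def tasep_density(L=50, alpha=0.3, beta=0.7, sweeps=20000, seed=6):
    # Random-sequential-update Monte Carlo for a single-lane TASEP with open
    # boundaries: entry rate alpha, exit rate beta, unit hopping in the bulk.
    rng = random.Random(seed)
    tau = [0] * L              # occupation numbers tau_i in {0, 1}
    occ = [0.0] * L
    measured = 0
    for sweep in range(sweeps):
        for _ in range(L):
            i = rng.randrange(-1, L)     # -1 plays the role of the reservoir
            if i == -1:
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1           # injection at the left boundary
            elif i == L - 1:
                if tau[L - 1] == 1 and rng.random() < beta:
                    tau[L - 1] = 0       # extraction at the right boundary
            else:
                if tau[i] == 1 and tau[i + 1] == 0:
                    tau[i], tau[i + 1] = 0, 1   # bulk forward hop
        if sweep >= sweeps // 2:         # discard the first half as burn-in
            measured += 1
            for i in range(L):
                occ[i] += tau[i]
    return [o / measured for o in occ]
```

With alpha < min(beta, 1/2) the chain sits in the low-density phase, where the bulk density equals alpha; recovering that known value is a quick correctness check before adding bottlenecks or Langmuir kinetics.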
Heat and Drought Performance of Texas Bluegrass Hybrid Turf
Texas bluegrass hybrid turf, or "hybrid bluegrass" for short, is the latest turfgrass to enter the market. A sponsored growth chamber study evaluated the heat and drought performance of hybrid bluegrass in comparison to turf-type tall fescue and Kentucky bluegrass.
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
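The Monte Carlo Simulation used as the comparison baseline can be sketched generically (a toy limit-state with a standard-normal load effect and an assumed threshold, not the bonded-joint model): draw random inputs, evaluate the limit state, and count the fraction of failures.

```python
import random

def mc_failure_probability(n, threshold=2.0, seed=7):
    # Plain Monte Carlo estimate of a failure probability: sample a random
    # load effect from a standard normal and count exceedances of an assumed
    # capacity threshold (illustrative stand-in for the strain-energy
    # release-rate limit state).
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > threshold)
    return failures / n
```

For a threshold of 2 standard deviations the exact answer is 1 - Phi(2), about 0.0228; the slow 1/sqrt(n) convergence of this count-based estimate is what motivates cheaper possibility-based approximations such as the OVM.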
Condensing Hybrid Water Heater Monitoring Field Evaluation
Maguire, J.; Earle, L.; Booten, C.; Hancock, C. E.
2011-10-01
This paper summarizes a field evaluation at the Mascot home, an abandoned property that was extensively renovated. Several efficiency upgrades were integrated into the home; of particular interest is a unique water heater (a Navien CR240-A). Field monitoring was performed to determine the in-use efficiency of this hybrid condensing water heater, and the results were compared to the unit's rated efficiency. The unit is Energy Star qualified and one of the most efficient gas water heaters currently available on the market.
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel; Fisher, Ryan; Tien, Chris; Simon, Steven L.; Bouville, Andre; Bolch, Wesley E.
2011-03-15
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices was calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both experimentally measured lateral free-in-air dose profiles and CTDI values with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported by the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude.
Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match organ doses calculated for the hybrid phantoms due to differences in phantom anatomy, as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.
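The dose-matrix idea lends itself to a simple sketch: given a table of per-slice organ doses, the dose for any scan range is a sum over the slices covered. The organ names and matrix values below are hypothetical placeholders, not data from the study.

```python
# Sketch: combining per-slice organ dose contributions into an
# examination dose over a chosen scan range. Values are illustrative.
def exam_organ_dose(dose_matrix, start_slice, stop_slice, mas_scale=1.0):
    """Sum per-slice organ doses over a contiguous scan range (inclusive)
    and scale to the actual tube current-time product."""
    return [sum(row[start_slice:stop_slice + 1]) * mas_scale
            for row in dose_matrix]

# Hypothetical matrix: rows = organs, columns = consecutive 10 mm axial slices.
organs = ["brain", "thyroid", "lungs"]
D = [
    [2.0, 1.5, 0.1, 0.0, 0.0],   # brain: dose concentrated in head slices
    [0.1, 0.8, 1.2, 0.2, 0.0],   # thyroid
    [0.0, 0.1, 0.9, 1.8, 1.6],   # lungs
]
chest_dose = exam_organ_dose(D, 2, 4)  # slices covering a notional chest scan
for name, dose in zip(organs, chest_dose):
    print(f"{name}: {dose:.2f} mGy")
```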
Multidimensional integration in a heterogeneous network environment
NASA Astrophysics Data System (ADS)
Veseli, Siniša
1998-01-01
We consider several issues related to the multidimensional integration using a network of heterogeneous computers. Based on these considerations, we develop a new general purpose scheme which can significantly reduce the time needed for evaluation of integrals with CPU intensive integrands. This scheme is a parallel version of the well-known adaptive Monte Carlo method (the VEGAS algorithm), and is incorporated into a new integration package which uses the standard set of message-passing routines in the PVM software system.
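The core idea behind the parallel scheme can be sketched as follows: the random samples are split into independent batches, each of which could be farmed out to a different host, and the partial sums are then combined. The batches run sequentially here for simplicity (the package distributes them with PVM message passing), and this is plain Monte Carlo rather than the adaptive VEGAS grid refinement.

```python
# Sketch: batched Monte Carlo integration over the unit square. Each
# batch is independent (own seed), so batches could run on separate hosts.
import math
import random

def integrand(x, y):
    # Stand-in for a CPU-intensive integrand.
    return math.exp(-(x * x + y * y))

def partial_sum(seed, n):
    """One batch: n random evaluations with an independent RNG stream."""
    rng = random.Random(seed)
    return sum(integrand(rng.random(), rng.random()) for _ in range(n))

def mc_integrate(n_total=40_000, n_batches=4):
    n = n_total // n_batches
    partials = [partial_sum(seed, n) for seed in range(n_batches)]
    return sum(partials) / (n * n_batches)

print(mc_integrate())  # exact value is (sqrt(pi)/2 * erf(1))**2 ~ 0.5577
```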
Hydraulic Hybrid Parcel Delivery Truck Deployment, Testing & Demonstration
Gallo, Jean-Baptiste
2014-03-07
Although hydraulic hybrid systems have shown promise over the last few years, commercial deployment of these systems has primarily been limited to Class 8 refuse trucks. In 2005, the Hybrid Truck Users Forum initiated the Parcel Delivery Working Group, including the largest parcel delivery fleets in North America. The goal of the working group was to evaluate and accelerate commercialization of hydraulic hybrid technology for parcel delivery vehicles. FedEx Ground, Purolator and United Parcel Service (UPS) took delivery of the world’s first commercially available hydraulic hybrid parcel delivery trucks in early 2012. The vehicle chassis includes a Parker Hannifin hydraulic hybrid drive system, integrated and assembled by Freightliner Custom Chassis Corp., with a body installed by Morgan Olson. With funding from the U.S. Department of Energy, CALSTART and its project partners assessed the performance, reliability, maintainability and fleet acceptance of three pre-production Class 6 hydraulic hybrid parcel delivery vehicles using information and data from in-use data collection and on-road testing. This document reports on the deployment of these vehicles operated by FedEx Ground, Purolator and UPS. The results presented provide a comprehensive overview of the performance of commercial hydraulic hybrid vehicles in parcel delivery applications. This project also informs fleets and manufacturers on the overall performance of hydraulic hybrid vehicles and provides insights into how the technology can be both improved and more effectively used. The key findings and recommendations of this project fall into four major categories: performance, fleet deployment, maintenance, and business case. Hydraulic hybrid technology is relatively new to the market, as commercial vehicles have been introduced only in the past few years in refuse and parcel delivery applications.
Successful demonstration could pave the way for additional purchases of hydraulic hybrid vehicles throughout the trucking industry. By providing unbiased, third-party assessment of this “hybrid without batteries” technology, this report offers relevant, timely and valuable information to the industry.
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have low accuracy. Another approach measures object volume using the Monte Carlo method, which performs volume measurements using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
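The underlying estimator is easy to sketch: draw random points inside a known bounding box and estimate the object volume from the fraction of points that fall inside the object. Here an analytic sphere stands in for the inside/outside test that the paper derives from multi-camera binary images.

```python
# Sketch: Monte Carlo volume estimation from an inside/outside predicate.
# The sphere predicate is an illustrative stand-in for image-based tests.
import random

def mc_volume(inside, box_min, box_max, n=200_000, seed=1):
    """Estimate volume as (bounding-box volume) x (fraction of hits)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        p = tuple(rng.uniform(lo, hi) for lo, hi in zip(box_min, box_max))
        if inside(p):
            hits += 1
    box_volume = 1.0
    for lo, hi in zip(box_min, box_max):
        box_volume *= hi - lo
    return box_volume * hits / n

# Unit sphere: true volume 4*pi/3 ~ 4.18879.
in_sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 <= 1.0
print(mc_volume(in_sphere, (-1, -1, -1), (1, 1, 1)))
```

The heuristic adjustment of the paper refines how hits are counted near the object boundary; the plain estimator above is the unadjusted baseline.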
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has traditionally been applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics calculations of fixed-source and criticality design parameters for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat intermediate-energy nuclear reactions for protons. Some other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against the Accelerator Driven subcritical System (ADS) benchmark data and model released by the IAEA under its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA are in agreement within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.
Karney, Charles
Evolution of lower hybrid waves
Karney, C. F. F.; Sen, A.; Chu, F. Y. F.
In typical lower hybrid heating schemes, lower hybrid waves are launched at the wall of a tokamak. The evolution of the lower hybrid waves is governed by the complex modified Korteweg-de Vries equation, v_t + v_xxx + (|v|^2 v)_x = 0.
Anisotropic seismic inversion using a multigrid Monte Carlo approach
NASA Astrophysics Data System (ADS)
Mewes, Armin; Kulessa, Bernd; McKinley, John D.; Binley, Andrew M.
2010-10-01
We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (δ, ε). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique for the inverse problem. Multigrid perturbation samples the probability density function without requiring the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (δ, ε) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen δ parameter is particularly robust compared to that of slowness and the Thomsen ε parameter, even in the face of complex subsurface anomalies. The Thomsen δ and ε parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of δ and ε in the MGMC scheme, inverted images of phase velocity reflect the integrated effects of these two modes of anisotropy. The new MGMC technique thus promises to facilitate rapid inversion of crosshole P-wave data for seismic slownesses and the Thomsen anisotropy parameters, with minimal user input in the inversion process.
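Simulated annealing, the search engine at the heart of the MGMC scheme, can be sketched in a few lines. The quadratic "misfit" and the geometric cooling schedule below are illustrative placeholders, not the traveltime functional or the schedule used in the paper.

```python
# Sketch: simulated annealing minimization of a misfit functional.
# Downhill moves are always accepted; uphill moves are accepted with
# Boltzmann probability exp(-dF/T), with T lowered geometrically.
import math
import random

def anneal(misfit, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    rng = random.Random(seed)
    x, fx, t = list(x0), misfit(x0), t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling  # geometric cooling schedule
    return x, fx

# Toy misfit with minimum at (1, -2), standing in for the traveltime misfit.
toy_misfit = lambda m: (m[0] - 1.0) ** 2 + (m[1] + 2.0) ** 2
x_best, f_best = anneal(toy_misfit, [5.0, 5.0])
print(x_best, f_best)
```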
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
1989-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.
Program summary
Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB-1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons.
They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Hybrid stretchable circuits on silicone substrate
Robinson, A.; Aziz, A.; Liu, Q.; Suo, Z.; Lacour, S. P.
2014-04-14
When rigid and stretchable components are integrated onto a single elastic carrier substrate, large strain heterogeneities appear in the vicinity of the deformable-non-deformable interfaces. In this paper, we report on a generic approach to manufacture hybrid stretchable circuits where commercial electronic components can be mounted on a stretchable circuit board. Similar to printed circuit board development, the components are electrically bonded on the elastic substrate and interconnected with stretchable electrical traces. The substrate—a silicone matrix carrying concentric rigid disks—ensures both the circuit elasticity and the mechanical integrity of the most fragile materials.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g., at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided.
Hybrid matrix fiber composites
Deteresa, Steven J.; Lyon, Richard E.; Groves, Scott E.
2003-07-15
Hybrid matrix fiber composites having enhanced compressive performance as well as enhanced stiffness, toughness and durability, suitable for compression-critical applications. Methods for producing the fiber composites use matrix hybridization. The hybrid matrix fiber composites include two chemically or physically bonded matrix materials, wherein the first matrix material is used to impregnate multi-filament fibers formed into ribbons, the second matrix material is placed around and between the fiber ribbons impregnated with the first matrix material, and both matrix materials are cured and solidified.
Artificial mismatch hybridization
Guo, Zhen (Madison, WI); Smith, Lloyd M. (Madison, WI)
1998-01-01
An improved nucleic acid hybridization process is provided which employs a modified oligonucleotide and improves the ability to discriminate a control nucleic acid target from a variant nucleic acid target containing a sequence variation. The modified probe contains at least one artificial mismatch relative to the control nucleic acid target in addition to any mismatch(es) arising from the sequence variation. The invention has direct and advantageous application to numerous existing hybridization methods, including applications that employ, for example, the Polymerase Chain Reaction, allele-specific nucleic acid sequencing methods, and diagnostic hybridization methods.
Bockholt, A. J.; Collier, J. W.
1960-01-01
Corn Hybrids for Texas (Bulletin 968, November 1960; A&M College of Texas, College Station, Texas)
SUMMARY: Corn hybrids were planted on 85 percent of the Texas corn acreage in 1959. Most of this acreage was devoted to hybrids developed and released by the Texas...
Eustice, Ryan
A hybrid pulse shape discrimination technique with enhanced performance at neutron energies below ...
Keywords: scintillation detector; pulse shape discrimination; charge integration; reference pulses
Abstract: A hybrid pulse shape discrimination (PSD) method is presented that combines a charge-integration PSD method
Monte Carlo and Molecular Dynamics Tools 3. Monte Carlo techniques for time evolution
Sjöstrand, Torbjörn
Consider the probability for a particle to decay at time t. Naively, P(t) = c, so that N(t) = 1 - ct. Wrong! Conservation of probability: the decay rate is driven by depletion of the surviving population, dN/dt = -c N(t), giving N(t) = e^{-ct}.
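The exponential survival law leads directly to the standard Monte Carlo recipe for decay times: invert N(t) = e^{-ct} against a uniform random number, t = -ln(r)/c. A minimal sketch:

```python
# Sketch: inverse-transform sampling of decay times for a constant
# decay probability per unit time c; the sample mean should approach 1/c.
import math
import random

def sample_decay_time(c, rng):
    # Inverse transform of the survival law N(t) = exp(-c t).
    return -math.log(1.0 - rng.random()) / c

rng = random.Random(42)
c = 2.0
times = [sample_decay_time(c, rng) for _ in range(100_000)]
mean_t = sum(times) / len(times)
print(mean_t)  # should be close to 1/c = 0.5
```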
Quick, Harald H
2014-02-01
Integrated whole-body PET/MR hybrid imaging combines the excellent soft tissue contrast and various functional imaging parameters provided by MR with the high sensitivity and quantification of radiotracer metabolism provided by positron emission tomography (PET). While clinical evaluation is now under way, integrated PET/MR demands new technologies and innovative solutions, currently the subject of interdisciplinary research. Attenuation correction of human soft tissues and of hardware components has to be MR-based to maintain quantification in PET imaging because computed tomography (CT) attenuation information is missing. This brings up the question of how to provide bone information with MR imaging. The limited field-of-view in MR imaging leads to truncations in body imaging and MR-based attenuation correction. Another research field is the implementation of motion correction technologies to correct for breathing and cardiac motion in view of the relatively long PET data acquisition times. Initial clinical applications of integrated PET/MR in oncology, neurology, pediatric oncology, and cardiovascular disease are highlighted. The hybrid imaging workflow here has to be tailored to the clinical indication to maximize diagnostic information while minimizing acquisition time. PET/MR introduces new artifacts that need special observation and innovative solutions for correction. Finally, the rising need for appropriate phantoms and standardization efforts in PET/MR hybrid imaging is discussed. PMID:24338921
NASA Astrophysics Data System (ADS)
Kaspar, P.; Jany, C.; Le Liepvre, A.; Accard, A.; Lamponi, M.; Make, D.; Levaufre, G.; Girard, N.; Lelarge, F.; Shen, A.; Charbonnier, P.; Mallecot, F.; Duan, G.-H.; Gentner, J.-.; Fedeli, J.-M.; Olivier, S.; Descos, A.; Ben Bakir, B.; Messaoudene, S.; Bordel, D.; Malhouitre, S.; Kopp, C.; Menezo, S.
2014-05-01
The lack of potent integrated light emitters is one of the bottlenecks that have so far hindered the silicon photonics platform from revolutionizing the communication market. Photonic circuits with integrated light sources have the potential to address a wide range of applications from short-distance data communication to long-haul optical transmission. Notably, the integration of lasers would allow saving large assembly costs and reduce the footprint of optoelectronic products by combining photonic and microelectronic functionalities on a single chip. Since silicon and germanium-based sources are still in their infancy, hybrid approaches using III-V semiconductor materials are currently pursued by several research laboratories in academia as well as in industry. In this paper we review recent developments of hybrid III-V/silicon lasers and discuss the advantages and drawbacks of several integration schemes. The integration approach followed in our laboratory makes use of wafer-bonded III-V material on structured silicon-on-insulator substrates and is based on adiabatic mode transfers between silicon and III-V waveguides. We will highlight some of the most interesting results from devices such as wavelength-tunable lasers and AWG lasers. The good performance demonstrates that an efficient mode transfer can be achieved between III-V and silicon waveguides and encourages further research efforts in this direction.
Non-equilibrium Hybridization Expansion Impurity-solver
NASA Astrophysics Data System (ADS)
Dong, Qiaoyuan
2015-03-01
The study of non-equilibrium phenomena in strongly correlated systems has developed into one of the most active and exciting branches of condensed matter physics. Meanwhile, quantum impurity models play a prominent role as mathematical representations of quantum dots, single-molecule devices, and effective models for the dynamical mean field theory. We show results for a generalization of the hybridization expansion diagrammatic Monte Carlo technique for the Anderson impurity model, and we perform non-equilibrium calculations on the full Keldysh contour, where a dynamical sign problem vastly increases the complexity of real-time simulation. By further combining this method with a non-crossing approximation, our "bold-line" Monte Carlo can reach substantially longer times out of equilibrium than previously accessible, and provides an accurate description of quench and driven dynamics of correlated systems. Sponsored by the Department of Energy.
Quantum Monte Carlo finite temperature electronic structure of quantum dots
NASA Astrophysics Data System (ADS)
Leino, Markku; Rantala, Tapio T.
2002-08-01
Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We test the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in one and three dimensional harmonic oscillator potentials and apply it in evaluation of finite temperature effects of single and coupled quantum dots. Our simulations show the correct finite temperature excited state populations including degeneracy in cases of one and three dimensional harmonic oscillators. The simulated one and two electron distributions of a single and coupled quantum dots are compared to those from experiments and other theoretical (0 K) methods [3]. Distributions are shown to agree and the finite temperature effects are discussed. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. Other essential aspects of PIMC and its capability in this type of calculations are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev.Mod.Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys.Rev.B 63, 115316 (2001).
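A minimal path-integral Monte Carlo sketch, for a single particle in a 1D harmonic well rather than the interacting quantum dots of the paper, illustrates the finite-temperature machinery: the thermal path is discretized into beads, sampled with single-bead Metropolis moves under the primitive-discretization action, and <x^2> is compared with the exact finite-temperature result. All parameter values below are illustrative.

```python
# Sketch: PIMC for a 1D harmonic oscillator with m = omega = hbar = 1.
# Exact result: <x^2> = (1/2) coth(beta/2).
import math
import random

def pimc_x2(beta=1.0, n_beads=16, n_sweeps=30_000, burn_in=5000,
            step=0.6, seed=3):
    """Estimate <x^2> with single-bead Metropolis moves on a closed path."""
    tau = beta / n_beads
    rng = random.Random(seed)
    x = [0.0] * n_beads
    total, count = 0.0, 0
    for sweep in range(n_sweeps):
        for k in range(n_beads):
            prev, nxt = x[k - 1], x[(k + 1) % n_beads]  # periodic in imaginary time
            old = x[k]
            new = old + rng.uniform(-step, step)
            # Change in primitive action: two kinetic "springs" plus potential.
            dS = ((new - prev) ** 2 + (new - nxt) ** 2
                  - (old - prev) ** 2 - (old - nxt) ** 2) / (2 * tau) \
                 + tau * 0.5 * (new ** 2 - old ** 2)
            if dS < 0 or rng.random() < math.exp(-dS):
                x[k] = new
        if sweep >= burn_in:
            total += sum(xi * xi for xi in x) / n_beads
            count += 1
    return total / count

est = pimc_x2()
exact = 0.5 / math.tanh(0.5)  # ~1.082 at beta = 1
print(est, exact)
```

The finite number of beads introduces an O(tau^2) discretization bias on top of the statistical error, which is why production PIMC codes extrapolate in the number of time slices.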
Integrative Modeling of Macromolecular Assemblies from Low to Near-Atomic Resolution
Xu, Xiaojun; Yan, Chunli; Wohlhueter, Robert; Ivanov, Ivaylo
2015-01-01
While conventional high-resolution techniques in structural biology are challenged by the size and flexibility of many biological assemblies, recent advances in low-resolution techniques such as cryo-electron microscopy (cryo-EM) and small angle X-ray scattering (SAXS) have opened up new avenues to define the structures of such assemblies. By systematically combining various sources of structural, biochemical and biophysical information, integrative modeling approaches aim to provide a unified structural description of such assemblies, starting from high-resolution structures of the individual components and integrating all available information from low-resolution experimental methods. In this review, we describe integrative modeling approaches, which use complementary data from either cryo-EM or SAXS. First, we focus on the popular molecular dynamics flexible fitting (MDFF) method, which has been widely used for flexible fitting into cryo-EM maps. Second, we describe hybrid molecular dynamics, Rosetta Monte-Carlo and minimum ensemble search (MES) methods that can be used to incorporate SAXS into pseudoatomic structural models. We present concise descriptions of the two methods and their most popular alternatives, along with select illustrative applications to protein/nucleic acid assemblies involved in DNA replication and repair. PMID:26557958
Integrated modelling of DEMO-FNS current ramp-up scenario and steady-state regime
NASA Astrophysics Data System (ADS)
Dnestrovskij, A. Yu.; Kuteev, B. V.; Bykov, A. S.; Ivanov, A. A.; Lukash, V. E.; Medvedev, S. Yu.; Sergeev, V. Yu.; Sychugov, D. Yu.; Khayrutdinov, R. R.
2015-06-01
An approach to the integrated modelling of plasma regimes in the projected neutron source DEMO-FNS based on different codes is developed. The consistency check of the steady-state regime is carried out, namely, the possibility of the plasma current ramp-up, acceptance of growth rates of MHD modes in the steady-state regime, heat loads to the wall and divertor plates and neutron yield value. The following codes are employed for the integrated modelling: the ASTRA transport code for calculation of plasma parameters in the steady-state regime, the NUBEAM Monte Carlo code for NBI incorporated into the ASTRA code, the DINA free boundary equilibrium and evolution code, the SPIDER free boundary equilibrium and equilibrium reconstruction code, the KINX ideal MHD stability code, the TOKSTAB rigid shift vertical stability code, the edge and divertor plasma B2SOLPS5.2 code and the Semi-analytic Hybrid Model (SHM) code for self-consistent description of the core, edge and divertor plasmas based on the experimental scaling laws. The consistent steady-state regime for the DEMO-FNS plasma and the plasma current ramp-up scenario are developed using the integrated modelling approach. Passive copper coils are suggested to reduce the plasma vertical instability growth rate to below ~30 s-1. The outer divertor operation in the ‘high-recycling’ regime is numerically demonstrated with a maximal heat flux density of 7-9 MW m-2, which is technically acceptable.
Hybrid adsorptive membrane reactor
Tsotsis, Theodore T. (Huntington Beach, CA); Sahimi, Muhammad (Altadena, CA); Fayyaz-Najafi, Babak (Richmond, CA); Harale, Aadesh (Los Angeles, CA); Park, Byoung-Gi (Yeosu, KR); Liu, Paul K. T. (Lafayette Hill, PA)
2011-03-01
A hybrid adsorbent-membrane reactor in which the chemical reaction, membrane separation, and product adsorption are coupled. Also disclosed are a dual-reactor apparatus and a process using the reactor or the apparatus.
Hybrid geared traction transmissions
NASA Technical Reports Server (NTRS)
Nasvytis, A. L.; White, G.
1983-01-01
The basic configuration of geared traction drives, geometric and structural factors to be considered in their construction, and current work on hybrid helicopter transmissions rated at 500 and 3000 hp are discussed.
Hybrid adsorptive membrane reactor
NASA Technical Reports Server (NTRS)
Tsotsis, Theodore T. (Inventor); Sahimi, Muhammad (Inventor); Fayyaz-Najafi, Babak (Inventor); Harale, Aadesh (Inventor); Park, Byoung-Gi (Inventor); Liu, Paul K. T. (Inventor)
2011-01-01
A hybrid adsorbent-membrane reactor in which the chemical reaction, membrane separation, and product adsorption are coupled. Also disclosed are a dual-reactor apparatus and a process using the reactor or the apparatus.
Hybrid rocket combustion study
NASA Technical Reports Server (NTRS)
Strand, L. D.; Ray, R. L.; Cohen, N. S.
1993-01-01
The objectives of this study of 'pure' or 'classic' hybrids are to (1) extend our understanding of the boundary layer combustion process and the critical engineering parameters that define this process, (2) develop an up-to-date hybrid fuel combustion model, and (3) apply the model to correlate the regression rate and scaling properties of potential fuel candidates. Tests were carried out with a hybrid slab window motor, using several diagnostic techniques, over a range of motor pressure and oxidizer mass flux conditions. The results basically confirmed turbulent boundary layer heat and mass transfer as the rate limiting process for hybrid fuel decomposition and combustion. The measured fuel regression rates showed good agreement with the analytical model predictions. The results of model scaling calculations to Shuttle SRM size conditions are presented.
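The boundary-layer-limited regression described above is classically summarized by a Marxman-type power law, r_dot = a * G^n with n near 0.8; the sketch below uses purely illustrative coefficients, not the fitted constants from this study.

```python
def regression_rate(G, a=5.0e-5, n=0.8):
    """Hybrid fuel regression rate [m/s] vs oxidizer mass flux G [kg/m^2/s].

    a and n are illustrative placeholders for correlation constants that
    would be fitted to slab-motor data; they are not taken from the paper.
    """
    return a * G ** n

# Doubling the oxidizer mass flux raises the regression rate by 2**0.8 ~ 1.74,
# the weaker-than-linear dependence characteristic of turbulent heat transfer.
for G in (50.0, 100.0, 200.0):
    print(f"G = {G:6.1f} kg/m^2/s -> r_dot = {regression_rate(G) * 1000:.3f} mm/s")
```

The sub-linear exponent is the signature of convective (rather than pressure-driven) fuel decomposition, consistent with the rate-limiting mechanism the tests confirmed.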
D'Ambrosio, C
2003-01-01
Hybrid photon detectors detect light via vacuum photocathodes and accelerate the emitted photoelectrons by an electric field towards inversely polarized silicon anodes, where they are absorbed, thus producing electron-hole pairs. These, in turn, are collected and generate electronic signals on their ohmic contacts. This review first describes the characteristic properties of the main components of hybrid photon detectors: light entrance windows, photocathodes, and silicon anodes. Then, essential relations describing the trajectories of photoelectrons in electric and magnetic fields and their backscattering from the silicon anodes are derived. Depending on their anode configurations, three families of hybrid photon detectors are presented: hybrid photomultiplier tubes with single anodes for photon counting with high sensitivity and for gamma spectroscopy; multi-anode photon detector tubes with anodes subdivided into square or hexagonal pads for position-sensitive photon detection; imaging silicon pixel array t...
NASA Technical Reports Server (NTRS)
Robinson, E. A.
1973-01-01
Quality, reliability, and design standards for microwave hybrid microcircuits were established. The MSFC Standard 85M03926 for hybrid microcircuits was reviewed and modifications were generated for use with microwave hybrid microcircuits. The results of reliability tests on microwave thin film capacitors, transistors, and microwave circuits are presented. Twenty-two microwave receivers were tested for 13,500 unit hours. The result of 111,121 module burn-in and operating hours for an integrated solid-state transceiver module is reported.
Robust Hybrid Finite Element Methods for Antennas and Microwave Circuits
NASA Technical Reports Server (NTRS)
Gong, J.; Volakis, John L.
1996-01-01
One of the primary goals of this dissertation is the development of robust hybrid finite element-boundary integral (FE-BI) techniques for modeling and design of conformal antennas of arbitrary shape. Both the finite element and integral equation methods are first overviewed in this chapter, with an emphasis on recently developed hybrid FE-BI methodologies for antennas, microwave and millimeter wave applications. The structure of the dissertation is then outlined. We conclude the chapter with discussions of certain fundamental concepts and methods in electromagnetics, which are important to this study.
NASA Astrophysics Data System (ADS)
Bitsche, Otmar; Gutmann, Guenter
Not only sharp competition but also legislation is pushing the development of hybrid drive trains. Based on conventional internal combustion engine (ICE) vehicles, these drive trains offer a wide range of benefits, from reduced fuel consumption and emissions to multifaceted performance improvements. Hybrid electric drive trains may also facilitate the introduction of fuel cells (FC). The battery is the key component for all hybrid drive trains, as it dominates cost and performance issues. The selection of the right battery technology for the specific automotive application is an important task with an impact on the costs of development and use. Safety, power, and high cycle life are a must for all hybrid applications. The greatest pressure to reduce cost is in soft hybrids, where lead-acid batteries combined with careful management present the cheapest solution, though a considerable improvement in performance is still needed. From mild to full hybridization, an improvement in specific power makes higher costs more acceptable, provided that the battery's service life is equivalent to the vehicle's lifetime. Today, this is proven for the nickel-metal hydride system. Lithium ion batteries, which make use of a multiple safety concept, and with some development anticipated, provide even better prospects in terms of performance and costs. Also, their scalability permits their application in battery electric vehicles—the basis for better performance and enhanced user acceptance. Development targets for the batteries are discussed with a focus on system aspects such as electrical and thermal management and safety.
Tiebout, R.F.; van Boxtel-Oosterhof, F.; Stricker, E.A.M.; Zeijlemaker, W.P.
1987-11-15
Hybrid hybridomas are obtained by fusion of two cells, each producing its own antibody. Several authors have reported the construction of murine hybrid hybridomas with the aim of obtaining bispecific monoclonal antibodies. The authors have investigated, in a model system, the feasibility of constructing a human hybrid hybridoma. They fused two monoclonal cell lines: an ouabain-sensitive and azaserine/hypoxanthine-resistant Epstein-Barr virus-transformed human cell line that produces an IgG1kappa antibody directed against tetanus toxoid, and an azaserine/hypoxanthine-sensitive and ouabain-resistant human-mouse xenohybrid cell line that produces a human IgG1lambda antibody directed against hepatitis-B surface antigen. Hybrid hybridoma cells were selected in culture medium containing azaserine/hypoxanthine and ouabain. The hybrid nature of the secreted antibodies was analyzed by means of two antigen-specific immunoassays. The results show that it is possible, with the combined use of transformation and xenohybridization techniques, to construct human hybrid hybridomas that produce bispecific antibodies. Bispecific antibody activity was measured by means of two radioimmunoassays.
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
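The classical baseline the abstract's algorithm improves on can be made concrete with a small sketch (illustrative only, no quantum machinery): Monte Carlo estimation of the mean of a bounded randomized subroutine, whose RMS error shrinks like 1/sqrt(N); the quantum algorithm achieves roughly 1/N for the same task.

```python
import math
import random

random.seed(1)

def subroutine():
    """A stand-in randomized subroutine with bounded variance (true mean 1/3)."""
    return random.random() ** 2

def mc_estimate(n):
    """Plain Monte Carlo mean estimate from n independent runs."""
    return sum(subroutine() for _ in range(n)) / n

# Measure RMS error at two sample sizes: a 100x increase in N should
# shrink the error by about sqrt(100) = 10 for the classical estimator.
rms = {}
for n in (100, 10000):
    errs = [abs(mc_estimate(n) - 1 / 3) for _ in range(50)]
    rms[n] = math.sqrt(sum(e * e for e in errs) / len(errs))
    print(f"N = {n:6d}  RMS error ~ {rms[n]:.4f}")
```

Reaching the same accuracy with quadratically fewer subroutine calls is exactly the near-quadratic speedup the paper proves for mean estimation, and hence for partition-function computation via multi-stage Markov chain Monte Carlo.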