NASA Astrophysics Data System (ADS)
Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.
2014-09-01
This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, so this work investigates multiple metrics, including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and further metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
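For readers comparing the metrics: all three reduce to simple functions of average power draw and runtime, as the sketch below shows (Python, with made-up power and runtime figures rather than the paper's measurements). A faster accelerator run can draw more watts yet still win on energy and energy-delay product.

```python
# Minimal sketch of the three efficiency metrics compared above.
def energy_joules(avg_power_w, runtime_s):
    return avg_power_w * runtime_s               # E = P * t

def perf_per_watt(work_gflops, avg_power_w):
    return work_gflops / avg_power_w             # useful work per watt

def energy_delay_product(avg_power_w, runtime_s):
    return energy_joules(avg_power_w, runtime_s) * runtime_s  # EDP = E * t

# Hypothetical CPU vs. GPU reconstruction (placeholder numbers).
for name, power_w, time_s in [("CPU", 180.0, 3600.0), ("GPU", 250.0, 400.0)]:
    print(name,
          energy_joules(power_w, time_s),          # GPU: fewer joules overall
          energy_delay_product(power_w, time_s))   # GPU: far smaller EDP
```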
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth-generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important to the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
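For context, the Landauer principle invoked above bounds irreversible computation from below at kB·T·ln 2 joules per bit erased; the enormous gap between this floor and the ~800 W figure is what makes computation power worth optimizing. A one-function illustration (bit count and temperature are arbitrary assumptions, not values from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(bits, temperature_k=300.0):
    """Thermodynamic minimum energy to erase `bits` bits at temperature T."""
    return bits * K_B * temperature_k * math.log(2)

# e.g. 1 Gbit of irreversible bit operations at room temperature:
print(landauer_limit_joules(1e9))  # ~2.9e-12 J -- many orders of magnitude
                                   # below the watts drawn by a real BS
```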
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
Energy Efficiency in Public Buildings through Context-Aware Social Computing.
García, Óscar; Alonso, Ricardo S; Prieto, Javier; Corchado, Juan M
2017-04-11
The challenge of promoting behavioral changes in users that lead to energy savings in public buildings has become a complex task requiring the involvement of multiple technologies. Wireless sensor networks have great potential for the development of tools, such as serious games, that encourage the acquisition of good energy and health habits among users in the workplace. This paper presents the development of a serious game using CAFCLA, a framework that allows for integrating multiple technologies that provide both context-awareness and social computing. Game development has shown that the data provided by sensor networks encourage users to reduce energy consumption in their workplace, and that social interactions and competitiveness accelerate the achievement of good results and behavioral changes that favor energy savings.
Morishita, Tetsuya; Yonezawa, Yasushige; Ito, Atsushi M
2017-07-11
Efficient and reliable estimation of the mean force (MF), the derivatives of the free energy with respect to a set of collective variables (CVs), has been a challenging problem because free energy differences are often computed by integrating the MF. Among various methods for computing free energy differences, logarithmic mean-force dynamics (LogMFD) [Morishita et al., Phys. Rev. E 2012, 85, 066702] invokes the conservation law in classical mechanics to integrate the MF, which allows us to estimate the free energy profile along the CVs on-the-fly. Here, we present a method called parallel dynamics, which improves the estimation of the MF by employing multiple replicas of the system and is straightforwardly incorporated in LogMFD or a related method. In parallel dynamics, the MF is evaluated over a nonequilibrium path ensemble using the multiple replicas, based on the Crooks-Jarzynski nonequilibrium work relation. Thanks to the Crooks relation, realizing full-equilibrium states is no longer mandatory for estimating the MF. Additionally, sampling in the hidden subspace orthogonal to the CV space is highly improved with appropriate weights for each metastable state (if any), which is hardly achievable by typical free energy computational methods. We illustrate how to implement parallel dynamics by combining it with LogMFD, which we call logarithmic parallel dynamics (LogPD). Biosystems of alanine dipeptide and adenylate kinase in explicit water are employed as benchmark systems to which LogPD is applied to demonstrate the effect of multiple replicas on the accuracy and efficiency of estimating free energy profiles using parallel dynamics.
A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions
NASA Astrophysics Data System (ADS)
Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya
Consider a special district (group) composed of multiple companies (agents), where each agent responds to an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of the group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Secondly, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. We confirmed through computational experiments that the proposed method produces solutions whose group costs are close to the lower-bound group costs. In addition, we verified that the EP method reduces computational time without losing solution quality.
Multiplicity moments at low and high energy in hadron-hadron scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antich, P.; Calligarich, E.; Cecchet, G.
1974-01-19
A phenomenological investigation is made of the relation obtained by Weingarten for the multiplicity moments in hadron-hadron interactions. The predictions are compared with moments computed from experimental data, over a wide energy range, for the pp, p̄p, π±p, and K±p reactions.
Laboratories | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's laboratories can be safely divided into multiple test stand locations (or "capability hubs"). The laboratories include the Fabrication Laboratory, Energy Systems High-Pressure Test Laboratory, Energy Systems Integration Laboratory, Energy Systems Sensor Laboratory, Fuel Cell Development and Test Laboratory, and High-Performance Computing facilities.
Coupling of Multiple Coulomb Scattering with Energy Loss and Straggling in HZETRN
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Wilson, John W.; Walker, Steven A.; Tweed, John
2007-01-01
The new version of the HZETRN deterministic transport code, based on Green's function methods and the incorporation of ground-based laboratory boundary conditions, has led to the development of analytical and numerical procedures to include off-axis dispersion of primary ion beams due to small-angle multiple Coulomb scattering. In this paper we present the theoretical formulation and computational procedures to compute ion beam broadening, along with a methodology towards achieving a self-consistent approach to coupling multiple scattering interactions with ionization energy loss and straggling. Our initial benchmark case is a 60 MeV proton beam on muscle tissue, for which we can compare various attributes of beam broadening with Monte Carlo simulations reported in the open literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low-energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, a simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
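In the three-phase method, sensor illuminance is the matrix chain E = V T D s (view, transmission, and daylight matrices times a sky vector per timestep), which is why the workload is dominated by matrix multiplication. A minimal NumPy sketch with assumed, typical dimensions (random placeholders, not Radiance output):

```python
import numpy as np

# Assumed sizes: 145 Klems patches, 146 sky patches, one sky vector per hour.
n_sensor, n_klems, n_sky, n_hours = 100, 145, 146, 8760
V = np.random.rand(n_sensor, n_klems)   # view matrix (sensors <- window)
T = np.random.rand(n_klems, n_klems)    # BSDF transmission matrix
D = np.random.rand(n_klems, n_sky)      # daylight matrix (window <- sky)
S = np.random.rand(n_sky, n_hours)      # annual sky vectors, one per timestep

VTD = V @ T @ D    # precompute once; reused across all timesteps
E = VTD @ S        # illuminance: sensors x timesteps
print(E.shape)
```

Exploiting associativity this way (and parallelizing each product, as the paper does with OpenCL) amortizes the expensive part across the whole annual simulation.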
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1987-01-01
A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.
Low-energy multiple rendezvous of main belt asteroids
NASA Technical Reports Server (NTRS)
Penzo, Paul A.; Bender, David F.
1992-01-01
An approach to multiple asteroid rendezvous missions to the main belt region is proposed. In this approach, key information, consisting of a launch date and delta-V, can be generated for all possible pairs of asteroids satisfying specific constraints. This information is made available in a computer file for 1000 numbered asteroids, with reasonable assumptions, limitations, and approximations to limit the computer requirements and the size of the data file.
PIC codes for plasma accelerators on emerging computer architectures (GPUs, Multicore/Manycore CPUs)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
Computational scheme for pH-dependent binding free energy calculation with explicit solvent.
Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R
2016-01-01
We present a computational scheme to compute the pH dependence of binding free energy with explicit solvent. Despite the importance of pH, its effect has generally been neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant-pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state, or of releasing a single protonation state to multiple states, the pH-dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while the cutoff method results are off by 2 kcal/mol. We also discuss the characteristics of the three long-range interaction calculation methods for constant-pH simulations.
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the buildup procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
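Procedure (c), Monte Carlo-plus-energy minimization, lives on in modern toolkits as basin hopping: a random perturbation followed by a local minimization at every step, so the walk moves between minima rather than within one. A toy illustration on a hypothetical rugged one-dimensional surface (not a real conformational energy function):

```python
import numpy as np
from scipy.optimize import basinhopping

# A rugged 1-D stand-in for a conformational energy surface (hypothetical).
def energy(x):
    return 0.1 * x[0]**2 + np.sin(3.0 * x[0])**2

# Each iteration: random Monte Carlo step, then a local minimization --
# the same idea as Monte Carlo-plus-energy-minimization.
result = basinhopping(energy, x0=[4.0], niter=200, stepsize=1.0, seed=1)
print(result.x, result.fun)   # near-global minimum despite many local traps
```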
Eigenvector decomposition of full-spectrum x-ray computed tomography.
Gonzales, Brian J; Lalush, David S
2012-03-07
Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral responses of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of the FBP onto the basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade-off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
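Schematically, the basis can be formed from a matrix of material spectral responses, and projecting a noisy spectrum onto its span discards noise components orthogonal to it. The sketch below uses SVD (equivalent to eigendecomposition of the response Gram matrix) on synthetic spectra, which are placeholders, not measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 64
# Synthetic material spectral-response matrix R (materials x energy bins).
R = np.stack([np.exp(-np.linspace(0.0, k, n_bins)) for k in (1.0, 3.0, 6.0)])
_, _, Vt = np.linalg.svd(R, full_matrices=False)  # rows of Vt span R's spectra
basis = Vt[:3]                                    # keep leading eigenvectors

spectrum = R[1] + 0.05 * rng.normal(size=n_bins)  # noisy per-ray spectrum
denoised = basis.T @ (basis @ spectrum)           # projection onto the basis
print(np.linalg.norm(spectrum - R[1]),            # error before projection
      np.linalg.norm(denoised - R[1]))            # smaller error after
```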
Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew
2012-01-01
The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
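Computationally, IT-TI reduces to running the thermodynamic integration quadrature once per independent trajectory and pooling the estimates; the spread across replicas supplies the error bar. A schematic sketch, with synthetic <dH/dλ> curves standing in for real simulation output:

```python
import numpy as np

def ti_estimate(dhdl):
    """TI: integrate <dH/dlambda> over lambda in [0, 1] (trapezoid rule)."""
    lambdas = np.linspace(0.0, 1.0, len(dhdl))
    return np.trapz(dhdl, lambdas)

# IT-TI: pool estimates from several independent trajectories.
rng = np.random.default_rng(0)
replicas = [ti_estimate(-5.0 + np.sin(np.linspace(0, 1, 11))
                        + rng.normal(0.0, 0.3, 11))   # synthetic noise
            for _ in range(10)]
dG = np.mean(replicas)
sem = np.std(replicas) / np.sqrt(len(replicas))
print(dG, sem)   # distribution center and its standard error
```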
FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.
Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri
2015-11-01
There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.
Experiences in autotuning matrix multiplication for energy minimization on GPUs
Anzt, Hartwig; Haugen, Blake; Kurzak, Jakub; ...
2015-05-20
In this study, we report extensive results and analysis of autotuning the computationally intensive graphics processing unit (GPU) kernel for dense matrix-matrix multiplication in double precision. In contrast to traditional autotuning and/or optimization for runtime performance only, we also take energy efficiency into account. For kernels achieving equal performance, we show significant differences in their energy balance. We also identify memory throughput as the most influential metric that trades off performance and energy efficiency. As a result, the performance-optimal kernel ends up not being the most efficient in overall resource use.
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Barnes, D. C.
2013-01-01
We describe the extension of the recent charge- and energy-conserving one-dimensional electrostatic particle-in-cell algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036] to mapped (body-fitted) computational meshes. The approach maintains exact charge and energy conservation properties. Key to the algorithm is a hybrid push, where particle positions are updated in logical space, while velocities are updated in physical space. The effectiveness of the approach is demonstrated with a challenging numerical test case, the ion acoustic shock wave. The generalization of the approach to multiple dimensions is outlined.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM, and of the closely related multiple Bennett acceptance ratio (MBAR) method, can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution.
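For reference, the plain fixed-point iteration that DIIS accelerates looks as follows; DIIS would replace the bare f = f_new update with an extrapolation over a history of residuals f_new - f. A schematic NumPy sketch with assumed array shapes, not the paper's implementation:

```python
import numpy as np

def wham(hist, bias, n_samples, beta=1.0, tol=1e-10, max_iter=100000):
    """Fixed-point WHAM. hist[k, b]: counts of window k in bin b;
    bias[k, b]: bias energy of window k evaluated at bin b."""
    K, B = hist.shape
    f = np.zeros(K)                       # per-window free energies
    num = hist.sum(axis=0)                # total counts per bin
    for _ in range(max_iter):
        denom = (n_samples[:, None]
                 * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = num / denom                   # unnormalized density of states
        f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
        f_new -= f_new[0]                 # fix the arbitrary gauge
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new                         # <- the step DIIS extrapolates
    return p / p.sum(), f

# Toy demo: two harmonic-bias windows over a flat underlying potential.
x = np.linspace(-1.0, 1.0, 60)
bias = np.stack([5.0 * (x - c) ** 2 for c in (-0.3, 0.3)])
boltz = np.exp(-bias); boltz /= boltz.sum(axis=1, keepdims=True)
rng = np.random.default_rng(0)
hist = np.stack([rng.multinomial(5000, w) for w in boltz])
p, f = wham(hist, bias, np.array([5000, 5000]))
print(f)   # recovered window free energies
```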
NASA Astrophysics Data System (ADS)
Eisenbach, Markus
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high-performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite-temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Extending Moore's Law via Computationally Error Tolerant Computing.
Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...
2018-03-01
Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. From the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
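To make the closure property concrete: in an RRNS, a value is stored as residues modulo pairwise-coprime moduli, arithmetic acts independently on each residue, and the redundant residues let a single corrupted residue be detected and corrected by leave-one-out Chinese-remainder reconstruction. A toy sketch (the moduli are illustrative, not the paper's design; requires Python 3.8+ for math.prod and three-argument pow):

```python
from math import prod

MODULI = [3, 5, 7, 11, 13]   # pairwise coprime; last two are redundant
K = 3                        # first K moduli define the legitimate range
LEGIT = prod(MODULI[:K])     # representable values: 0..104

def encode(x):
    return [x % m for m in MODULI]   # add/sub/mul then act per-residue

def crt(residues, moduli):
    """Chinese-remainder reconstruction of the encoded integer."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

def decode(residues):
    """Detect/correct a single corrupted residue via leave-one-out CRT."""
    full = crt(residues, MODULI)
    if full < LEGIT:
        return full                    # consistent: no error detected
    for skip in range(len(MODULI)):    # try discarding each residue in turn
        mods = [m for i, m in enumerate(MODULI) if i != skip]
        res = [r for i, r in enumerate(residues) if i != skip]
        cand = crt(res, mods)
        if cand < LEGIT:
            return cand                # corrected value
    raise ValueError("more than one residue corrupted")

code = encode(42)
code[1] = (code[1] + 2) % 5            # inject a single-residue fault
print(decode(code))                    # -> 42
```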
Perthold, Jan Walther; Oostenbrink, Chris
2018-05-17
Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji
2017-09-30
GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to overcome limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research.
The free energy of a reaction coordinate at multiple constraints: a concise formulation
NASA Astrophysics Data System (ADS)
Schlitter, Jürgen; Klähn, Marco
The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic quantities. When it is considered as the potential of mean force, the problem is the calculation of the mean force for given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method, which applies a constraint to the rc to compute the mean force as the mean negative constraint force plus a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to multiple constraints of other variables, which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple form, which facilitates its interpretation and evaluation. Secondly, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
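The multiplicative update factor is the crux: each visit to an energy level multiplies the running density-of-states estimate by a factor f, and f is annealed toward 1 as the energy histogram flattens. The Wang-Landau-flavored toy loop below illustrates the mechanics (it is not the paper's exact self-consistent scheme, only the shared update idea):

```python
import numpy as np

rng = np.random.default_rng(2)
n_levels = 10
log_g = np.zeros(n_levels)     # running log density-of-states estimate
hist = np.zeros(n_levels)
log_f = 1.0                    # log of the multiplicative update factor

state = 0
while log_f > 1e-6:
    prop = rng.integers(n_levels)          # propose a random level
    # Accept with min(1, g(E_old)/g(E_new)) -> drives a flat energy histogram.
    if np.log(rng.random()) < log_g[state] - log_g[prop]:
        state = prop
    log_g[state] += log_f      # multiplicative update (additive in log space)
    hist[state] += 1
    if hist.min() > 0.8 * hist.mean():     # histogram "flat enough"
        hist[:] = 0
        log_f /= 2.0           # refine the factor toward 1
print(log_g - log_g[0])        # ~flat: each toy "level" is a single state
```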
Using MCNP6 to Estimate Fission Neutron Properties of a Reflected Plutonium Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Alexander Rich; Nelson, Mark Andrew; Hutchinson, Jesson D.
The purpose of this project was to determine the fission multiplicity distribution, P(ν), for the Beryllium Reflected Plutonium (BeRP) ball and to determine whether or not it changed appreciably for various high-density polyethylene (HDPE) reflected configurations. The motivation for this project was to determine whether or not the average number of neutrons emitted per fission, ν, changed significantly enough to reduce the discrepancy between MCNP6 and Robba, Dowdy, Atwater (RDA) point kinetic model estimates of multiplication. The energy spectrum of neutrons that induced fissions in the BeRP ball, NIF(E), was also computed in order to determine the average energy of neutrons inducing fissions. P(ν) was computed using the FMULT card, NIF(E) and the average inducing-fission energy were computed using an F4 tally with an FM tally modifier (F4/FM) card, and the multiplication factor, keff, was computed using the KCODE card. Although NIF(E) changed significantly between the bare and HDPE-reflected configurations of the BeRP ball, the change in P(ν), and thus the change in ν, was insignificant. This is likely due to a difference between the way the average inducing-fission energy is computed using the FMULT and F4/FM cards. The F4/FM card indicated that NIF(E) was essentially Watt-fission distributed for the bare configuration and highly thermalized for all HDPE-reflected configurations, while the FMULT card returned an average energy between 1 and 2 MeV for all configurations, which would indicate that the spectrum is Watt-fission distributed regardless of the amount of HDPE reflector. The spectrum computed with the F4/FM card is more physically meaningful, so the discrepancy between it and the FMULT card result is being investigated. It is hoped that resolving the discrepancy between the FMULT and F4/FM card estimates of NIF(E) will provide better ν estimates, leading to RDA multiplication estimates in better agreement with MCNP6 simulations.
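For orientation, a Watt fission spectrum has the form p(E) ∝ exp(−E/a)·sinh(√(bE)), and its mean for commonly quoted plutonium parameters is near 2 MeV, in line with the FMULT average above. A quick tabulated inverse-CDF sampling check (the a, b values are the usual MCNP Watt constants for 239Pu and are an assumption here, not a result from the project):

```python
import numpy as np

# Watt spectrum p(E) ~ exp(-E/a) * sinh(sqrt(b*E)); a, b nuclide-dependent.
a, b = 0.966, 2.842                    # MeV, 1/MeV (assumed 239Pu constants)
E = np.linspace(1e-4, 20.0, 20000)     # MeV grid
pdf = np.exp(-E / a) * np.sinh(np.sqrt(b * E))
cdf = np.cumsum(pdf); cdf /= cdf[-1]   # tabulated CDF for inverse sampling
samples = np.interp(np.random.rand(100000), cdf, E)
print(samples.mean())                  # ~2 MeV, a Watt-like average energy
```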
Free energy minimization to predict RNA secondary structures and computational RNA design.
Churkin, Alexander; Weinbrand, Lina; Barash, Danny
2015-01-01
Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
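The recursive structure of these folding algorithms is easiest to see in the Nussinov base-pair-maximization dynamic program, a classic simplified stand-in for full energy minimization (production predictors minimize empirically parameterized nearest-neighbor free energies instead of maximizing pair counts):

```python
# Nussinov-style DP: maximum number of nested base pairs in an RNA sequence.
def nussinov(seq, min_loop=3):
    pair = {("A", "U"), ("U", "A"), ("G", "C"),
            ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # j left unpaired
            for k in range(i, j - min_loop):      # j pairs with k
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # -> 3 nested pairs for this toy sequence
```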
Exciton multiplication from first principles.
Jaeger, Heather M; Hyeon-Deuk, Kim; Prezhdo, Oleg V
2013-06-18
Third-generation photovoltaics must meet demanding cost and power-conversion-efficiency standards, which may be achieved through efficient exciton multiplication. Generating more than one electron-hole pair from the absorption of a single photon therefore has vast ramifications for solar power conversion technology. Unlike their bulk counterparts, irradiated semiconductor quantum dots exhibit efficient exciton multiplication, due to confinement-enhanced Coulomb interactions and slower nonradiative losses. The exact characterization of the complicated photoexcited processes within quantum-dot photovoltaics is a work in progress. In this Account, we focus on the photophysics of nanocrystals and investigate three constituent processes of exciton multiplication: photoexcitation, phonon-induced dephasing, and impact ionization. We quantify the role of each process in exciton multiplication through ab initio computation and analysis of many-electron wave functions. The probability of observing a multiple exciton in a photoexcited state is proportional to the magnitude of electron correlation, where correlated electrons can be simultaneously promoted across the band gap. Energies of multiple excitons are determined directly from the excited-state wave functions, defining the threshold for multiple exciton generation. This threshold is strongly perturbed in the presence of surface defects, dopants, and ionization. Within a few femtoseconds following photoexcitation, the quantum state loses coherence through interactions with the vibrating atomic lattice. The phase relationship between single excitons and multiple excitons dissipates first, followed by multiple exciton fission. Single excitons are coupled to multiple excitons through Coulomb and electron-phonon interactions, and as a consequence, single excitons convert to multiple excitons and vice versa. Here, exciton multiplication depends on the initial energy and coupling magnitude and competes with electron-phonon energy relaxation. Multiple excitons are generated through impact ionization within picoseconds. The basis of exciton multiplication in quantum dots is the collective result of photoexcitation, dephasing, and nonadiabatic evolution. Each process is characterized by a distinct timescale, and the overall multiple exciton generation dynamics is complete by about 10 ps. Without relying on semiempirical parameters, we computed quantum mechanical probabilities of multiple excitons for small model systems. Because exciton correlations and coherences are microscopic quantum properties, results for small model systems can be extrapolated to larger, realistic quantum dots.
Verbeke, J. M.; Petit, O.
2016-06-01
From nuclear safeguards to homeland security applications, the need for better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a fully analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.
Energy requirement for the production of silicon solar arrays
NASA Technical Reports Server (NTRS)
Lindmayer, J.; Wihl, M.; Scheinine, A.; Morrison, A.
1977-01-01
An assessment of potential changes and alternative technologies which could impact the photovoltaic manufacturing process is presented. Topics discussed include: a multiple wire saw, ribbon growth techniques, silicon casting, and a computer model for a large-scale solar power plant. Emphasis is placed on reducing the energy demands of the manufacturing process.
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
Efficient construction of exchange and correlation potentials by inverting the Kohn-Sham equations.
Kananenka, Alexei A; Kohut, Sviataslau V; Gaiduk, Alex P; Ryabinkin, Ilya G; Staroverov, Viktor N
2013-08-21
Given a set of canonical Kohn-Sham orbitals, orbital energies, and an external potential for a many-electron system, one can invert the Kohn-Sham equations in a single step to obtain the corresponding exchange-correlation potential, vXC(r). For orbitals and orbital energies that are solutions of the Kohn-Sham equations with a multiplicative vXC(r) this procedure recovers vXC(r) (in the basis set limit), but for eigenfunctions of a non-multiplicative one-electron operator it produces an orbital-averaged potential. In particular, substitution of Hartree-Fock orbitals and eigenvalues into the Kohn-Sham inversion formula is a fast way to compute the Slater potential. In the same way, we efficiently construct orbital-averaged exchange and correlation potentials for hybrid and kinetic-energy-density-dependent functionals. We also show how the Kohn-Sham inversion approach can be used to compute functional derivatives of explicit density functionals and to approximate functional derivatives of orbital-dependent functionals.
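In compact form, the one-step inversion amounts to the following rearrangement (a restatement assuming real-valued, singly occupied spin orbitals in atomic units; for eigenfunctions of a non-multiplicative operator, the same formula yields the orbital-averaged potential described above):

```latex
% Each occupied orbital obeys (-1/2 \nabla^2 + v_s)\varphi_i =
% \varepsilon_i \varphi_i, so multiplying by \varphi_i and summing over the
% occupied set isolates the multiplicative KS potential pointwise:
\begin{align*}
  v_s(\mathbf{r}) &= \frac{1}{\rho(\mathbf{r})} \sum_{i}^{\mathrm{occ}}
      \varphi_i(\mathbf{r}) \left( \varepsilon_i
      + \tfrac{1}{2}\nabla^2 \right) \varphi_i(\mathbf{r}),
  \qquad
  \rho(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} |\varphi_i(\mathbf{r})|^2, \\
  v_{\mathrm{xc}}(\mathbf{r}) &= v_s(\mathbf{r})
      - v_{\mathrm{ext}}(\mathbf{r}) - v_{\mathrm{H}}(\mathbf{r}).
\end{align*}
```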
Piezoelectric energy harvesting computer controlled test bench
NASA Astrophysics Data System (ADS)
Vázquez-Rodriguez, M.; Jiménez, F. J.; de Frutos, J.; Alonso, D.
2016-09-01
In this paper a new computer controlled (C.C.) laboratory test bench is presented. The patented test bench is made up of a C.C. road traffic simulator, C.C. electronic hardware involved in automating measurements, and a test bench control software interface programmed in LabVIEW™. Our research is focused on characterizing electronic energy harvesting piezoelectric-based elements in road traffic environments to extract (or "harvest") maximum power. In mechanical-to-electrical energy conversion, mechanical impacts or vibrational behavior are commonly used, and several major problems need to be solved to produce optimal harvesting systems, including, but not limited to, primary energy source modeling, energy conversion, and energy storage. A novel C.C. test bench is described that obtains, in an accurate and automated process, a generalized linear equivalent electrical model of piezoelectric elements and piezoelectric-based energy storage harvesting circuits, in order to scale energy generation with multiple devices integrated in different topologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Jun Hyung; Lee, Soo bin; Hodge, Bri-Mathias
The energy systems of the process industry face a new, unprecedented challenge. Renewable energy sources should be incorporated, but no single source can meet the industry's large and highly variable energy demand. This paper investigates a simulation framework to compute the capacity of multiple energy sources, including solar, wind power, diesel, and batteries. The framework involves generating actual renewable energy supply and demand profiles and matching supply to demand. Eight configurations of different supply options are evaluated to illustrate the applicability of the proposed framework, with some concluding remarks.
Recent developments in the theory of protein folding: searching for the global energy minimum.
Scheraga, H A
1996-04-16
Statistical mechanical theories and computer simulation are being used to gain an understanding of the fundamental features of protein folding. A major obstacle in the computation of protein structures is the multiple-minima problem arising from the existence of many local minima in the multidimensional energy landscape of the protein. This problem has been surmounted for small open-chain and cyclic peptides, and for regular-repeating sequences of models of fibrous proteins. Progress is being made in resolving this problem for globular proteins.
Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.
Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A
2015-10-26
Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange.
Multiple exciton generation in chiral carbon nanotubes: Density functional theory based computation
NASA Astrophysics Data System (ADS)
Kryjevski, Andrei; Mihaylov, Deyan; Kilina, Svetlana; Kilin, Dmitri
2017-10-01
We use a Boltzmann transport equation (BE) to study time evolution of a photo-excited state in a nanoparticle including phonon-mediated exciton relaxation and the multiple exciton generation (MEG) processes, such as exciton-to-biexciton multiplication and biexciton-to-exciton recombination. BE collision integrals are computed using Kadanoff-Baym-Keldysh many-body perturbation theory based on density functional theory simulations, including exciton effects. We compute internal quantum efficiency (QE), which is the number of excitons generated from an absorbed photon in the course of the relaxation. We apply this approach to chiral single-wall carbon nanotubes (SWCNTs), such as (6,2) and (6,5). We predict efficient MEG in the (6,2) and (6,5) SWCNTs within the solar spectrum range starting at the 2Eg energy threshold and with QE reaching ~1.6 at about 3Eg, where Eg is the electronic gap.
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
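As a baseline for the resonance discussion, a plain two-level r-RESPA step evaluates the expensive slow force once per outer step and the cheap fast force on an inner loop; it is precisely this scheme whose outer step is resonance-limited, and which the isokinetic variants above extend. A toy sketch with an artificial stiff/soft force split (not the paper's integrator):

```python
import numpy as np

def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One two-level r-RESPA step: slow half-kicks bracket an inner
    velocity-Verlet loop driven by the fast force."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m          # slow half-kick
    for _ in range(n_inner):
        v += 0.5 * dt_inner * f_fast(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m          # slow half-kick
    return x, v

# Toy split of a harmonic force into stiff (fast) and soft (slow) parts.
x, v, m = np.array([1.0]), np.array([0.0]), 1.0
for _ in range(1000):
    x, v = respa_step(x, v, m,
                      lambda q: -100.0 * q,      # fast, cheap force
                      lambda q: -1.0 * q,        # slow, "expensive" force
                      dt_outer=0.05, n_inner=10)
print(x, v)   # stable trajectory despite the large outer step
```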
Molecular dynamics based enhanced sampling of collective variables with very large time steps
NASA Astrophysics Data System (ADS)
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Structures and Statistics of Citation Networks
2011-05-01
the citations among them. The papers are in the field of high-energy physics, and they were added to the online library between 1992 and 2003. Each paper... energy, physics:astrophysics, mathematics, computer science, statistics and many others. The value of the setSpec field can be any of these. However... the value of the categories field might contain multiple set names listed. For instance, a paper can primarily be considered as a high-energy physics
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut
2015-01-01
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut
2015-10-05
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
An Optimization Framework for Dynamic Hybrid Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Wenbo; Garcia, Humberto E.; Paredis, Christiaan J. J.
A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.
Simulation of X-ray absorption spectra with orthogonality constrained density functional theory.
Derricotte, Wallace D; Evangelista, Francesco A
2015-06-14
Orthogonality constrained density functional theory (OCDFT) [F. A. Evangelista, P. Shushkov and J. C. Tully, J. Phys. Chem. A, 2013, 117, 7378] is a variational time-independent approach for the computation of electronic excited states. In this work we extend OCDFT to compute core-excited states and generalize the original formalism to determine multiple excited states. Benchmark computations on a set of 13 small molecules and 40 excited states show that unshifted OCDFT/B3LYP excitation energies have a mean absolute error of 1.0 eV. Contrary to time-dependent DFT, OCDFT excitation energies for first- and second-row elements are computed with near-uniform accuracy. OCDFT core excitation energies are insensitive to the choice of the functional and the amount of Hartree-Fock exchange. We show that OCDFT is a powerful tool for the assignment of X-ray absorption spectra of large molecules by simulating the gas-phase near-edge spectrum of adenine and thymine.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
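As a rough illustration of the DVFS energy model underlying such schedulers, the sketch below uses the common CMOS dynamic-power relation P = C_eff · V² · f; the effective capacitance and operating points are hypothetical, not taken from the paper.

```python
def dvfs_energy(cycles, voltage, frequency, c_eff=1e-9):
    """Energy for a task under DVFS, using the common CMOS dynamic-power
    model P = C_eff * V^2 * f; execution time is cycles / f."""
    power = c_eff * voltage**2 * frequency
    time = cycles / frequency
    return power * time, time

# Lowering V and f cuts energy quadratically in V at the cost of a
# longer schedule -- the quality-of-schedule/energy compromise above.
for volts, freq in [(1.2, 2.0e9), (1.0, 1.5e9), (0.8, 1.0e9)]:
    energy, time = dvfs_energy(cycles=1e10, voltage=volts, frequency=freq)
    print(f"V={volts} V, f={freq:.1e} Hz -> {energy:.3f} J in {time:.1f} s")
```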
Xue, Hua-dan; Liu, Wei; Sun, Hao; Wang, Xuan; Chen, Yu; Su, Bai-yan; Sun, Zhao-yong; Chen, Fang; Jin, Zheng-yu
2010-12-01
To analyze the clinical value of multiple sequences derived from dual-source computed tomography (DSCT) dual-energy scan mode in detecting pancreatic adenocarcinoma. Totally, 23 patients with clinically or pathologically diagnosed pancreatic cancer were enrolled in this retrospective study. DSCT (Definition Flash) with dual-energy scan mode was used for the pancreatic parenchyma phase scan (100 kVp/230 mAs and Sn140 kVp/178 mAs). Mono-energetic 60 keV, mono-energetic 80 keV, mono-energetic 100 keV, mono-energetic 120 keV, linear blend, non-linear blend, and iodine map images were acquired. The pancreatic parenchyma-tumor CT value difference, the ratio of tumor to pancreatic parenchyma, and the pancreatic parenchyma-tumor contrast-to-noise ratio were calculated. One-way ANOVA was used for the comparison of the diagnostic values of the above eight different dual-energy derived sequences for pancreatic cancer. The pancreatic parenchyma-tumor CT value difference, ratio of tumor to pancreatic parenchyma, and pancreatic parenchyma-tumor contrast-to-noise ratio were significantly different among the eight sequences (P<0.05). The mono-energetic 60 keV image showed the largest parenchyma-tumor CT value difference [(77.53 ± 23.42) HU], and the iodine map showed the lowest tumor/parenchyma enhancement ratio (0.39 ± 0.12) and the largest contrast-to-noise ratio (4.08 ± 1.46). Multiple sequences can be derived from dual-energy scan mode with DSCT via multiple post-processing methods. Integration of these sequences may further improve the sensitivity of multislice spiral CT in the diagnosis of pancreatic cancer.
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
Cheng, Zhenjing; Li, Haibo; Huang, Qiulan; Cheng, Yaodong; Chen, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of the virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practice, virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of the computing resources increased significantly when compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
User's Manual for Aerofcn: a FORTRAN Program to Compute Aerodynamic Parameters
NASA Technical Reports Server (NTRS)
Conley, Joseph L.
1992-01-01
The computer program AeroFcn is discussed. AeroFcn is a utility program that computes the following aerodynamic parameters: geopotential altitude, Mach number, true velocity, dynamic pressure, calibrated airspeed, equivalent airspeed, impact pressure, total pressure, total temperature, Reynolds number, speed of sound, static density, static pressure, static temperature, coefficient of dynamic viscosity, kinematic viscosity, geometric altitude, and specific energy for a standard- or a modified standard-day atmosphere using compressible flow and normal shock relations. Any two parameters that define a unique flight condition are selected, and their values are entered interactively. The remaining parameters are computed, and the solutions are stored in an output file. Multiple cases can be run, and the multiple case solutions can be stored in another output file for plotting. Parameter units, the output format, and primary constants in the atmospheric and aerodynamic equations can also be changed.
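A minimal sketch of the kind of computation AeroFcn performs is given below, deriving a few of the listed parameters from Mach number and geopotential altitude using standard ISA troposphere relations; it is not the FORTRAN program itself, and it covers only standard-day altitudes below 11 km.

```python
import math

# ISA sea-level constants (troposphere, below 11 km)
T0, P0, L, G, R, GAMMA = 288.15, 101325.0, 0.0065, 9.80665, 287.053, 1.4

def flight_params(mach, alt_m):
    """Given Mach number and geopotential altitude, return a few of the
    parameters AeroFcn computes (true velocity, dynamic pressure, ...)."""
    T = T0 - L * alt_m                          # static temperature, K
    p = P0 * (T / T0) ** (G / (R * L))          # static pressure, Pa
    rho = p / (R * T)                           # static density, kg/m^3
    a = math.sqrt(GAMMA * R * T)                # speed of sound, m/s
    v_true = mach * a                           # true velocity, m/s
    q = 0.5 * rho * v_true**2                   # dynamic pressure, Pa
    return {"T": T, "p": p, "rho": rho, "a": a, "V": v_true, "q": q}

print(flight_params(mach=0.8, alt_m=5000.0))
```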
A projection gradient method for computing ground state of spin-2 Bose–Einstein condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hanquan, E-mail: hanquan.wang@gmail.com; Yunnan Tongchang Scientific Computing and Data Mining Research Center, Kunming, Yunnan Province, 650221
In this paper, a projection gradient method is presented for computing the ground state of spin-2 Bose–Einstein condensates (BEC). We first propose the general projection gradient method for solving an energy functional minimization problem under multiple constraints, in which the energy functional takes real functions as independent variables. We next extend the method to solve a similar problem in which the energy functional takes complex functions as independent variables. We finally employ the method to find the ground state of spin-2 BEC. The key idea of our method is that, by constructing continuous gradient flows (CGFs), the ground state of spin-2 BEC can be computed as the steady-state solution of such CGFs. We discretize the CGFs by a conservative finite difference method along with a proper treatment of the nonlinear terms. We show that the numerical discretization is normalization- and magnetization-conservative and energy-diminishing. Numerical results for the ground states and their energies of spin-2 BEC are reported to demonstrate the effectiveness of the numerical method.
The effective use of virtualization for selection of data centers in a cloud computing environment
NASA Astrophysics Data System (ADS)
Kumar, B. Santhosh; Parthiban, Latha
2018-04-01
Data centers are facilities that host networks of remote servers to store, access, and process data. Cloud computing is a technology where users worldwide submit their tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers employ virtualization so that multiple tasks can be executed simultaneously. In this paper, we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, thereby minimizing the operational expenses of a service provider.
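A minimal sketch of the selection step described above, assuming per-server virtualization energies are already available; the data center names and readings are hypothetical.

```python
def select_data_center(data_centers):
    """Pick the data center with the least total virtualization energy,
    computed as the sum of the per-server virtual machine energies."""
    totals = {name: sum(server_energies)
              for name, server_energies in data_centers.items()}
    return min(totals, key=totals.get), totals

# Hypothetical per-server energy readings (arbitrary units).
centers = {"DC-A": [120.0, 95.5, 130.2],
           "DC-B": [88.1, 101.7, 90.3],
           "DC-C": [140.6, 77.9, 99.4]}
best, totals = select_data_center(centers)
print(best, totals)   # tasks are routed to the least-energy center
```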
ERIC Educational Resources Information Center
Rosengrant, David
2011-01-01
Multiple representations are a valuable tool to help students learn and understand physics concepts. Furthermore, representations help students learn how to think and act like real scientists. These representations include: pictures, free-body diagrams, energy bar charts, electrical circuits, and, more recently, computer simulations and…
A time to search: finding the meaning of variable activation energy.
Vyazovkin, Sergey
2016-07-28
This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.
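As one concrete instance of an isoconversional computation, the sketch below implements a Friedman-type estimate at a single fixed conversion; the synthetic rate data are only a sanity check, not data from the review.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def friedman_activation_energy(temperatures, rates):
    """Friedman-type isoconversional estimate at one fixed conversion:
    ln(dα/dt) = const - E_a/(R T), so E_a follows from the slope of
    ln(rate) against 1/T across runs at different temperatures."""
    x = 1.0 / np.asarray(temperatures)
    y = np.log(np.asarray(rates))
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R

# Synthetic single-step data with E_a = 80 kJ/mol for a sanity check.
T = np.array([500.0, 520.0, 540.0, 560.0])
rates = 1e6 * np.exp(-80000.0 / (R * T))
print(friedman_activation_energy(T, rates))  # ~80000 J/mol
```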
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas
In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas; ...
2016-01-06
In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
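A minimal numerical sketch of the two crossbar kernels named above (the parallel read as a vector-matrix multiply, the parallel write as a rank-1 outer-product update); the conductance values and learning rate are illustrative, and the energy advantage is a property of the analog hardware, not of this NumPy emulation.

```python
import numpy as np

def crossbar_vmm(conductance, voltages):
    """Analog crossbar read: output currents are the vector-matrix
    product I = G^T V (Ohm's law per cell, Kirchhoff sum per column),
    performed in one parallel step in hardware."""
    return conductance.T @ voltages

def crossbar_rank1_update(conductance, row_pulses, col_pulses, lr=1e-3):
    """Parallel write: a rank-1 outer-product update of all N^2 cells."""
    return conductance + lr * np.outer(row_pulses, col_pulses)

N = 4
G = np.random.uniform(1e-6, 1e-4, size=(N, N))   # cell conductances, S
V = np.random.uniform(0.0, 0.2, size=N)          # read voltages, V
print(crossbar_vmm(G, V))                        # column currents, A
G = crossbar_rank1_update(G, V, np.ones(N))
```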
Approximation of super-ions for single-file diffusion of multiple ions through narrow pores.
Kharkyanen, Valery N; Yesylevskyy, Semen O; Berezetskaya, Natalia M
2010-11-01
The general theory of single-file multiparticle diffusion in narrow pores can be greatly simplified in the case of an inverted bell-like shape of the single-particle energy profile, which is often observed in biological ion channels. For such profiles, there is a narrow and deep groove in the energy landscape of the multiple interacting ions, which corresponds to a pre-defined optimal conduction pathway in configurational space. If such a groove exists, the motion of the multiple ions can be reduced to the motion of a single quasiparticle, called the superion, which moves in a one-dimensional effective potential. The concept of the superion dramatically reduces the computational complexity of the problem and provides a very clear physical interpretation of conduction phenomena in narrow pores.
A method for computing ion energy distributions for multifrequency capacitive discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Alan C. F.; Lieberman, M. A.; Verboncoeur, J. P.
2007-03-01
The ion energy distribution (IED) at a surface is an important parameter for processing in multiple radio frequency driven capacitive discharges. An analytical model is developed for the IED in a low pressure discharge based on a linear transfer function that relates the time-varying sheath voltage to the time-varying ion energy response at the surface. This model is in good agreement with particle-in-cell simulations over a wide range of single, dual, and triple frequency driven capacitive discharge excitations.
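A schematic of the transfer-function idea is sketched below, assuming for illustration a first-order low-pass response whose cutoff is set by an ion transit time; the paper's actual transfer function and parameters differ, and the waveform values are hypothetical.

```python
import numpy as np

def ion_energy_response(t, v_sheath, tau_ion):
    """Filter the time-varying sheath voltage with a first-order
    low-pass transfer function (cutoff set by the ion transit time
    tau_ion) to get the ion energy response at the surface; the IED is
    then the histogram of that response.  The first-order form is an
    illustrative assumption, not the paper's fitted transfer function."""
    dt = t[1] - t[0]
    e = np.empty_like(v_sheath)
    e[0] = v_sheath[0]
    alpha = dt / (tau_ion + dt)
    for i in range(1, len(t)):          # discrete first-order low-pass
        e[i] = e[i - 1] + alpha * (v_sheath[i] - e[i - 1])
    return e

t = np.linspace(0.0, 5e-6, 20000)
v_sh = (200.0 + 150.0 * np.sin(2 * np.pi * 13.56e6 * t)
              + 80.0 * np.sin(2 * np.pi * 27.12e6 * t))  # dual frequency
energies = ion_energy_response(t, v_sh, tau_ion=3e-7)
ied, edges = np.histogram(energies, bins=100, density=True)
```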
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1976-01-01
The numerical analysis of composite differential-turn trajectory pairs was studied for 'fast-evader' and 'neutral-evader' attitude-dynamics idealizations for attack aircraft. Transversality and generalized corner conditions are examined, and the joining of trajectory segments is discussed. A criterion is given for the screening of 'tandem-motion' trajectory segments. The main focus is on the computation of barrier surfaces. Fortunately, from a computational viewpoint, the trajectory pairs defining these surfaces need not be calculated completely, as the final subarc of multiple-subarc pairs is not required. Some calculations for pairs of example aircraft are presented. A computer program used to perform the calculations is included.
Development of a real-time radon monitoring system for simultaneous measurements in multiple sites
NASA Astrophysics Data System (ADS)
Yamamoto, S.; Yamasoto, K.; Iida, T.
1999-12-01
A real-time radon monitoring system that can simultaneously measure radon concentrations in multiple sites was developed and tested. The system consists of a maximum of four radon detectors, optical fiber cables, and a data acquisition personal computer. Each radon detector uses a plastic scintillation counter that collects radon daughters in the chamber electrostatically. The applied voltage on the photocathode of the photomultiplier tube (PMT) acts as an electrode for the radon daughters. The plastic scintillator was made thin, 50 μm, to minimize background counts due to environmental gamma rays and beta particles. The energy-discriminated signals from the radon detectors are fed to the data acquisition personal computer via optical fiber cables. The system made it possible to measure radon concentrations in multiple sites simultaneously.
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1988-01-01
The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.
Ion distributions in electrolyte confined by multiple dielectric interfaces
NASA Astrophysics Data System (ADS)
Jing, Yufei; Zwanikken, Jos W.; Jadhao, Vikram; de La Cruz, Monica
2014-03-01
The distribution of ions at dielectric interfaces between liquids characterized by different dielectric permittivities is crucial to nanoscale assembly processes in many biological and synthetic materials such as cell membranes, colloids, and oil-water emulsions. The knowledge of the ionic structure of these systems is also exploited in energy storage devices such as double-layer supercapacitors. The presence of multiple dielectric interfaces often complicates computing the desired ionic distributions via simulations or theory. Here, we use coarse-grained models to compute the ionic distributions in an electrolyte confined by two planar dielectric interfaces, using Car-Parrinello molecular dynamics simulations and liquid state theory. We compute the density profiles for various electrolyte concentrations, stoichiometric ratios, and dielectric contrasts. We explain the trends in these profiles and discuss their effects on the behavior of the confined charged fluid.
Nelson, Tammie; Fernandez-Alberti, Sebastian; Roitberg, Adrian E; Tretiak, Sergei
2014-04-15
To design functional photoactive materials for a variety of technological applications, researchers need to understand their electronic properties in detail and have ways to control their photoinduced pathways. When excited by photons, organic conjugated materials (OCMs) show dynamics that are often characterized by large nonadiabatic (NA) couplings between multiple excited states through a breakdown of the Born-Oppenheimer (BO) approximation. Following photoexcitation, various nonradiative intraband relaxation pathways can lead to a number of complex processes. Therefore, computational simulation of nonadiabatic molecular dynamics is an indispensable tool for understanding complex photoinduced processes such as internal conversion, energy transfer, charge separation, and spatial localization of excitons. Over the years, we have developed a nonadiabatic excited-state molecular dynamics (NA-ESMD) framework that efficiently and accurately describes photoinduced phenomena in extended conjugated molecular systems. We use the fewest-switches surface hopping (FSSH) algorithm to treat quantum transitions among multiple adiabatic excited state potential energy surfaces (PESs). Extended molecular systems often contain hundreds of atoms and involve large densities of excited states that participate in the photoinduced dynamics. We can achieve an accurate description of the multiple excited states using the configuration interaction singles (CIS) formalism with a semiempirical model Hamiltonian. Analytical techniques allow the trajectory to be propagated "on the fly" using the complete set of NA coupling terms and remove computational bottlenecks in the evaluation of excited-state gradients and NA couplings. Furthermore, the use of state-specific gradients for propagation of nuclei on the native excited-state PES eliminates the need for simplifications such as the classical path approximation (CPA), which only uses ground-state gradients. Thus, the NA-ESMD methodology offers a computationally tractable route for simulating hundreds of atoms on ~10 ps time scales where multiple coupled excited states are involved. In this Account, we review recent developments in the NA-ESMD modeling of photoinduced dynamics in extended conjugated molecules involving multiple coupled electronic states. We have successfully applied the outlined NA-ESMD framework to study ultrafast conformational planarization in polyfluorenes where the rate of torsional relaxation can be controlled based on the initial excitation. With the addition of the state reassignment algorithm to identify instances of unavoided crossings between noninteracting PESs, NA-ESMD can now be used to study systems in which these so-called trivial unavoided crossings are expected to predominate. We employ this technique to analyze the energy transfer between poly(phenylene vinylene) (PPV) segments where conformational fluctuations give rise to numerous instances of unavoided crossings leading to multiple pathways and complex energy transfer dynamics that cannot be described using a simple Förster model. In addition, we have investigated the mechanism of ultrafast unidirectional energy transfer in dendrimers composed of poly(phenylene ethynylene) (PPE) chromophores and have demonstrated that differential nuclear motion favors downhill energy transfer in dendrimers. The use of native excited-state gradients allows us to observe this feature.
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.
2013-01-01
Binding free energy calculations offer a thermodynamically rigorous method to compute protein-ligand binding, and they depend on empirical force fields with hundreds of parameters. We examined the sensitivity of computed binding free energies to the ligand’s electrostatic and van der Waals parameters. Dielectric screening and cancellation of effects between ligand-protein and ligand-solvent interactions reduce the parameter sensitivity of binding affinity by 65%, compared with interaction strengths computed in the gas-phase. However, multiple changes to parameters combine additively on average, which can lead to large changes in overall affinity from many small changes to parameters. Using these results, we estimate that random, uncorrelated errors in force field nonbonded parameters must be smaller than 0.02 e per charge, 0.06 Å per radius, and 0.01 kcal/mol per well depth in order to obtain 68% (one standard deviation) confidence that a computed affinity for a moderately-sized lead compound will fall within 1 kcal/mol of the true affinity, if these are the only sources of error considered. PMID:24015114
Cloud@Home: A New Enhanced Computing Paradigm
NASA Astrophysics Data System (ADS)
Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco
Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, … available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of Ethical computing, starting from the assumption that in the near future energy costs will be related to environmental pollution).
Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.
2014-01-01
As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large-scale binding energy distribution analysis method (BEDAM) binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high-level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large-scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, calculations were distributed on multiple computing resources, and the simulations were completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
Liang, Yuzhen; Xiong, Ruichang; Sandler, Stanley I; Di Toro, Dominic M
2017-09-05
Polyparameter Linear Free Energy Relationships (pp-LFERs), also called Linear Solvation Energy Relationships (LSERs), are used to predict many environmentally significant properties of chemicals. A method is presented for computing the necessary chemical parameters, the Abraham parameters (AP), used by many pp-LFERs. It employs quantum chemical calculations and uses only the chemical's molecular structure. The method computes the Abraham E parameter from the molecular polarizability obtained with density functional theory and the Clausius-Mossotti equation relating the index of refraction to the molecular polarizability, estimates the Abraham V as the COSMO-calculated molecular volume, and computes the remaining APs S, A, and B jointly by a multiple linear regression using sixty-five solvent-water partition coefficients computed with the quantum mechanical COSMO-SAC solvation model. These solute parameters, referred to as Quantum Chemically estimated Abraham Parameters (QCAP), are further adjusted by fitting to experimentally based APs, using the QCAP parameters as the independent variables, so that they are compatible with existing Abraham pp-LFERs. QCAP and adjusted QCAP values for 1827 neutral chemicals are included. For 24 solvent-water systems including octanol-water, predicted log solvent-water partition coefficients using adjusted QCAP have the smallest root-mean-square errors (RMSEs, 0.314-0.602) compared to predictions made using APs estimated with the molecular-fragment-based method ABSOLV (0.45-0.716). For munition and munition-like compounds, adjusted QCAP has a much lower RMSE (0.860) than ABSOLV (4.45), which essentially fails for these compounds.
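The regression step that jointly determines S, A, and B can be sketched as an ordinary least-squares problem; the descriptor matrix and target values below are synthetic placeholders, not the COSMO-SAC data used in the paper.

```python
import numpy as np

def fit_parameters(descriptors, log_k_solvent_water):
    """Solve the multiple linear regression used to back out solute
    parameters from computed solvent-water partition coefficients:
    log K ≈ X · p in a least-squares sense (X and the targets here are
    placeholders, not the paper's actual COSMO-SAC data)."""
    p, residuals, rank, _ = np.linalg.lstsq(
        descriptors, log_k_solvent_water, rcond=None)
    return p

# 65 solvent systems, 3 unknown parameters (S, A, B) per solute.
rng = np.random.default_rng(0)
X = rng.normal(size=(65, 3))           # system coefficients
true_p = np.array([0.8, 0.1, 0.5])     # hypothetical S, A, B
logK = X @ true_p + 0.01 * rng.normal(size=65)
print(fit_parameters(X, logK))         # recovers ~[0.8, 0.1, 0.5]
```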
Caricato, Marco
2013-07-28
The calculation of vertical electronic transition energies of molecular systems in solution with accurate quantum mechanical methods requires the use of approximate and yet reliable models to describe the effect of the solvent on the electronic structure of the solute. The polarizable continuum model (PCM) of solvation represents a computationally efficient way to describe this effect, especially when combined with coupled cluster (CC) methods. Two formalisms are available to compute transition energies within the PCM framework: State-Specific (SS) and Linear-Response (LR). The former provides a more complete account of the solute-solvent polarization in the excited states, while the latter is computationally very efficient (i.e., comparable to gas phase) and transition properties are well defined. In this work, I review the theory for the two formalisms within CC theory with a focus on their computational requirements, and present the first implementation of the LR-PCM formalism with the coupled cluster singles and doubles method (CCSD). Transition energies computed with LR- and SS-CCSD-PCM are presented, as well as a comparison between solvation models in the LR approach. The numerical results show that the two formalisms provide different absolute values of transition energy, but similar relative solvatochromic shifts (from nonpolar to polar solvents). The LR formalism may then be used to explore the solvent effect on multiple states and evaluate transition probabilities, while the SS formalism may be used to refine the description of specific states and for the exploration of excited state potential energy surfaces of solvated systems.
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integrated computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
Koshka, Yaroslav; Perera, Dilina; Hall, Spencer; Novotny, M A
2017-07-01
The possibility of using a D-Wave 2X quantum computer with more than 1000 qubits to determine the global minimum of the energy landscape of trained restricted Boltzmann machines (RBMs) is investigated. In order to overcome the problem of limited interconnectivity in the D-Wave architecture, the proposed RBM embedding combines multiple qubits to represent a particular RBM unit. The results for the lowest-energy (ground) state and some of the higher-energy states found by the D-Wave 2X were compared with those of the classical simulated annealing (SA) algorithm. In many cases, the D-Wave machine successfully found the same RBM lowest-energy state as that found by SA. In some examples, the D-Wave machine returned a state corresponding to one of the higher-energy local minima found by SA. The inherently imperfect embedding of the RBM into the Chimera lattice explored in this work (i.e., the multiple qubits combined into a single RBM unit are not guaranteed to all be aligned) and the existence of small, persistent biases in the D-Wave hardware may cause a discrepancy between the D-Wave and the SA results. In some of the investigated cases, introduction of a small bias field into the energy function or optimization of the chain-strength parameter in the D-Wave embedding successfully addressed difficulties of the particular RBM embedding. With further development of the D-Wave hardware, the approach will be suitable for much larger numbers of RBM units.
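For context, a brute-force reference for the RBM ground-state problem the annealer is asked to solve, using the standard RBM energy E(v,h) = -a·v - b·h - vᵀWh; exhaustive enumeration is only feasible for toy sizes, which is precisely the annealer's motivation, and the random parameters are illustrative.

```python
import itertools
import numpy as np

def rbm_energy(v, h, a, b, W):
    """Standard RBM energy E(v,h) = -a·v - b·h - v·W·h."""
    return -(a @ v) - (b @ h) - v @ W @ h

def ground_state(a, b, W):
    """Exhaustive search over all (v, h) configurations -- feasible only
    for tiny RBMs, which is what makes quantum annealers attractive."""
    nv, nh = len(a), len(b)
    best = (np.inf, None, None)
    for v in itertools.product([0, 1], repeat=nv):
        for h in itertools.product([0, 1], repeat=nh):
            e = rbm_energy(np.array(v), np.array(h), a, b, W)
            if e < best[0]:
                best = (e, v, h)
    return best

rng = np.random.default_rng(1)
a, b = rng.normal(size=4), rng.normal(size=3)
W = rng.normal(size=(4, 3))
print(ground_state(a, b, W))   # reference answer for annealer output
```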
NASA Astrophysics Data System (ADS)
Saha, Srilekha; Maiti, Santanu K.; Karmakar, S. N.
2016-09-01
The electronic behavior of a 1D Aubry chain with Hubbard interaction is critically analyzed in the presence of an electric field. Multiple energy bands are generated as a result of the Hubbard correlation and the Aubry potential, and within these bands localized states develop under the applied electric field. Within a tight-binding framework, we compute the electronic transmission probability and the average density of states using a Green's function approach, where the interaction parameter is treated under the Hartree-Fock mean-field scheme. From our analysis we find that selective transmission can be obtained by tuning the energy of the injected electrons, and thus the present model can be utilized as a controlled switching device.
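The transmission probability in such Green's function treatments is commonly evaluated with the standard two-terminal (Caroli) formula; the expressions below give that generic form, with H_C the mean-field chain Hamiltonian and Σ_L,R the lead self-energies, not the paper's specific derivation.

```latex
% Standard Green's-function (Caroli) form of the two-terminal
% transmission probability for a tight-binding chain; H_C is the
% Hartree-Fock mean-field chain Hamiltonian and Sigma_{L,R} the
% lead self-energies.
G^{r}(E) = \left[(E + i0^{+})I - H_{C} - \Sigma_{L}(E) - \Sigma_{R}(E)\right]^{-1},
\qquad
\Gamma_{L,R}(E) = i\left[\Sigma_{L,R}(E) - \Sigma_{L,R}^{\dagger}(E)\right],
\qquad
T(E) = \mathrm{Tr}\left[\Gamma_{L}(E)\, G^{r}(E)\, \Gamma_{R}(E)\, G^{r\dagger}(E)\right]
```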
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology to manage and improve the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be kept at acceptable levels by dynamically adding or removing servers. One more factor that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allows us to estimate a number of performance measures.
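A minimal sketch of the hysteresis policy, assuming two queue-length thresholds; the values are illustrative, and the paper's queuing-theoretic steady-state analysis is not reproduced here.

```python
def adjust_servers(active, queue_len, up_threshold, down_threshold,
                   min_servers=1, max_servers=64):
    """Hysteresis control of the server pool: switch a server on only
    above the upper queue-length threshold and off only below the lower
    one, so short load spikes and dips do not trigger costly setups."""
    if queue_len > up_threshold and active < max_servers:
        return active + 1          # activation (slow, costs setup energy)
    if queue_len < down_threshold and active > min_servers:
        return active - 1          # deactivation to save energy
    return active                  # inside the hysteresis band: hold

active = 4
for q in [10, 35, 60, 58, 20, 4, 3, 30]:     # sampled queue lengths
    active = adjust_servers(active, q, up_threshold=50, down_threshold=5)
    print(f"queue={q:3d} -> active servers={active}")
```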
Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.
2015-01-01
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-09
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
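One way to write the cycle described above, with superscripts denoting the solvent model in which each leg is evaluated; this is a schematic restatement of the text, not a formula quoted from the paper.

```latex
% Thermodynamic cycle connecting the two basins: the explicit-solvent
% conformational free energy difference is assembled from the fast
% implicit-solvent leg plus two localized (per-basin) solvent
% decoupling/recoupling legs, avoiding barrier crossing in explicit
% solvent.
\Delta A_{A \rightarrow B}^{\mathrm{expl}}
  = \Delta A_{A}^{\mathrm{expl} \rightarrow \mathrm{impl}}
  + \Delta A_{A \rightarrow B}^{\mathrm{impl}}
  + \Delta A_{B}^{\mathrm{impl} \rightarrow \mathrm{expl}}
```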
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources that have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
NASA Astrophysics Data System (ADS)
Sharma, Abhiraj; Suryanarayana, Phanish
2018-05-01
We present an accurate and efficient real-space Density Functional Theory (DFT) framework for the ab initio study of non-orthogonal crystal systems. Specifically, employing a local reformulation of the electrostatics, we develop a novel Kronecker product formulation of the real-space kinetic energy operator that significantly reduces the number of operations associated with the Laplacian-vector multiplication, the dominant cost in practical computations. In particular, we reduce the scaling with respect to finite-difference order from quadratic to linear, thereby significantly bridging the gap in computational cost between non-orthogonal and orthogonal systems. We verify the accuracy and efficiency of the proposed methodology through selected examples.
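The computational advantage of the separable (Kronecker product) structure can be illustrated as follows: the 1D finite-difference matrix is applied along each axis of the grid function instead of assembling the full 3D operator. The second-order periodic stencil below is an illustrative stand-in for the high-order operators used in practice, not the paper's implementation.

```python
import numpy as np

def fd1d(n, h):
    """Second-order 1D finite-difference Laplacian with periodic wrap."""
    D = -2.0 * np.eye(n)
    idx = np.arange(n)
    D[idx, (idx + 1) % n] = 1.0
    D[idx, (idx - 1) % n] = 1.0
    return D / h**2

def laplacian_apply(phi, Dx, Dy, Dz):
    """Apply L = Dx⊗I⊗I + I⊗Dy⊗I + I⊗I⊗Dz to a grid function without
    ever forming the N^3-by-N^3 operator: three small dense products
    instead of one huge sparse one."""
    return (np.einsum('ia,ajk->ijk', Dx, phi)
            + np.einsum('jb,ibk->ijk', Dy, phi)
            + np.einsum('kc,ijc->ijk', Dz, phi))

n, h = 16, 0.3
D = fd1d(n, h)
phi = np.random.rand(n, n, n)
out = laplacian_apply(phi, D, D, D)
```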
The Study of the Successive Metal-ligand Binding Energies for Fe(+), Fe(-), V(+) and Co(+)
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Maitre, Philippe; Langhoff, Stephen R. (Technical Monitor)
1994-01-01
The successive binding energies of CO and H2O to Fe(+), CO to Fe(-), and H2 to Co(+) and V(+) are presented. Overall the computed results are in good agreement with experiment. The trends in binding energies are analyzed in terms of metal to ligand donation, ligand to metal donation, ligand-ligand repulsion, and changes in the metal atom, such as hybridization, promotion, and spin multiplicity. The geometry and vibrational frequencies are also shown to be directly affected by these effects.
The Study Of The Successive Metal-Ligand Binding Energies For Fe+, Fe-, V+ and Co+
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Maitre, Philippe; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The successive binding energies of CO and H2O to Fe(+), CO to Fe(-), and H2 to Co(+) and V(+) are presented. Overall the computed results are in good agreement with experiment. The trends in binding energies are analyzed in terms of metal to ligand donation, ligand to metal donation, ligand-ligand repulsion, and changes in the metal atom, such as hybridization, promotion, and spin multiplicity. The geometry and vibrational frequencies are also shown to be directly affected by these effects.
Born approximation, multiple scattering, and butterfly algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Qiao, Zhijun
2014-06-01
Many imaging algorithms have been designed assuming the absence of multiple scattering. In the 2013 SPIE proceedings, we discussed an algorithm for removing high-order scattering components from collected data. In this paper, our goal is to continue this work. First, we survey the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in our target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.
RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems.
Rodriguez Arreola, Alberto; Balsamo, Domenico; Merrett, Geoff V; Weddell, Alex S
2018-01-10
Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor's state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches.
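A minimal sketch of the state-retention idea (not the RESTOP API): configuration writes are shadowed at run time and replayed after a power interruption. The bus class and register addresses are hypothetical, and a real system would keep the shadow log in non-volatile memory.

```python
class PeripheralStateLogger:
    """Minimal sketch of the RESTOP idea: every configuration write sent
    to an external peripheral (over SPI/I2C) is recorded at run time so
    the peripheral can be restored after a power interruption.  The bus
    object and register addresses are illustrative, not the RESTOP API."""

    def __init__(self, bus):
        self.bus = bus
        self.shadow = {}                     # would live in NV memory

    def write_register(self, reg, value):
        self.shadow[reg] = value             # record first ...
        self.bus.write(reg, value)           # ... then forward to device

    def restore(self):
        """Replay the recorded configuration after a power cycle."""
        for reg, value in self.shadow.items():
            self.bus.write(reg, value)

class FakeBus:                               # stand-in for an SPI/I2C driver
    def write(self, reg, value):
        print(f"bus write reg=0x{reg:02X} val=0x{value:02X}")

radio = PeripheralStateLogger(FakeBus())
radio.write_register(0x01, 0x5F)             # configure peripheral
radio.write_register(0x07, 0x03)
radio.restore()                              # after power is restored
```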
Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton
2013-08-15
Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
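The one-step MM-to-QM correction referred to above is conventionally written in the Zwanzig exponential-averaging form, sketched below; notation is generic rather than quoted from the paper.

```latex
% One-step free energy perturbation from the MM to the QM potential,
% evaluated over configurations sampled with the MM Hamiltonian
% (Zwanzig exponential averaging); beta = 1/(k_B T).
\Delta A_{\mathrm{MM} \rightarrow \mathrm{QM}}
  = -k_{B}T \,\ln \Big\langle
      \exp\!\big[-\beta \left(E_{\mathrm{QM}} - E_{\mathrm{MM}}\right)\big]
    \Big\rangle_{\mathrm{MM}}
```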
Materials Science Research | Materials Science | NREL
Structure Theory: We use high-performance computing to design and discover materials for energy, and to study the structure of surfaces and critical interfaces. Materials Discovery.
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
Piezoelectric T-matrix approach and multiple scattering of electroacoustic waves in thin plates
NASA Astrophysics Data System (ADS)
Darabi, Amir; Ruzzene, Massimo; Leamy, Michael J.
2017-12-01
Metamaterial-enhanced harvesting (MEH) of wave energy in thin plates and other structures has appeared recently for powering small sensors and devices. To support continued MEH concept development, this paper proposes a fully coupled T-matrix formulation for analyzing scattering of incident wave energy from a piezoelectric patch attached to a thin plate. More generally, the T-matrix represents an input-output relationship between incident and reflected waves from inclusions in a host layer, and is introduced herein for a piezoelectric patch connected to an external circuit. The utility of a T-matrix formalism is most apparent in scenarios employing multiple piezoelectric harvesters, where it can be re-used with other T-matrices (such as those previously formulated for rigid, void, and elastic inclusions) in a multiple scattering context to compute the total wavefield and other response quantities, such as harvested power. Following development of the requisite T-matrix, harvesting in an example funnel-shaped metamaterial waveguide structure is predicted using the multiple scattering approach. Enhanced wave energy harvesting predictions are verified through comparisons to experimental results of a funnel-shaped waveguide formed by placing rigid aluminum inclusions in, and multiple piezoelectric harvesters on, a Lexan plate. Good agreement with predicted response quantities is noted.
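In generic form, the T-matrix input-output relation and the self-consistent multiple-scattering system read as follows; this is a standard formulation, and the translation operators and the piezoelectric circuit coupling specific to the paper are not spelled out.

```latex
% Generic T-matrix relations: each scatterer maps its locally incident
% wave coefficients a_i to scattered coefficients b_i, and the incident
% field at scatterer i includes the fields scattered by all others
% (translated by the operators \mathcal{T}_{ij} -- a standard
% multiple-scattering form, not the paper's specific derivation).
b_{i} = T_{i}\, a_{i},
\qquad
a_{i} = a_{i}^{\mathrm{inc}} + \sum_{j \neq i} \mathcal{T}_{ij}\, b_{j}
```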
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy to electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation for the DC voltage output of the piezoelectric patches in parallel configuration is derived for patches connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
Hong, Zhi-Jie; Chen, Cheng-Jueng; Yu, Jyh-Cherng; Chan, De-Chuan; Chou, Yu-Ching; Liang, Chia-Ming; Hsu, Sheng-Der
2016-01-01
We aimed to evaluate the benefit of whole-body computed tomography (WBCT) scanning for unconscious adult patients suffering from high-energy multiple trauma compared with the conventional stepwise approach of organ-selective CT. In total, 144 unconscious patients with high-energy multiple trauma from a single level I trauma center in North Taiwan were enrolled from January 2009 to December 2013. All patients were managed by a well-trained trauma team and were suitable for CT examination. The enrolled patients were all transferred directly from the scene of an accident, not from other medical institutions with a definitive diagnosis. The scanning regions of WBCT included head, neck, chest, abdomen, and pelvis. We analyzed differences between the non-WBCT and WBCT groups, including gender, age, hospital stay, Injury Severity Score, Glasgow Coma Scale, Revised Trauma Score, time in the emergency department (ED), medical cost, and survival outcome. Fifty-five patients received the conventional approach for treating trauma, and 89 patients received immediate WBCT scanning after an initial examination. Patients’ time in the ED was significantly shorter in the WBCT group than in the non-WBCT group (158.62 ± 80.13 vs 216.56 ± 168.32 min, P = 0.02). After adjusting for all possible confounding factors, we also found that the survival outcome of the WBCT group was better than that of the non-WBCT group (odds ratio: 0.21, 95% confidence interval: 0.06–0.75, P = 0.016). Performing WBCT early during initial trauma management is a better approach for treating unconscious patients with high-energy multiple trauma. PMID:27631215
Hong, Zhi-Jie; Chen, Cheng-Jueng; Yu, Jyh-Cherng; Chan, De-Chuan; Chou, Yu-Ching; Liang, Chia-Ming; Hsu, Sheng-Der
2016-09-01
We aimed to evaluate the benefit of whole-body computed tomography (WBCT) scanning for unconscious adult patients suffering from high-energy multiple trauma compared with the conventional stepwise approach of organ-selective CT. In total, 144 unconscious patients with high-energy multiple trauma from a single level I trauma center in North Taiwan were enrolled from January 2009 to December 2013. All patients were managed by a well-trained trauma team and were suitable for CT examination. The enrolled patients were all transferred directly from the scene of an accident, not from other medical institutions with a definitive diagnosis. The scanning regions of WBCT included head, neck, chest, abdomen, and pelvis. We analyzed differences between the non-WBCT and WBCT groups, including gender, age, hospital stay, Injury Severity Score, Glasgow Coma Scale, Revised Trauma Score, time in the emergency department (ED), medical cost, and survival outcome. Fifty-five patients received the conventional approach for treating trauma, and 89 patients received immediate WBCT scanning after an initial examination. Patients' time in the ED was significantly shorter in the WBCT group than in the non-WBCT group (158.62 ± 80.13 vs 216.56 ± 168.32 min, P = 0.02). After adjusting for all possible confounding factors, we also found that the survival outcome of the WBCT group was better than that of the non-WBCT group (odds ratio: 0.21, 95% confidence interval: 0.06-0.75, P = 0.016). Performing WBCT early during initial trauma management is a better approach for treating unconscious patients with high-energy multiple trauma.
NASA Astrophysics Data System (ADS)
Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.
2009-07-01
The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time yet. Thus users face a quandary: how to manage today's data complexity and size when these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3]; similarly, data fusion across BES facilities will lead to new scientific discoveries.
Fine-grained parallel RNAalifold algorithm for RNA secondary structure prediction on FPGA
Xia, Fei; Dou, Yong; Zhou, Xingming; Yang, Xuejun; Xu, Jiaqing; Zhang, Yang
2009-01-01
Background In the field of RNA secondary structure prediction, the RNAalifold algorithm is one of the most popular methods using free energy minimization. However, general-purpose computers, including parallel computers or multi-core computers, exhibit parallel efficiency of no more than 50%. Field-Programmable Gate Array (FPGA) chips provide a new approach to accelerating RNAalifold by exploiting fine-grained custom design. Results RNAalifold shows complicated data dependences, in which the dependence distance is variable and the dependences span two dimensions. We propose a systolic array structure including one master Processing Element (PE) and multiple slave PEs for fine-grained hardware implementation on FPGA. We exploit data reuse schemes to reduce the need to load energy matrices from external memory. We also propose several methods to reduce the energy table parameter size by 80%. Conclusion To our knowledge, our implementation with 16 PEs is the only FPGA accelerator implementing the complete RNAalifold algorithm. The experimental results show a factor of 12.2 speedup over the RNAalifold (Vienna Package 1.6.5) software for a group of aligned RNA sequences with 2981 residues running on a Personal Computer (PC) platform with a Pentium 4 2.6 GHz CPU. PMID:19208138
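The data dependences described above are easiest to see in a toy Nussinov-style recurrence, sketched below. This maximizes base-pair count rather than minimizing the thermodynamic free energy RNAalifold actually uses, but it exhibits the same variable-distance, two-dimensional dependence structure that the systolic array must respect.

```python
# Toy Nussinov-style dynamic program illustrating the two-dimensional,
# variable-distance dependencies that make RNA folding hard to parallelize.
# For brevity, no minimum hairpin-loop size is enforced.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_pairs(seq: str) -> int:
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # cell (i, j) needs all shorter spans
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]              # option: leave i unpaired
            for k in range(i + 1, j + 1):    # option: pair i with k (variable distance)
                if (seq[i], seq[k]) in PAIRS:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAAUCC"))  # 3
```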
Shock Interaction with Random Spherical Particle Beds
NASA Astrophysics Data System (ADS)
Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth
2016-11-01
In this talk we present results on fully resolved simulations of shock interaction with a randomly distributed bed of particles. Multiple simulations were carried out by varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand 1) the effect of the shockwave and volume fraction on the forces experienced by the particles, 2) the effect of the particles on the shock wave, and 3) fluid-mediated particle-particle interactions. The peak drag force for particles at different volume fractions shows a downward trend as the depth of the bed increases. This can be attributed to dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuations in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles, resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. The average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow field contour plots to support our observations. Funding: U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science
NASA Astrophysics Data System (ADS)
Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke
2011-12-01
We present a multiple scattering package to calculate the cross-section of various spectroscopies, namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile. Program summary: Program title: MsSpec-1.0. Catalogue identifier: AEJT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 504 438. No. of bytes in distributed program, including test data, etc.: 14 448 180. Distribution format: tar.gz. Programming language: Fortran 77. Computer: any. Operating system: Linux, MacOS. Classification: 7.2. External routines: Lapack (http://www.netlib.org/lapack/). Nature of problem: Calculation of the cross-section of various spectroscopies. Solution method: Multiple scattering. Running time: The test runs provided only take a few seconds to run.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2007-01-09
The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs
Archibald, R.; Evans, K. J.; Salinger, A.
2015-06-01
The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable time-stepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase the computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitate the performance improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krueger, Jens; Micikevicius, Paulius; Williams, Samuel
Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation and internode communication), highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
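For orientation, the sketch below gives a minimal Python version of the kind of isotropic (ISO) forward-modeling stencil at the heart of RTM, with an illustrative homogeneous velocity model and source term; the production kernels discussed in the abstract are of course heavily optimized native code, and the VTI/TTI variants add anisotropic terms.

```python
import numpy as np

# Minimal isotropic forward-modeling sketch: a 2nd-order-in-time update of the
# 2D acoustic wavefield using a 5-point Laplacian. Grid, velocity, and source
# are illustrative placeholders (CFL number here is 0.2, comfortably stable).
nx, nz, nt = 200, 200, 300
dx, dt = 10.0, 1e-3
v = np.full((nz, nx), 2000.0)            # homogeneous velocity model (m/s)
prev, curr = np.zeros((nz, nx)), np.zeros((nz, nx))

for it in range(nt):
    lap = (-4.0 * curr[1:-1, 1:-1]
           + curr[:-2, 1:-1] + curr[2:, 1:-1]
           + curr[1:-1, :-2] + curr[1:-1, 2:]) / dx**2
    nxt = np.zeros_like(curr)
    nxt[1:-1, 1:-1] = (2.0 * curr[1:-1, 1:-1] - prev[1:-1, 1:-1]
                       + (v[1:-1, 1:-1] * dt) ** 2 * lap)
    nxt[nz // 2, nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)  # source
    prev, curr = curr, nxt

print(curr.max())
```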
NASA Astrophysics Data System (ADS)
Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.
2018-05-01
A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.
Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport
NASA Technical Reports Server (NTRS)
Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.
2008-01-01
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.
Liu, Lihong; Liu, Jian; Martinez, Todd J.
2015-12-17
Here, we investigate the photoisomerization of a model retinal protonated Schiff base (trans-PSB3) using ab initio multiple spawning (AIMS) based on multi-state second order perturbation theory (MSPT2). Discrepancies between the photodynamical mechanism computed with three-root state-averaged complete active space self-consistent field (SA-3-CASSCF, which does not include dynamic electron correlation effects) and MSPT2 show that dynamic correlation is critical in this photoisomerization reaction. Furthermore, we show that the photodynamics of trans-PSB3 is not well described by predictions based on minimum energy conical intersections (MECIs) or minimum energy conical intersection (CI) seam paths. Instead, most of the CIs involved in the photoisomerization are far from MECIs and minimum energy CI seam paths. Thus, both dynamical nuclear effects and dynamic electron correlation are critical to understanding the photochemical mechanism.
Strong correlation in incremental full configuration interaction
NASA Astrophysics Data System (ADS)
Zimmerman, Paul M.
2017-06-01
Incremental Full Configuration Interaction (iFCI) reaches high accuracy electronic energies via a many-body expansion of the correlation energy. In this work, the Perfect Pairing (PP) ansatz replaces the Hartree-Fock reference of the original iFCI method. This substitution captures a large amount of correlation at zero-order, which allows iFCI to recover the remaining correlation energy with low-order increments. The resulting approach, PP-iFCI, is size consistent, size extensive, and systematically improvable with increasing order of incremental expansion. Tests on multiple single bond, multiple double bond, and triple bond dissociations of main group polyatomics using double and triple zeta basis sets demonstrate the power of the method for handling strong correlation. The smooth dissociation profiles that result from PP-iFCI show that FCI-quality ground state computations are now within reach for systems with up to about 10 heavy atoms.
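For reference, the incremental expansion underlying iFCI can be written as the standard many-body series below, where the one-body increments are computed in the chosen reference (here PP rather than Hartree-Fock) and higher-order terms correct for pairs, triples, and so on:

```latex
% Many-body expansion of the correlation energy used in incremental FCI:
% one-body increments, then pairwise corrections, etc., truncated at low order.
\begin{align}
  E_c &= \sum_i \epsilon_i
       + \sum_{i<j} \Delta\epsilon_{ij}
       + \sum_{i<j<k} \Delta\epsilon_{ijk} + \cdots, \\
  \Delta\epsilon_{ij} &= \epsilon_{ij} - \epsilon_i - \epsilon_j .
\end{align}
```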
NASA Astrophysics Data System (ADS)
Schenke, Björn; Tribedy, Prithwish; Venugopalan, Raju
2012-09-01
The event-by-event multiplicity distribution, the energy densities and energy density weighted eccentricity moments ɛn (up to n=6) at early times in heavy-ion collisions at both the BNL Relativistic Heavy Ion Collider (RHIC) (√s_NN = 200 GeV) and the CERN Large Hadron Collider (LHC) (√s_NN = 2.76 TeV) are computed in the IP-Glasma model. This framework combines the impact parameter dependent saturation model (IP-Sat) for nucleon parton distributions (constrained by HERA deeply inelastic scattering data) with an event-by-event classical Yang-Mills description of early-time gluon fields in heavy-ion collisions. The model produces multiplicity distributions that are convolutions of negative binomial distributions without further assumptions or parameters. In the limit of large dense systems, the n-particle gluon distribution predicted by the Glasma-flux tube model is demonstrated to be nonperturbatively robust. In the general case, the effect of additional geometrical fluctuations is quantified. The eccentricity moments are compared to the MC-KLN model; a noteworthy feature is that fluctuation-dominated odd moments are consistently larger than in the MC-KLN model.
Calculation of protein-ligand binding affinities.
Gilson, Michael K; Zhou, Huan-Xiang
2007-01-01
Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.
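As one concrete example of the end-point methods reviewed, the MM-PBSA estimate mentioned above is commonly written in the schematic form below (ensemble averages over simulation snapshots of the complex C, receptor R, and ligand L; PB and SA denote the polar and nonpolar solvation terms):

```latex
% Schematic MM-PBSA estimate of the binding free energy.
\begin{equation}
  \Delta G_{\mathrm{bind}} \approx G_{C} - G_{R} - G_{L},
  \qquad
  G_{X} = \left\langle E_{\mathrm{MM}} + G_{\mathrm{PB}} + G_{\mathrm{SA}} \right\rangle_{X}
          - T\, S_{\mathrm{conf},X}.
\end{equation}
```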
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Maitre, Philippe; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The successive binding energies of CO and H2O to Fe+, CO to Fe-, and H2 to Co+ and V+ are presented. Overall the computed results are in good agreement with experiment. The trends in binding energies are analyzed in terms of metal-to-ligand donation, ligand-to-metal donation, ligand-ligand repulsion, and changes in the metal atom, such as hybridization, promotion, and spin multiplicity. The geometry and vibrational frequencies are also shown to be directly affected by these effects.
Arnold, Jeffrey
2018-05-14
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
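One of the summation-accuracy techniques such a talk typically covers is Kahan (compensated) summation; a minimal Python sketch follows, with an example where naive accumulation loses every small addend.

```python
# Kahan (compensated) summation: a running compensation term recovers the
# low-order bits that plain floating-point accumulation discards.
def kahan_sum(values):
    total, comp = 0.0, 0.0
    for x in values:
        y = x - comp              # subtract the error carried from last step
        t = total + y             # low-order bits of y may be lost here...
        comp = (t - total) - y    # ...but are recovered into the compensation
        total = t
    return total

vals = [1e16] + [1.0] * 10_000
print(sum(vals) - 1e16)        # naive: the 1.0s vanish (prints 0.0)
print(kahan_sum(vals) - 1e16)  # compensated: recovers ~10000.0
```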
Calculation of absolute protein-ligand binding free energy using distributed replica sampling.
Rodinger, Tomas; Howell, P Lynne; Pomès, Régis
2008-10-21
Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
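To illustrate the thermodynamic-integration step that turns sampled mean forces into a potential of mean force, here is a minimal numerical sketch (synthetic mean-force data stand in for the per-replica simulation output; this is not the authors' code):

```python
import numpy as np

# Thermodynamic-integration sketch: replicas sample the mean force <dU/dxi> at
# fixed points along a reaction coordinate xi; the potential of mean force is
# the integral of those averages, W(xi) = \int <dU/dxi'> dxi'.
xi = np.linspace(0.0, 12.0, 25)                              # extraction coordinate (Å)
mean_force = -5.0 * np.exp(-((xi - 2.0) / 1.5) ** 2) + 0.3   # placeholder data

# PMF via cumulative trapezoidal integration.
pmf = np.concatenate([[0.0], np.cumsum(
    0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi))])
print(pmf.min(), pmf[-1])
```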
Calculation of absolute protein-ligand binding free energy using distributed replica sampling
NASA Astrophysics Data System (ADS)
Rodinger, Tomas; Howell, P. Lynne; Pomès, Régis
2008-10-01
Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio
2011-11-01
We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual-objective optimization problem, which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.
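The flavor of the greedy scheduling step can be sketched as below (the data structures are hypothetical; power control and the minimum-throughput guarantee are omitted): users are packed into successive time slots subject to a per-slot interference budget standing in for the EMI constraints.

```python
# Greedy slot assignment: in each time slot, admit secondary users in order of
# decreasing demand as long as the slot's total interference budget holds.
def greedy_schedule(users, emi_budget):
    """users: list of (user_id, demand, interference) tuples."""
    slots = []
    pending = sorted(users, key=lambda u: -u[1])     # most demanding first
    while pending:
        load, slot, rest = 0.0, [], []
        for uid, demand, interference in pending:
            if load + interference <= emi_budget:
                slot.append(uid)
                load += interference
            else:
                rest.append((uid, demand, interference))
        slots.append(slot)
        pending = rest
    return slots

print(greedy_schedule([(1, 5, 0.4), (2, 3, 0.5), (3, 8, 0.3), (4, 1, 0.6)], 1.0))
# -> [[3, 1], [2], [4]]
```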
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the ’omics’ context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software.
Fabregat-Traver, Diego; Sharapov, Sodbo Zh; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL.
RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems
Rodriguez Arreola, Alberto; Balsamo, Domenico
2018-01-01
Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor’s state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches. PMID:29320441
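A minimal sketch of the state-retention idea follows (Python for readability; the bus driver and register map are hypothetical, and a real system would keep the shadow log in non-volatile memory): every configuration write is logged before it reaches the device, so the log can be replayed after a power interruption.

```python
# Sketch of RESTOP-style peripheral state retention: a wrapper records
# configuration writes at run time and replays them after power loss.
class StatefulPeripheral:
    def __init__(self, bus):
        self.bus = bus
        self.shadow = {}                      # register -> last value written

    def write_reg(self, reg, value):
        self.shadow[reg] = value              # log first (NVM in practice)
        self.bus.transfer(reg, value)         # then touch the real device

    def restore(self):
        """Replay the logged configuration after a reboot/power loss."""
        for reg, value in self.shadow.items():
            self.bus.transfer(reg, value)

class FakeBus:                                # stand-in for a real SPI/I2C driver
    def transfer(self, reg, value):
        print(f"SPI write reg=0x{reg:02X} value=0x{value:02X}")

dev = StatefulPeripheral(FakeBus())
dev.write_reg(0x20, 0x57)                     # e.g., configure an accelerometer
dev.restore()                                 # after power returns: same writes
```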
Dual- and Multi-Energy CT: Principles, Technical Approaches, and Clinical Applications
Leng, Shuai; Yu, Lifeng; Fletcher, Joel G.
2015-01-01
In x-ray computed tomography (CT), materials having different elemental compositions can be represented by identical pixel values on a CT image (i.e., CT numbers), depending on the mass density of the material. Thus, the differentiation and classification of different tissue types and contrast agents can be extremely challenging. In dual-energy CT, an additional attenuation measurement is obtained with a second x-ray spectrum (i.e., a second “energy”), allowing the differentiation of multiple materials. Alternatively, this allows quantification of the mass density of two or three materials in a mixture with known elemental composition. Recent advances in the use of energy-resolving, photon-counting detectors for CT imaging suggest the ability to acquire data in multiple energy bins, which is expected to further improve the signal-to-noise ratio for material-specific imaging. In this review, the underlying motivation and physical principles of dual- or multi-energy CT are reviewed and each of the current technical approaches is described. In addition, current and evolving clinical applications are introduced. © RSNA, 2015 PMID:26302388
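The core of two-material decomposition reduces, per voxel, to a small linear solve; a sketch with illustrative (not tabulated) attenuation coefficients follows.

```python
import numpy as np

# Two-material decomposition sketch: the measured attenuation at two tube
# energies is a linear mix of two basis materials, so per-voxel densities
# follow from a 2x2 solve. Coefficients below are illustrative placeholders.
M = np.array([[0.20, 0.35],    # mu/rho at low  kV: [water, iodine]
              [0.18, 0.15]])   # mu/rho at high kV: [water, iodine]

mu_measured = np.array([0.26, 0.19])          # one voxel's attenuation (1/cm)
rho_water, rho_iodine = np.linalg.solve(M, mu_measured)
print(rho_water, rho_iodine)                  # basis-material densities
```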
Molecular t-matrices for Low-Energy Electron Diffraction (TMOL v1.1)
NASA Astrophysics Data System (ADS)
Blanco-Rey, Maria; de Andres, Pedro; Held, Georg; King, David A.
2004-08-01
We describe a FORTRAN-90 program that computes scattering t-matrices for a molecule. These can be used in a Low-Energy Electron Diffraction program to solve the molecular structural problem very efficiently. The intramolecular multiple scattering is computed within a Dyson-like approach, using free-space Green propagators in a basis of spherical waves. The advantage of this approach lies in exploiting the chemical identity of the molecule, and in the simplicity of translating and rotating these t-matrices without performing a new multiple-scattering calculation for each configuration. FORTRAN-90 routines for rotating the resulting t-matrices using Wigner matrices are also provided. Program summary: Title of program: TMOL. Catalogue number: ADUF. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Alpha ev6-21264 (700 MHz) and Pentium-IV. Operating systems: Digital UNIX V5.0 and Linux (Red Hat 8.0). Programming language: FORTRAN-90/95 (Compaq True64 compiler, and Intel Fortran Compiler 7.0 for Linux). High-speed storage required for the test run: minimum 64 Mbytes; it can grow depending on the system considered. Disk storage required: none. No. of bits in a word: 64 and 32. No. of lines in distributed program, including test data etc.: 5404. No. of bytes in distributed program, including test data etc.: 59 856. Distribution format: tar.gz. Nature of problem: We describe the FORTRAN-90 program TMOL (v1.1) for the computation of non-diagonal scattering t-matrices for molecules or any other poly-atomic sub-unit of surface structures. These matrices can be used in a standard Low-Energy Electron Diffraction program, such as LEED90 or CLEED. Method of solution: A general non-diagonal t-matrix is assumed for the atoms or more general scatterers forming the molecule. The molecular t-matrix is solved by adding the possible intramolecular multiple-scattering events using Green's propagator formalism. The resulting t-matrix is referred to the mass centre of the molecule and can be easily translated with these propagators and rotated applying Wigner matrices. Typical running time: Calculating the t-matrix for a single energy takes a few seconds. Time depends on the maximum angular momentum quantum number, lmax, and the number of scatterers in the molecule, N. Running time scales as lmax^6 and N^3. References: [1] S. Andersson, J.B. Pendry, J. Phys. C: Solid St. Phys. 13 (1980) 3547. [2] A. Gonis, W.H. Butler, Multiple Scattering in Solids, Springer-Verlag, Berlin/New York, 2000.
Mobile Cloud Computing with SOAP and REST Web Services
NASA Astrophysics Data System (ADS)
Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid
2018-05-01
Mobile computing in conjunction with mobile web services offers a promising approach to tackling the limitations of mobile devices. Mobile web services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches have their own distinct features; given the resource constraints of mobile devices, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches to implementing computational offloading as a viable solution for mitigating the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach which does not engage the mobile resources for long periods. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST Web services approaches for mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful Web services execution is far better than executing the same application via the SOAP Web services approach, in terms of execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP approach; in terms of energy consumption, REST execution is about 250% better than SOAP.
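A RESTful offload call of the kind benchmarked here can be sketched in a few lines (the endpoint URL and JSON schema below are hypothetical, not the paper's actual service):

```python
import requests  # widely used HTTP client; the server side is assumed to exist

# RESTful offloading of a matrix multiplication, mirroring the prototype app:
# the handset POSTs the operands as JSON and receives the product back, so the
# device itself performs no heavy computation.
def offload_matmul(a, b, url="http://example.com/api/matmul"):
    resp = requests.post(url, json={"a": a, "b": b}, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

# Usage (requires a server implementing the assumed endpoint):
# product = offload_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```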
Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina
2016-10-21
In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculating free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
High-order moments of spin-orbit energy in a multielectron configuration
NASA Astrophysics Data System (ADS)
Na, Xieyu; Poirier, M.
2016-07-01
In order to analyze the energy-level distribution in complex ions such as those found in warm dense plasmas, this paper provides values for high-order moments of the spin-orbit energy in a multielectron configuration. Using second-quantization results and standard angular algebra or fully analytical expressions, explicit values are given for moments up to 10th order for the spin-orbit energy. Two analytical methods are proposed, using the uncoupled or coupled orbital and spin angular momenta. The case of multiple open subshells is considered with the help of cumulants. The proposed expressions for spin-orbit energy moments are compared to numerical computations from Cowan's code and agree with them. The convergence of the Gram-Charlier expansion involving these spin-orbit moments is analyzed. While a spectrum with infinitely thin components cannot be adequately represented by such an expansion, a suitable convolution procedure ensures the convergence of the Gram-Charlier series provided high-order terms are accounted for. A corrected analytical formula for the third-order moment involving both spin-orbit and electron-electron interactions turns out to be in fair agreement with Cowan's numerical computations.
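For context, the Gram-Charlier A series whose convergence is analyzed takes the standard form below, with coefficients built from the computed moments:

```latex
% Gram-Charlier A expansion of a level distribution about a Gaussian,
% in the reduced variable x; He_n are Hermite polynomials and the c_n
% are built from the spin-orbit energy moments computed in the paper.
\begin{equation}
  f(x) \simeq \frac{e^{-x^2/2}}{\sqrt{2\pi}}
  \left[ 1 + \sum_{n \ge 3} c_n \, \mathrm{He}_n(x) \right],
  \qquad x = \frac{E - \langle E \rangle}{\sigma}.
\end{equation}
```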
Extrinsic extinction cross-section in the multiple acoustic scattering by fluid particles
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2017-04-01
Cross-sections (and their related energy efficiency factors) are physical parameters used in the quantitative analysis of different phenomena arising from the interaction of waves with a particle (or multiple particles). Earlier works in acoustic scattering theory considered such quadratic (i.e., nonlinear) quantities for a single scatterer, although a few extended the formalism to a pair of scatterers but were limited to the scattering cross-section only. Therefore, the standard formalism applied to viscous particles is not suitable for the complete description of the cross-sections and energy balance of the multiple-particle system because both absorption and extinction phenomena arise during the multiple scattering process. Based upon the law of the conservation of energy, this work provides a complete, comprehensive analysis of the extrinsic scattering, absorption, and extinction cross-sections (i.e., in the far-field) of a pair of viscous scatterers of arbitrary shape, immersed in a nonviscous isotropic fluid. A law of acoustic extinction taking into consideration interparticle effects in wave propagation is established, which constitutes a generalized form of the optical theorem in multiple scattering. Analytical expressions for the scattering, absorption, and extinction cross-sections are derived for plane progressive waves with arbitrary incidence. The mathematical expressions are formulated in partial-wave series expansions in cylindrical coordinates involving the angle of incidence, the addition theorem for the cylindrical wave functions, and the expansion coefficients of the scatterers. The analysis shows that the multiple scattering cross-section depends upon the expansion coefficients of both scatterers in addition to an interference factor that depends on the interparticle distance. However, the extinction cross-section depends on the expansion coefficients of the scatterer located in a particular system of coordinates, in addition to the interference term. Numerical examples illustrate the analysis for two viscous fluid circular cylindrical cross-sections immersed in a non-viscous fluid. Computations for the (non-dimensional) scattering, absorption, and extinction cross-section factors are performed with particular emphasis on varying the angle of incidence, the interparticle distance, and the sizes and physical properties of the particles. A symmetric behavior is observed for the dimensionless multiple scattering cross-section, while asymmetries arise for both the dimensionless absorption and extinction cross-sections with respect to the angle of incidence. The present analysis provides a complete analytical and computational method for the prediction of cross-section and energy efficiency factors in multiple acoustic scattering of plane waves of arbitrary incidence by a pair of scatterers. The results can be used as a priori information in the direct or inverse characterization of multiple scattering systems such as acoustically engineered fluid metamaterials with reconfigurable periodicities, cloaking devices, liquid crystals, and other applications.
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-12-20
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices at transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-01-01
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices at transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180-degree turn-around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure-velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple-scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple-scale model are compared to features predicted by the traditional single-scale k-epsilon model. Tuning parameter sensitivities of the multiple-scale model applied to turn-around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn-around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
Solar radiation for Mars power systems
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Landis, Geoffrey A.
1991-01-01
Detailed information about the solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. A procedure and solar-radiation-related data from which the diurnal and daily variations of the global, direct (or beam), and diffuse insolation on Mars are calculated are presented. The radiation data are based on measured optical depth of the Martian atmosphere derived from images taken of the Sun with a special diode on the Viking Lander cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation.
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Flood, Dennis J.
1989-01-01
Detailed information on solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. Presented here are a procedure and solar-radiation-related data from which the diurnal, hourly, and daily variations of the global, direct beam, and diffuse insolation on Mars are calculated. The radiation data are based on measured optical depth of the Martian atmosphere derived from images taken of the sun with a special diode on the Viking cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation.
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Flood, Dennis J.
1990-01-01
Detailed information on solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. Presented here are a procedure and solar-radiation-related data from which the diurnal, hourly, and daily variations of the global, direct beam, and diffuse insolation on Mars are calculated. The radiation data are based on measured optical depth of the Martian atmosphere derived from images taken of the sun with a special diode on the Viking cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation.
Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2010-02-01
We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.
Kawai, Ryoko; Araki, Mitsugu; Yoshimura, Masashi; Kamiya, Narutoshi; Ono, Masahiro; Saji, Hideo; Okuno, Yasushi
2018-05-16
Development of new diagnostic imaging probes for Alzheimer's disease, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) probes, has been strongly desired. In this study, we investigated the most accessible amyloid β (Aβ) binding site of [123I]IMPY, a Thioflavin-T-derived SPECT probe, using experimental and computational methods. First, we performed a competitive inhibition assay with Orange-G, which recognizes the KLVFFA region in Aβ fibrils, suggesting that IMPY and Orange-G bind to different sites in Aβ fibrils. Next, we precisely predicted the IMPY binding site on a multiple-protofilament Aβ fibril model using computational approaches, consisting of molecular dynamics and docking simulations. We generated possible IMPY-binding structures using docking simulations to identify candidates for probe-binding sites. The binding free energy of IMPY with the Aβ fibril was calculated by a free energy simulation method, MP-CAFEE. These computational results suggest that IMPY preferentially binds to an interfacial pocket located between two protofilaments and is stabilized mainly through hydrophobic interactions. Finally, our computational approach was validated by comparing it with the experimental results. The present study demonstrates the possibility of computational approaches to screen new PET/SPECT probes for Aβ imaging.
NASA Astrophysics Data System (ADS)
Lutz, Jesse J.; Duan, Xiaofeng F.; Burggraf, Larry W.
2018-03-01
Valence excitation spectra are computed for deep-center silicon-vacancy defects in 3C, 4H, and 6H silicon carbide (SiC), and comparisons are made with literature photoluminescence measurements. Optimizations of nuclear geometries surrounding the defect centers are performed within a Gaussian basis-set framework using many-body perturbation theory or density functional theory (DFT) methods, with computational expenses minimized by a QM/MM technique called SIMOMM. Vertical excitation energies are subsequently obtained by applying excitation-energy, electron-attached, and ionized equation-of-motion coupled-cluster (EOMCC) methods, where appropriate, as well as time-dependent (TD) DFT, to small models including only a few atoms adjacent to the defect center. We consider the relative quality of various EOMCC and TD-DFT methods for (i) energy-ordering potential ground states differing incrementally in charge and multiplicity, (ii) accurately reproducing experimentally measured photoluminescence peaks, and (iii) energy-ordering defects of different types occurring within a given polytype. The extensibility of this approach to transition-metal defects is also tested by applying it to silicon-substituted chromium defects in SiC and comparing with measurements. It is demonstrated that, when used in conjunction with SIMOMM-optimized geometries, EOMCC-based methods can provide a reliable prediction of the ground-state charge and multiplicity, while also giving a quantitative description of the photoluminescence spectra, accurate to within 0.1 eV of measurement for all cases considered.
Two-body Schrödinger wave functions in a plane-wave basis via separation of dimensions
NASA Astrophysics Data System (ADS)
Jerke, Jonathan; Poirier, Bill
2018-03-01
Using a combination of ideas, the ground and several excited electronic states of the helium atom and the hydrogen molecule are computed to chemical accuracy—i.e., to within 1-2 mhartree or better. The basic strategy is very different from the standard electronic structure approach in that the full two-electron six-dimensional (6D) problem is tackled directly, rather than starting from a single-electron Hartree-Fock approximation. Electron correlation is thus treated exactly, even though computational requirements remain modest. The method also allows for exact wave functions to be computed, as well as energy levels. From the full-dimensional 6D wave functions computed here, radial distribution functions and radial correlation functions are extracted—as well as a 2D probability density function exhibiting antisymmetry for a single Cartesian component. These calculations support a more recent interpretation of Hund's rule, which states that the lower energy of the higher spin-multiplicity states is actually due to reduced screening, rather than reduced electron-electron repulsion. Prospects for larger systems and/or electron dynamics applications appear promising.
Two-body Schrödinger wave functions in a plane-wave basis via separation of dimensions.
Jerke, Jonathan; Poirier, Bill
2018-03-14
Using a combination of ideas, the ground and several excited electronic states of the helium atom and the hydrogen molecule are computed to chemical accuracy, i.e., to within 1-2 mhartree or better. The basic strategy is very different from the standard electronic structure approach in that the full two-electron six-dimensional (6D) problem is tackled directly, rather than starting from a single-electron Hartree-Fock approximation. Electron correlation is thus treated exactly, even though computational requirements remain modest. The method also allows for exact wave functions to be computed, as well as energy levels. From the full-dimensional 6D wave functions computed here, radial distribution functions and radial correlation functions are extracted, as well as a 2D probability density function exhibiting antisymmetry for a single Cartesian component. These calculations support a more recent interpretation of Hund's rule, which states that the lower energy of the higher spin-multiplicity states is actually due to reduced screening, rather than reduced electron-electron repulsion. Prospects for larger systems and/or electron dynamics applications appear promising.
Computational design of RNAs with complex energy landscapes.
Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F
2013-12-01
RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license. Copyright © 2013 Wiley Periodicals, Inc.
Ozyurt, A Sinem; Selby, Thomas L
2008-07-01
This study describes a method to computationally assess the function of homologous enzymes through small molecule binding interaction energy. Three experimentally determined X-ray structures and four enzyme models from ornithine cyclo-deaminase, alanine dehydrogenase, and mu-crystallin were used in combination with nine small molecules to derive a function score (FS) for each enzyme-model combination. While energy values varied for a single molecule-enzyme combination due to differences in the active sites, we observe that the binding energies for the entire pathway were proportional for each set of small molecules investigated. This proportionality of energies for a reaction pathway appears to be dependent on the amino acids in the active site and their direct interactions with the small molecules, which allows a function score (FS) to be calculated to assess the specificity of each enzyme. Potential of mean force (PMF) calculations were used to obtain the energies, and the resulting FS values demonstrate that a measurement of function may be obtained using differences between these PMF values. Additionally, limitations of this method are discussed based on: (a) larger substrates with significant conformational flexibility; (b) low homology enzymes; and (c) open active sites. This method should be useful in accurately predicting specificity for single enzymes that have multiple steps in their reactions and in high throughput computational methods to accurately annotate uncharacterized proteins based on active site interaction analysis. 2008 Wiley-Liss, Inc.
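The abstract does not give the exact definition of the function score, so the following is only a minimal sketch under the assumption that FS can be summarized as the proportionality (here, Pearson correlation) between a candidate enzyme's PMF-derived binding energies and a characterized reference enzyme's energies over the same set of pathway molecules; all names and numbers are illustrative, not the paper's data.

```python
import numpy as np

def function_score(candidate_energies, reference_energies):
    """Hypothetical function score: how proportional a candidate enzyme's
    binding-energy profile is to a reference enzyme's profile across the
    same set of pathway small molecules (Pearson correlation)."""
    c = np.asarray(candidate_energies, dtype=float)
    r = np.asarray(reference_energies, dtype=float)
    return np.corrcoef(c, r)[0, 1]

# Illustrative binding energies (kcal/mol) for nine pathway molecules.
reference = [-6.1, -4.3, -7.8, -5.0, -3.9, -6.6, -5.5, -4.8, -7.1]  # characterized enzyme
candidate = [-5.8, -4.0, -7.5, -4.9, -3.6, -6.2, -5.1, -4.4, -6.8]  # homology model

print(f"FS = {function_score(candidate, reference):.3f}")  # near 1.0 suggests same function
```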
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
The Power of Flexibility: Autonomous Agents That Conserve Energy in Commercial Buildings
NASA Astrophysics Data System (ADS)
Kwak, Jun-young
Agent-based systems for energy conservation are now a growing area of research in multiagent systems, with applications ranging from energy management and control on the smart grid, to energy conservation in residential buildings, to energy generation and dynamic negotiations in distributed rural communities. Contributing to this area, my thesis presents new agent-based models and algorithms aiming to conserve energy in commercial buildings. More specifically, my thesis provides three sets of algorithmic contributions. First, I provide online predictive scheduling algorithms to handle massive numbers of meeting/event scheduling requests considering flexibility, which is a novel concept for capturing generic user constraints while optimizing the desired objective. Second, I present a novel BM-MDP (Bounded-parameter Multi-objective Markov Decision Problem) model and robust algorithms for multi-objective optimization under uncertainty both at the planning and execution time. The BM-MDP model and its robust algorithms are useful in (re)scheduling events to achieve energy efficiency in the presence of uncertainty over users' preferences. Third, when multiple users contribute to energy savings, fair division of credit for such savings to incentivize users for their energy saving activities arises as an important question. I appeal to cooperative game theory and specifically to the concept of Shapley value for this fair division. Unfortunately, scaling up this Shapley value computation is a major hindrance in practice. Therefore, I present novel approximation algorithms to efficiently compute the Shapley value based on sampling and partitions and to speed up the characteristic function computation. These new models have not only advanced the state of the art in multiagent algorithms, but have actually been successfully integrated within agents dedicated to energy efficiency: SAVES, TESLA and THINC. SAVES focuses on the day-to-day energy consumption of individuals and groups in commercial buildings by reactively suggesting energy conserving alternatives. TESLA takes a long-range planning perspective and optimizes overall energy consumption of a large number of group events or meetings together. THINC provides an end-to-end integration within a single agent of energy efficient scheduling, rescheduling and credit allocation. While SAVES, TESLA and THINC thus differ in their scope and applicability, they demonstrate the utility of agent-based systems in actually reducing energy consumption in commercial buildings. I evaluate my algorithms and agents using extensive analysis on data from over 110,000 real meetings/events at multiple educational buildings including the main libraries at the University of Southern California. I also provide results on simulations and real-world experiments, clearly demonstrating the power of agent technology to assist human users in saving energy in commercial buildings.
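The thesis's own approximation algorithms are not spelled out in this abstract; the sketch below shows only the generic permutation-sampling estimator of the Shapley value that such sampling-based approaches build on. The characteristic function and player names are invented for illustration.

```python
import random

def shapley_sample(players, value, n_samples=2000):
    """Estimate Shapley values by sampling random permutations: each player's
    credit is its average marginal contribution v(S + {i}) - v(S)."""
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        random.shuffle(players)
        coalition, prev = set(), 0.0
        for p in players:
            coalition.add(p)
            v = value(frozenset(coalition))
            phi[p] += v - prev
            prev = v
    return {p: s / n_samples for p, s in phi.items()}

# Toy characteristic function: energy saved (kWh) by each set of users,
# with a small synergy bonus for coordinated behavior.
base = {"alice": 3.0, "bob": 1.0, "carol": 2.0}
value = lambda S: sum(base[p] for p in S) + (1.0 if len(S) >= 2 else 0.0)

print(shapley_sample(list(base), value))
```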
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σ_ij and ϵ_ij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
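A minimal numpy sketch of the central trick, assuming the potential splits exactly into parameter-independent basis functions: once the basis energies h_k(x_n) are stored for each sampled configuration, the reduced energies of every parameter combination follow from one matrix product, which is what a multistate estimator such as MBAR then reweights. Sizes here are scaled down from the paper's 130,000 combinations; in practice the product would be batched.

```python
import numpy as np

# Stored once per configuration x_n: basis-function energies h_k(x_n),
# e.g. the r^-12, r^-6 and Coulomb pieces split out of the potential.
n_configs, n_basis = 1000, 3
h = np.random.rand(n_configs, n_basis)        # stand-in for simulation output

# u(x; lambda) = sum_k lambda_k * h_k(x): evaluating thousands of parameter
# combinations requires no re-simulation, only a matrix product.
lambdas = np.random.rand(5000, n_basis)       # rows = parameter combinations
u = lambdas @ h.T                             # (5000, 1000) reduced energies

# These u values are what a multistate estimator such as MBAR
# (e.g. pymbar's MBAR(u_kn, N_k)) consumes to reweight between states.
print(u.shape)
```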
Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio
2016-09-30
We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.
Analyzing machupo virus-receptor binding by molecular dynamics simulations.
Meyer, Austin G; Sawyer, Sara L; Ellington, Andrew D; Wilke, Claus O
2014-01-01
In many biological applications, we would like to be able to computationally predict mutational effects on affinity in protein-protein interactions. However, many commonly used methods to predict these effects perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here an existing method applied in a novel way to a new test case; we interrogate affinity differences resulting from mutations in a host-virus protein-protein interface. We use steered molecular dynamics (SMD) to computationally pull the machupo virus (MACV) spike glycoprotein (GP1) away from the human transferrin receptor (hTfR1). We then approximate affinity using the maximum applied force of separation and the area under the force-versus-distance curve. We find, even without the rigor and planning required for free energy calculations, that these quantities can provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild type and mutant complexes. Moreover, we show that this simple SMD scheme correlates well with relative free energy differences computed via free energy perturbation. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1/hTfR1 interface, our simulations indicate that one of them may not be important for tight binding. Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our approach provides a framework to compare the effects of multiple mutations, individually and jointly, on protein-protein interactions.
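Both affinity proxies are simple to compute from a recorded pull; here is a short sketch with synthetic force data (the units and the force profile are illustrative, not from the paper):

```python
import numpy as np

def smd_affinity_proxies(distance, force):
    """Two affinity proxies from a steered-MD pull: the peak rupture force
    and the area under the force-versus-distance curve (work of separation)."""
    distance = np.asarray(distance)
    force = np.asarray(force)
    f_max = force.max()
    work = np.trapz(force, distance)  # kJ/mol if force in kJ/mol/nm, distance in nm
    return f_max, work

# Toy pull: force rises to a single rupture event, then decays.
d = np.linspace(0.0, 4.0, 400)
f = 800.0 * d * np.exp(-(d / 1.5) ** 2)
print(smd_affinity_proxies(d, f))
```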
Analyzing machupo virus-receptor binding by molecular dynamics simulations
Sawyer, Sara L.; Ellington, Andrew D.; Wilke, Claus O.
2014-01-01
In many biological applications, we would like to be able to computationally predict mutational effects on affinity in protein–protein interactions. However, many commonly used methods to predict these effects perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here an existing method applied in a novel way to a new test case; we interrogate affinity differences resulting from mutations in a host–virus protein–protein interface. We use steered molecular dynamics (SMD) to computationally pull the machupo virus (MACV) spike glycoprotein (GP1) away from the human transferrin receptor (hTfR1). We then approximate affinity using the maximum applied force of separation and the area under the force-versus-distance curve. We find, even without the rigor and planning required for free energy calculations, that these quantities can provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild type and mutant complexes. Moreover, we show that this simple SMD scheme correlates well with relative free energy differences computed via free energy perturbation. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1/hTfR1 interface, our simulations indicate that one of them may not be important for tight binding. Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our approach provides a framework to compare the effects of multiple mutations, individually and jointly, on protein–protein interactions. PMID:24624315
Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...
2010-01-01
Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In the multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.
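For the one-hop case, the heuristic places a base station at the geometric center of each cluster, i.e., the center of the cluster's minimal enclosing circle. Below is a compact Welzl-style incremental implementation of that standard algorithm (not the authors' code); sensor coordinates are illustrative.

```python
import math
import random

def _circumcircle(a, b, c):
    """Circle through three points; None if they are (nearly) collinear."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay) + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx) + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy, math.hypot(ax - ux, ay - uy))

def _inside(c, p):
    return c is not None and math.hypot(p[0] - c[0], p[1] - c[1]) <= c[2] + 1e-9

def min_enclosing_circle(points):
    """Welzl-style incremental algorithm; expected linear time after shuffling."""
    pts = list(points)
    random.shuffle(pts)
    c = None
    for i, p in enumerate(pts):
        if _inside(c, p):
            continue
        c = (p[0], p[1], 0.0)
        for j, q in enumerate(pts[:i]):
            if _inside(c, q):
                continue
            c = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2,
                 math.hypot(p[0] - q[0], p[1] - q[1]) / 2)
            for r in pts[:j]:
                if not _inside(c, r):
                    c = _circumcircle(p, q, r) or c
    return c  # (x, y, radius): the base station goes at (x, y)

sensors = [(0, 0), (4, 0), (1, 3), (2, 1)]
print(min_enclosing_circle(sensors))
```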
The Effect of Amplifier Bias Drift on Differential Magnitude Estimation in Multiple-Star Systems
NASA Astrophysics Data System (ADS)
Tyler, David W.; Muralimanohar, Hariharan; Borelli, Kathy J.
2007-02-01
We show how the temporal drift of CCD amplifier bias can cause significant relative magnitude estimation error in speckle interferometric observations of multiple-star systems. When amplifier bias varies over time, the estimation error arises if the time between acquisition of dark-frame calibration data and science data is long relative to the timescale over which the bias changes. Using analysis, we show that while detector-temperature drift over time causes a variation in accumulated dark current and a residual bias in calibrated imagery, only amplifier bias variations cause a residual bias in the estimated energy spectrum. We then use telescope data taken specifically to investigate this phenomenon to show that for the detector used, temporal bias drift can cause residual energy spectrum bias as large as or larger than the mean value of the noise energy spectrum. Finally, we use a computer simulation to demonstrate the effect of residual bias on differential magnitude estimation. A supplemental calibration technique is described in the appendices.
An on-board near-optimal climb-dash energy management
NASA Technical Reports Server (NTRS)
Weston, A. R.; Cliff, E. M.; Kelley, H. J.
1982-01-01
On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three-dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission, but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.
Provenance-aware optimization of workload for distributed data production
NASA Astrophysics Data System (ADS)
Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal
2017-10-01
Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. Having petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address well the problem complexity or are dedicated to one specific aspect of the problem only (CPU, network or storage). Previously we have developed a new job scheduling approach dedicated to distributed data production - an essential part of data processing in HENP (preprocessing in big data terminology). In this contribution, we discuss the load balancing with multiple data sources and data replication, present recent improvements made to our planner and provide results of simulations which demonstrate the advantage against standard scheduling policies for the new use case. Multiple data sources, or provenance, are common in the computing models of many applications, where the data may be copied to several destinations. The initial input data set would hence be already partially replicated to multiple locations and the task of the scheduler is to maximize overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide scope of simulations considering a realistic size of the computational Grid and various input data distributions.
Makhov, Dmitry V.; Saita, Kenichiro; Martinez, Todd J.; ...
2014-12-11
In this study, we report a detailed computational simulation of the photodissociation of pyrrole using the ab initio Multiple Cloning (AIMC) method implemented within MOLPRO. The efficiency of the AIMC implementation, employing train basis sets, linear approximation for matrix elements, and Ehrenfest configuration cloning, allows us to accumulate significant statistics. We calculate and analyze the total kinetic energy release (TKER) spectrum and Velocity Map Imaging (VMI) of pyrrole and compare the results directly with experimental measurements. Both the TKER spectrum and the structure of the velocity map image (VMI) are well reproduced. Previously, it has been assumed that the isotropic component of the VMI arises from long-time statistical dissociation. Instead, our simulations suggest that ultrafast dynamics contributes significantly to both low and high energy portions of the TKER spectrum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhov, Dmitry V.; Saita, Kenichiro; Martinez, Todd J.
In this study, we report a detailed computational simulation of the photodissociation of pyrrole using the ab initio Multiple Cloning (AIMC) method implemented within MOLPRO. The efficiency of the AIMC implementation, employing train basis sets, linear approximation for matrix elements, and Ehrenfest configuration cloning, allows us to accumulate significant statistics. We calculate and analyze the total kinetic energy release (TKER) spectrum and Velocity Map Imaging (VMI) of pyrrole and compare the results directly with experimental measurements. Both the TKER spectrum and the structure of the velocity map image (VMI) are well reproduced. Previously, it has been assumed that the isotropic component of the VMI arises from long-time statistical dissociation. Instead, our simulations suggest that ultrafast dynamics contributes significantly to both low and high energy portions of the TKER spectrum.
Optimization of design parameters of low-energy buildings
NASA Astrophysics Data System (ADS)
Vala, Jiří; Jarošová, Petra
2017-07-01
Evaluation of temperature development and related consumption of energy required for heating, air-conditioning, etc. in low-energy buildings requires the proper physical analysis, covering heat conduction, convection and radiation, including beam and diffusive components of solar radiation, on all building parts and interfaces. The system approach and the Fourier multiplicative decomposition together with the finite element technique offers the possibility of inexpensive and robust numerical and computational analysis of corresponding direct problems, as well as of the optimization ones with several design variables, using the Nelder-Mead simplex method. The practical example demonstrates the correlation between such numerical simulations and the time series of measurements of energy consumption on a small family house in Ostrov u Macochy (35 km north of Brno).
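As a rough illustration of the optimization step, here is a Nelder-Mead minimization (via SciPy) of a toy annual-energy model with two design variables; the cost model is invented for the example and stands in for the paper's finite-element simulation.

```python
import numpy as np
from scipy.optimize import minimize

def annual_energy_kwh(x):
    """Toy stand-in for the thermal simulation: predicted annual energy use
    as a function of insulation thickness [m] and window area fraction."""
    t_ins, win_frac = x
    heating = 120.0 / (1.0 + 25.0 * t_ins)   # thicker insulation -> less heating
    lighting = 15.0 / (0.1 + win_frac)       # more glazing -> less artificial light
    losses = 60.0 * win_frac                 # but more glazing -> more heat loss
    material = 200.0 * t_ins                 # material/installation cost proxy
    return 1000.0 + 10.0 * (heating + lighting + losses + material)

res = minimize(annual_energy_kwh, x0=np.array([0.1, 0.2]), method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-3})
print(res.x, res.fun)   # interior optimum near t_ins ~ 0.115 m, win_frac ~ 0.4
```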
Techniques for deriving tissue structure from multiple projection dual-energy x-ray absorptiometry
NASA Technical Reports Server (NTRS)
Feldmesser, Howard S. (Inventor); Charles, Jr., Harry K. (Inventor); Beck, Thomas J. (Inventor); Magee, Thomas C. (Inventor)
2004-01-01
Techniques for deriving bone properties from images generated by a dual-energy x-ray absorptiometry apparatus include receiving first image data having pixels indicating bone mineral density projected at a first angle of a plurality of projection angles. Second image data and third image data are also received. The second image data indicates bone mineral density projected at a different second angle. The third image data indicates bone mineral density projected at a third angle. The third angle is different from the first angle and the second angle. Principal moments of inertia for a bone in the subject are computed based on the first image data, the second image data and the third image data. The techniques allow high-precision, high-resolution dual-energy x-ray attenuation images to be used for computing principal moments of inertia and strength moduli of individual bones, plus risk of injury and changes in risk of injury to a patient.
NASA Astrophysics Data System (ADS)
Moaienla, T.; Singh, Th. David; Singh, N. Rajmuhon; Devi, M. Indira
2009-10-01
Studying the absorption difference and comparative absorption spectra of the interaction of Pr(III) and Nd(III) with L-phenylalanine, L-glycine, L-alanine and L-aspartic acid in the presence and absence of Ca2+ in organic solvents, various energy interaction parameters, such as the Slater-Condon (F_k) and Racah (E_k) parameters, the Landé factor (ξ_4f), the nephelauxetic ratio (β), the bonding parameter (b^1/2), and the percentage covalency (δ), have been evaluated by applying partial and multiple regression analysis. The values of the oscillator strength (P) and the Judd-Ofelt electric dipole intensity parameters T_λ (λ = 2, 4, 6) for different 4f-4f transitions have been computed. Analysis of the variation of the various energy interaction parameters, as well as of the changes in the oscillator strength (P) and T_λ values, reveals the mode of binding with the different ligands.
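For reference, these bonding descriptors are commonly related through the nephelauxetic ratio by the standard expressions β = F_k(complex)/F_k(free ion), b^1/2 = [(1 - β)/2]^1/2, and δ = [(1 - β)/β] × 100, as in this small check (the F2 values below are illustrative, not the paper's data):

```python
def nephelauxetic(F_complex, F_free):
    """Standard lanthanide 4f-4f bonding descriptors derived from the
    nephelauxetic ratio; beta < 1 signals a covalent (nephelauxetic) effect."""
    beta = F_complex / F_free
    b_half = ((1.0 - beta) / 2.0) ** 0.5   # bonding parameter b^(1/2)
    delta = (1.0 - beta) / beta * 100.0    # percentage covalency
    return beta, b_half, delta

# Illustrative Slater-Condon F2 values in cm^-1 for an Nd(III) complex vs. aquo ion.
print(nephelauxetic(F_complex=325.4, F_free=331.1))
```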
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
A system for spacecraft attitude control and energy storage
NASA Technical Reports Server (NTRS)
Shaughnessy, J. D.
1974-01-01
A conceptual design for a double-gimbal reaction-wheel energy-wheel device which has three-axis attitude control and electrical energy storage capability is given. A mathematical model for the three-axis gyroscope (TAG) was developed, and a system of multiple units is proposed for attitude control and energy storage for a class of spacecraft. Control laws were derived to provide the required attitude-control torques and energy transfer while minimizing functions of TAG gimbal angles, gimbal rates, reaction-wheel speeds, and energy-wheel speed differences. A control law is also presented for a magnetic torquer desaturation system. A computer simulation of a three-TAG system for an orbiting telescope was used to evaluate the concept. The results of the study indicate that all control and power requirements can be satisfied by using the TAG concept.
NASA Technical Reports Server (NTRS)
Rendell, Alistair P.; Lee, Timothy J.
1991-01-01
The analytic energy gradient for the single and double excitation coupled-cluster (CCSD) wave function has been reformulated and implemented in a new set of programs. The reformulated set of gradient equations have a smaller computational cost than any previously published. The iterative solution of the linear equations and the construction of the effective density matrices are fully vectorized, being based on matrix multiplications. The new method has been used to investigate the Cl2O2 molecule, which has recently been postulated as an important intermediate in the destruction of ozone in the stratosphere. In addition to reporting computational timings, the CCSD equilibrium geometries, harmonic vibrational frequencies, infrared intensities, and relative energetics of three isomers of Cl2O2 are presented.
NASA Astrophysics Data System (ADS)
Ghale, Purnima; Johnson, Harley T.
2018-06-01
We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
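A dense-matrix illustration of the SP2 recursion that the method builds on may help; the paper's contribution is to seed the iteration with a Chebyshev-Jackson estimate and carry it out with sparse matrix-vector products instead of the matrix-matrix products shown here. The Hamiltonian below is a toy, and the trace-correcting branch rule is one standard SP2 variant.

```python
import numpy as np

def sp2_density_matrix(H, n_electrons, tol=1e-8, max_iter=100):
    """Second-order spectral projection (SP2), dense illustration.
    Maps H's spectrum into [0, 1] and purifies toward a projector P
    with trace(P) = n_electrons (occupied states)."""
    evals = np.linalg.eigvalsh(H)          # in practice cheap Gershgorin bounds suffice
    e_min, e_max = evals[0], evals[-1]
    X = (e_max * np.eye(H.shape[0]) - H) / (e_max - e_min)  # occupied -> near 1
    for _ in range(max_iter):
        X2 = X @ X                         # the paper replaces these products with SpMVs
        t, t2 = np.trace(X), np.trace(X2)
        # Pick the branch whose trace lands closer to the target electron count.
        if abs(t2 - n_electrons) <= abs(2.0 * t - t2 - n_electrons):
            X = X2
        else:
            X = 2.0 * X - X2
        if abs(np.trace(X) - n_electrons) < tol and np.linalg.norm(X @ X - X) < 1e-6:
            break
    return X

H = np.diag([-2.0, -1.0, 0.5, 1.5]) + 0.1 * np.ones((4, 4))
P = sp2_density_matrix(H, n_electrons=2)
print(np.trace(P))   # ~2: projector onto the two occupied states
```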
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual data can be treated as a single parameter by implementing this connection into the fit directly. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow in principle fitting data from multiple data sets individually, the syntax used in these tools is not often well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape is only dependent on X-ray flux. We determine time independent parameters such as, e.g., the folding energy E_fold, with unprecedented precision.
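The idea of a simultaneous fit is independent of ISIS syntax; here is a minimal SciPy sketch in which three toy spectra share one folding energy while each keeps its own normalization (the model and numbers are invented for illustration, not GRO J1008-57 data):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy cutoff power-law spectra from three observations at different fluxes:
# per-dataset normalizations are free, but the folding energy E_fold (and
# photon index) are single shared parameters across all datasets.
def model(E, norm, gamma, e_fold):
    return norm * E ** (-gamma) * np.exp(-E / e_fold)

rng = np.random.default_rng(1)
E = np.linspace(3.0, 50.0, 60)
true = [(1.0, 1.4, 16.0), (2.5, 1.4, 16.0), (0.6, 1.4, 16.0)]
data = [model(E, *p) * rng.normal(1.0, 0.03, E.size) for p in true]

def residuals(theta):
    gamma, e_fold = theta[0], theta[1]
    norms = theta[2:]
    return np.concatenate([(d - model(E, n, gamma, e_fold)) / (0.03 * d)
                           for n, d in zip(norms, data)])

fit = least_squares(residuals, x0=[1.0, 10.0, 1.0, 1.0, 1.0])
print(fit.x)   # [gamma, E_fold, norm1, norm2, norm3]
```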
Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R
2016-06-01
Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.
NASA Astrophysics Data System (ADS)
Araki, Samuel J.
2016-11-01
In the plumes of Hall thrusters and ion thrusters, high energy ions experience elastic collisions with slow neutral atoms. These collisions involve a process of momentum exchange, altering the initial velocity vectors of the collision pair. In addition to the momentum exchange process, ions and atoms can exchange electrons, resulting in slow charge-exchange ions and fast atoms. In these simulations, it is particularly important to accurately perform computations of ion-atom elastic collisions in determining the plume current profile and assessing the integration of spacecraft components. The existing models are capable of accurate calculation but are slow enough that the collision calculation can become a bottleneck in plume simulations. This study investigates methods to accelerate an ion-atom elastic collision calculation that includes both momentum- and charge-exchange processes. The scattering angles are pre-computed through a classical approach with ab initio spin-orbit free potential and are stored in a two-dimensional array as functions of impact parameter and energy. When performing a collision calculation for an ion-atom pair, the scattering angle is computed by a table lookup and multiple linear interpolations, given the relative energy and randomly determined impact parameter. In order to further accelerate the calculations, the number of collision calculations is reduced by properly defining two cut-off cross-sections for the elastic scattering. In the MCC method, the target atom needs to be sampled; however, it is confirmed that the initial target atom velocity does not play a significant role in typical electric propulsion plume simulations, so the sampling process is unnecessary. With these implementations, the computational run-time to perform a collision calculation is reduced significantly compared to previous methods, while retaining the accuracy of the high fidelity models.
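A minimal sketch of the table lookup with bilinear interpolation described above; the grid sizes and the angle table are stand-ins, not the ab initio data.

```python
import numpy as np

def scattering_angle(table, energies, impacts, E, b):
    """Bilinear interpolation into a precomputed (impact parameter x energy)
    table of scattering angles; grids assumed uniform and ascending."""
    i = np.clip(np.searchsorted(energies, E) - 1, 0, len(energies) - 2)
    j = np.clip(np.searchsorted(impacts, b) - 1, 0, len(impacts) - 2)
    tE = (E - energies[i]) / (energies[i + 1] - energies[i])
    tb = (b - impacts[j]) / (impacts[j + 1] - impacts[j])
    return ((1 - tE) * (1 - tb) * table[j, i] + tE * (1 - tb) * table[j, i + 1]
            + (1 - tE) * tb * table[j + 1, i] + tE * tb * table[j + 1, i + 1])

# Stand-in table: angles fall off with both collision energy and impact parameter.
energies = np.linspace(0.1, 100.0, 64)    # relative collision energy (eV)
impacts = np.linspace(0.0, 20.0, 128)     # impact parameter (Angstrom)
table = 3.0 / (1.0 + np.outer(impacts, energies) ** 0.5)   # chi(b, E) in radians

b = np.random.uniform(0.0, 20.0)          # randomly determined impact parameter
print(scattering_angle(table, energies, impacts, E=10.0, b=b))
```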
Multicore Challenges and Benefits for High Performance Scientific Computing
Nielsen, Ida M. B.; Janssen, Curtis L.
2008-01-01
Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
A Machine Learning Framework to Forecast Wave Conditions
NASA Astrophysics Data System (ADS)
Zhang, Y.; James, S. C.; O'Donncha, F.
2017-12-01
Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1st, 2013 and May 31st, 2017 generating forecasts at three-hour intervals yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. This solution has obvious applications to wave-energy generation as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing" where a device could forecast its own 48-hour energy production.
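A scaled-down sketch of such a surrogate, under the assumption that the mapping can be captured by a single least-squares matrix (the paper's learned mappings need not be this simple): training fits a matrix from forcing features to the simulated wave-height field, and forecasting is then one matrix multiplication.

```python
import numpy as np

# Train: least-squares mapping from forcing inputs (winds, currents, boundary
# waves) to the SWAN-computed wave-height field. Sizes are reduced here; the
# paper used 11,078 model runs and 3,104 wave-height grid points.
rng = np.random.default_rng(0)
n_runs, n_inputs, n_grid = 2000, 40, 3104
X = rng.normal(size=(n_runs, n_inputs))                     # stand-in forcing features
W_true = rng.normal(size=(n_inputs, n_grid))
Y = X @ W_true + 0.05 * rng.normal(size=(n_runs, n_grid))   # stand-in SWAN outputs

M, *_ = np.linalg.lstsq(X, Y, rcond=None)                   # mapping matrix

# Forecast: new forcing -> full wave-height field in one matmul.
x_new = rng.normal(size=(1, n_inputs))
h_forecast = x_new @ M
print(h_forecast.shape)                                      # (1, 3104) wave heights
```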
Breakfast intake among adults with type 2 diabetes: is bigger better?
Jarvandi, Soghra; Schootman, Mario; Racette, Susan B.
2015-01-01
Objective: To assess the association between breakfast energy and total daily energy intake among individuals with type 2 diabetes. Design: Cross-sectional study. Daily energy intake was computed from a 24-h dietary recall. Multiple regression models were used to estimate the association between daily energy intake (dependent variable) and quartiles of energy intake at breakfast (independent variable) expressed as either absolute or relative (% of total daily energy intake) terms. Orthogonal polynomial contrasts were used to test for linear and quadratic trends. Models were controlled for sex, age, race/ethnicity, body mass index, physical activity and smoking. In addition, we used separate multiple regression models to test the effect of quartiles of absolute and relative breakfast energy on intake at lunch, dinner, and snacks. Setting: The 1999–2004 National Health and Nutrition Examination Survey (NHANES). Subjects: Participants aged ≥ 30 years with self-reported history of diabetes (N = 1,146). Results: Daily energy intake increased as absolute breakfast energy intake increased (linear trend, P < 0.0001; quadratic trend, P = 0.02), but decreased as relative breakfast energy intake increased (linear trend, P < 0.0001). In addition, while higher quartiles of absolute breakfast intake had no associations with energy intake at subsequent meals, higher quartiles of relative breakfast intake were associated with lower energy intake during all subsequent meals and snacks (P < 0.05). Conclusions: Consuming a breakfast that provided less energy or comprised a greater proportion of daily energy intake was associated with lower total daily energy intake in adults with type 2 diabetes. PMID:25529061
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thoreson, Gregory G
PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
To develop a medical ultrasound (US) simulation method using T1-weighted magnetic resonance images (MRI) as the input that offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased array transducer probe. A domain model is built from input MR images. Multiple virtual acoustic rays emerge from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusion of the reflected energy along multiple rays from multiple transducers, while phase delays due to differences in distances to transducers are taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of generating viewpoint-dependent realistic US images with an inherent Rician distributed speckle pattern automatically. The proposed simulator can reproduce the shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We have also presented preliminary results towards the utilization of the method for real-time simulations. The proposed method offers a low-cost near-real-time wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
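A minimal sketch of the per-ray energy bookkeeping such a simulator performs, using standard normal-incidence intensity coefficients; the fusion across rays and transducers with phase delays, attenuation, and reverberation are omitted here, and the impedance values are illustrative.

```python
import numpy as np

def ray_reflections(impedances, energy_in=1.0):
    """Normal-incidence intensity coefficients at each interface along one ray:
    R = ((Z2 - Z1)/(Z2 + Z1))^2 is reflected, (1 - R) transmitted onward.
    Returns the reflected energy from each interface (no attenuation,
    no multiple reverberations, for brevity)."""
    Z = np.asarray(impedances, dtype=float)
    out, e = [], energy_in
    for z1, z2 in zip(Z[:-1], Z[1:]):
        R = ((z2 - z1) / (z2 + z1)) ** 2
        out.append(e * R)       # echo energy from this interface
        e *= (1.0 - R)          # remaining energy continues down the ray
    return np.array(out)

# Acoustic impedances (MRayl), e.g. fat -> liver -> bone along one ray.
print(ray_reflections([1.38, 1.65, 7.8]))
```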
A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization
Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar
2015-01-01
Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, solutions that satisfy both accuracy and system-complexity requirements have not been found. From the perspective of lightweight mobile devices, these are extremely important characteristics, because both processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it has high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points and device orientation information from a digital compass built into the mobile device, so that extra sensors are unnecessary. Experimental results indicate that the proposed system leads to substantial improvements in computational complexity over the widely-used traditional fingerprinting methods, and it achieves better accuracy than they do. PMID:26110413
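A minimal sketch of the boosted-decision-tree idea with scikit-learn on synthetic RSSI-plus-heading fingerprints; the paper's exact features, weighting scheme, and data differ.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Fingerprints: RSSI (dBm) from 5 access points plus device heading (degrees);
# labels are room/zone identifiers. Values below are synthetic stand-ins.
rng = np.random.default_rng(7)
centers = rng.uniform(-90, -40, size=(4, 5))               # 4 zones, 5 APs
X = np.vstack([c + rng.normal(0, 4, size=(50, 5)) for c in centers])
X = np.hstack([X, rng.uniform(0, 360, size=(200, 1))])     # append heading
y = np.repeat(np.arange(4), 50)

# Boosted ensemble of shallow, weighted decision trees: accurate yet cheap to
# evaluate on a phone, since prediction is just a few tree traversals.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50)
clf.fit(X, y)
print(clf.score(X, y))
```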
Akhter, Nasrin; Shehu, Amarda
2018-01-19
Due to the essential role that the three-dimensional conformation of a protein plays in regulating interactions with molecular partners, wet and dry laboratories seek biologically-active conformations of a protein to decode its function. Computational approaches are gaining prominence due to the labor and cost demands of wet laboratory investigations. Template-free methods can now compute thousands of conformations known as decoys, but selecting native conformations from the generated decoys remains challenging. Repeatedly, research has shown that the protein energy functions whose minima are sought in the generation of decoys are unreliable indicators of nativeness. The prevalent approach ignores energy altogether and clusters decoys by conformational similarity. Complementary recent efforts design protein-specific scoring functions or train machine learning models on labeled decoys. In this paper, we show that an informative consideration of energy can be carried out under the energy landscape view. Specifically, we leverage local structures known as basins in the energy landscape probed by a template-free method. We propose and compare various strategies of basin-based decoy selection that we demonstrate are superior to clustering-based strategies. The presented results point to further directions of research for improving decoy selection, including the ability to properly consider the multiplicity of native conformations of proteins.
First-principles data-driven discovery of transition metal oxides for artificial photosynthesis
NASA Astrophysics Data System (ADS)
Yan, Qimin
We develop a first-principles data-driven approach for rapid identification of transition metal oxide (TMO) light absorbers and photocatalysts for artificial photosynthesis using the Materials Project. Initially focusing on Cr, V, and Mn-based ternary TMOs in the database, we design a broadly-applicable multiple-layer screening workflow automating density functional theory (DFT) and hybrid functional calculations of bulk and surface electronic and magnetic structures. We further assess the electrochemical stability of TMOs in aqueous environments from computed Pourbaix diagrams. Several promising earth-abundant low band-gap TMO compounds with desirable band edge energies and electrochemical stability are identified by our computational efforts and then synergistically evaluated using high-throughput synthesis and photoelectrochemical screening techniques by our experimental collaborators at Caltech. Our joint theory-experiment effort has successfully identified new earth-abundant copper and manganese vanadate complex oxides that meet highly demanding requirements for photoanodes, substantially expanding the known space of such materials. By integrating theory and experiment, we validate our approach and develop important new insights into structure-property relationships for TMOs for oxygen evolution photocatalysts, paving the way for use of first-principles data-driven techniques in future applications. This work is supported by the Materials Project Predictive Modeling Center and the Joint Center for Artificial Photosynthesis through the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231. Computational resources also provided by the Department of Energy through the National Energy Research Scientific Computing Center.
Computer display and manipulation of biological molecules
NASA Technical Reports Server (NTRS)
Coeckelenbergh, Y.; Macelroy, R. D.; Hart, J.; Rein, R.
1978-01-01
This paper describes a computer model that was designed to investigate the conformation of molecules, macromolecules and subsequent complexes. Utilizing an advanced 3-D dynamic computer display system, the model is sufficiently versatile to accommodate a large variety of molecular input and to generate data for multiple purposes such as visual representation of conformational changes, and calculation of conformation and interaction energy. Molecules can be built on the basis of several levels of information. These include the specification of atomic coordinates and connectivities and the grouping of building blocks and duplicated substructures using symmetry rules found in crystals and polymers such as proteins and nucleic acids. Called AIMS (Ames Interactive Molecular modeling System), the model is now being used to study pre-biotic molecular evolution toward life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek
Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...
2017-04-24
Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
NASA Astrophysics Data System (ADS)
Frew, E.; Argrow, B. M.; Houston, A. L.; Weiss, C.
2014-12-01
The energy-aware airborne dynamic, data-driven application system (EA-DDDAS) performs persistent sampling in complex atmospheric conditions by exploiting wind energy using the dynamic data-driven application system paradigm. The main challenge for future airborne sampling missions is operation with tight integration of physical and computational resources over wireless communication networks, in complex atmospheric conditions. The physical resources considered here include sensor platforms, particularly mobile Doppler radar and unmanned aircraft, the complex conditions in which they operate, and the region of interest. Autonomous operation requires distributed computational effort connected by layered wireless communication. Onboard decision-making and coordination algorithms can be enhanced by atmospheric models that assimilate input from physics-based models and wind fields derived from multiple sources. These models are generally too complex to be run onboard the aircraft, so they need to be executed in ground vehicles in the field, and connected over broadband or other wireless links back to the field. Finally, the wind field environment drives strong interaction between the computational and physical systems, both as a challenge to autonomous path planning algorithms and as a novel energy source that can be exploited to improve system range and endurance. Implementation details of a complete EA-DDDAS will be provided, along with preliminary flight test results targeting coherent boundary-layer structures.
NASA Astrophysics Data System (ADS)
Koehl, Patrice; Orland, Henri; Delarue, Marc
2011-08-01
We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for vdW and electrostatics interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
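A bare-bones sketch of the self-consistent weight update at the heart of such mean-field methods, without the Poisson-Boltzmann solvation term the paper adds; the energies are random stand-ins and the damping factor is a common stabilization choice, not necessarily the authors'.

```python
import numpy as np

def scmf_weights(E_self, E_pair, kT=0.593, n_iter=50, damp=0.5):
    """Self-consistent mean-field refinement of side-chain copy weights.
    E_self[i, a]: rotamer a of residue i vs. the fixed backbone;
    E_pair[i, a, j, b]: rotamer-rotamer interaction energies."""
    n_res, n_rot = E_self.shape
    w = np.full((n_res, n_rot), 1.0 / n_rot)
    for _ in range(n_iter):
        # Mean-field energy of each copy against the weighted ensemble of others.
        E_mf = E_self + np.einsum('iajb,jb->ia', E_pair, w)
        E_mf -= np.einsum('iaib,ib->ia', E_pair, w)   # remove self-residue term
        new = np.exp(-(E_mf - E_mf.min(axis=1, keepdims=True)) / kT)
        new /= new.sum(axis=1, keepdims=True)         # Boltzmann-normalized weights
        w = damp * w + (1.0 - damp) * new             # damped update for stability
    return w

rng = np.random.default_rng(3)
E_self = rng.normal(0, 1, (5, 4))                     # 5 residues, 4 copies each
E_pair = rng.normal(0, 0.3, (5, 4, 5, 4))
w = scmf_weights(E_self, E_pair)
print(w.argmax(axis=1))                                # predicted rotamer per residue
```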
System Control Applications of Low-Power Radio Frequency Devices
NASA Astrophysics Data System (ADS)
van Rensburg, Roger
2017-09-01
This paper conceptualizes a low-power wireless sensor network designed to reduce theft of portable computer devices used in educational institutions today. The aim of this study is to design and develop a reliable and robust wireless network that can eradicate accessibility of a device's human interface. An embedded system supplied by an energy harvesting source, installed on the portable computer device, may represent one of multiple slave nodes which request regular updates from a standalone master station. A portable computer device which is operated in an undesignated area or in a field perimeter where master-to-slave communication is restricted, indicating a possible theft scenario, will initiate a shutdown of its operating system and render the device unusable. Consequently, an algorithm in the device firmware may ensure the necessary steps are executed to track the device, irrespective of whether the device is enabled. Design outcomes thus far indicate that a wireless network using low-power embedded hardware is feasible for anti-theft applications. By incorporating one of the latest Bluetooth low-energy, ANT+, ZigBee or Thread wireless technologies, an anti-theft system may be implemented that has the potential to reduce major portable computer device theft in institutions of digitized learning.
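A sketch of the slave-node watchdog logic implied by the master-to-slave update scheme above; all names, the timeout value, and the lockdown behavior are hypothetical illustrations, not the paper's firmware.

```python
import time

BEACON_TIMEOUT_S = 30          # hypothetical: max silence before lockdown
last_beacon = time.monotonic()

def on_master_beacon(authenticated: bool) -> None:
    """Called by the radio stack whenever a master-station update arrives."""
    global last_beacon
    if authenticated:
        last_beacon = time.monotonic()

def watchdog_tick() -> None:
    """Periodic check on the slave node: silence from the master for too long
    (out of range, jammed, or shielded) suggests the device left the perimeter."""
    if time.monotonic() - last_beacon > BEACON_TIMEOUT_S:
        initiate_lockdown()

def initiate_lockdown() -> None:
    # Placeholder: real firmware would disable the human interface, shut down
    # the operating system, and keep beaconing its position for tracking.
    print("lockdown: disabling human interface and shutting down OS")
```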
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on the operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale
Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason
2017-01-01
With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049
Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale.
Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason
2016-10-01
With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems, like Apache Spark and Hadoop, to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.
Monte Carlo explicitly correlated second-order many-body perturbation theory
NASA Astrophysics Data System (ADS)
Johnson, Cole M.; Doran, Alexander E.; Zhang, Jinmei; Valeev, Edward F.; Hirata, So
2016-10-01
A stochastic algorithm is proposed and implemented that computes a basis-set-incompleteness (F12) correction to an ab initio second-order many-body perturbation energy as a short sum of 6- to 15-dimensional integrals of Gaussian-type orbitals, an explicit function of the electron-electron distance (geminal), and its associated excitation amplitudes held fixed at the values suggested by Ten-no. The integrals are directly evaluated (without a resolution-of-the-identity approximation or an auxiliary basis set) by the Metropolis Monte Carlo method. Applications of this method to 17 molecular correlation energies and 12 gas-phase reaction energies reveal that both the nonvariational and variational formulas for the correction give reliable correlation energies (98% or higher) and reaction energies (within 2 kJ mol⁻¹ with a smaller statistical uncertainty) near the complete-basis-set limits by using just the aug-cc-pVDZ basis set. The nonvariational formula is found to be 2-10 times less expensive to evaluate than the variational one, though the latter yields energies that are bounded from below and is, therefore, slightly but systematically more accurate for energy differences. Being capable of using virtually any geminal form, the method confirms the best overall performance of the Slater-type geminal among 6 forms satisfying the same cusp conditions. Not having to precompute lower-dimensional integrals analytically, to store them on disk, or to transform them in a nonscalable dense-matrix-multiplication algorithm, the method scales favorably with both system size and computer size; the cost increases only as O(n⁴) with the number of orbitals (n), and its parallel efficiency reaches 99.9% of the ideal case on going from 16 to 4096 computer processors.
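The core idea of evaluating a high-dimensional integral directly by Metropolis Monte Carlo can be sketched as follows; the integrand, sampling density and step size are toy choices, not the correlated walkers and weight functions of the MC-MP2-F12 method:

```python
import numpy as np

def metropolis_integral(f, p, dim, n_samples=200_000, step=0.5, seed=1):
    """Estimate I = integral of f over R^dim as the sample mean of f(x)/p(x),
    with x drawn from the normalized density p by a Metropolis random walk."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    px = p(x)
    total = 0.0
    for _ in range(n_samples):
        trial = x + step * rng.normal(size=dim)
        pt = p(trial)
        if pt / px > rng.random():          # Metropolis accept/reject
            x, px = trial, pt
        total += f(x) / px                  # importance-sampling estimator
    return total / n_samples

dim = 6
p = lambda x: np.exp(-0.5 * x @ x) / (2 * np.pi) ** (dim / 2)  # normalized Gaussian
f = lambda x: np.exp(-(x @ x))              # exact 6-D integral: pi**3 ~ 31.01
print(metropolis_integral(f, p, dim))
```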
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
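A toy version of the learn-then-detect scheme described above, assuming a single channel and a simple empirical tail-probability threshold in place of the full multi-channel Bayesian analysis:

```python
import numpy as np

class BackgroundMonitor:
    """Learning mode accumulates background count-rate samples into an
    empirical (nonparametric) distribution; detection mode flags a new
    measurement whose count rate would be rare under that distribution.
    The false-alarm rate alpha is an illustrative choice."""

    def __init__(self, alpha=1e-3):
        self.alpha = alpha
        self.samples = []

    def learn(self, count_rate):
        self.samples.append(count_rate)      # update background characterization

    def detect(self, count_rate):
        bg = np.asarray(self.samples)
        p_tail = (bg >= count_rate).mean()   # empirical P(background >= observed)
        return p_tail < self.alpha           # True -> possible source present

monitor = BackgroundMonitor()
rng = np.random.default_rng(2)
for c in rng.poisson(100, size=10_000):      # learning mode on synthetic background
    monitor.learn(c)
print(monitor.detect(160))                   # detection mode: elevated count rate
```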
Multi-dimensional computer simulation of MHD combustor hydrodynamics
NASA Astrophysics Data System (ADS)
Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.
1991-04-01
Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.
Unlocking Flexibility: Integrated Optimization and Control of Multienergy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Mancarella, Pierluigi; Monti, Antonello
Electricity, natural gas, water, and district heating/cooling systems are predominantly planned and operated independently. However, it is increasingly recognized that integrated optimization and control of such systems at multiple spatiotemporal scales can bring significant socioeconomic, operational efficiency, and environmental benefits. Accordingly, the concept of the multi-energy system is gaining considerable attention, with the overarching objectives of 1) uncovering fundamental gains (and potential drawbacks) that emerge from the integrated operation of multiple systems and 2) developing holistic yet computationally affordable optimization and control methods that maximize operational benefits, while 3) acknowledging intrinsic interdependencies and quality-of-service requirements for each provider.
Statistical hadronization and microcanonical ensemble
Becattini, F.; Ferroni, L.
2004-01-01
We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy. This algorithm opens the way for event generators based on the statistical hadronization model.
DEEP: Database of Energy Efficiency Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon
A database of energy efficiency performance (DEEP) is a presimulated database enabling quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 10 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six vintages of construction and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air conditioning, plug loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center (NERSC) of Lawrence Berkeley National Laboratory. The presimulated database is part of a CEC PIER project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The presimulated database and associated comprehensive measure analysis enhance the ability to assess retrofits that reduce energy use in small and medium buildings, whose owners typically do not have the resources to conduct a costly building energy audit.
Local Gate Control of a Carbon Nanotube Double Quantum Dot
Mason, N.; Biercuk, M. J.; Marcus, C. M.
2016-04-04
We have measured carbon nanotube quantum dots with multiple electrostatic gates and...computation. Carbon nanotubes have been considered leading candidates for nanoscale electronic applications (1, 2). Previous measurements of nanotube...electronics have shown electron confinement (quantum dot) effects such as single-electron charging and energy-level quantization (3–5).
NASA Astrophysics Data System (ADS)
Rizzatti, Eduardo O.; Barbosa, Marco Aurélio A.; Barbosa, Marcia C.
2018-02-01
The pressure versus temperature phase diagram of a system of particles interacting through a multiscale shoulder-like potential is exactly computed in one dimension. The N-shoulder potential exhibits N density anomaly regions in the phase diagram if the length scales can be connected by a convex curve. The result is analyzed in terms of the convexity of the Gibbs free energy.
Kumari, Sudesh; Roudjane, Mourad; Hewage, Dilrukshi; Liu, Yang; Yang, Dong-Sheng
2013-04-28
Cerium, praseodymium, and neodymium complexes of 1,3,5,7-cyclooctatetraene (COT) were produced in a laser-vaporization metal cluster source and studied by pulsed-field ionization zero electron kinetic energy spectroscopy and quantum chemical calculations. The computations included the second-order Møller-Plesset perturbation theory, the coupled cluster method with single, double, and perturbative triple excitations, and the state-average complete active space self-consistent field method. The spectrum of each complex exhibits multiple band systems and is assigned to ionization of several low-energy electronic states of the neutral complex. This observation is different from previous studies of M(COT) (M = Sc, Y, La, and Gd), for which a single band system was observed. The presence of the multiple low-energy electronic states is caused by the splitting of the partially filled lanthanide 4f orbitals in the ligand field, and the number of the low-energy states increases rapidly with increasing number of the metal 4f electrons. On the other hand, the 4f electrons have a small effect on the geometries and vibrational frequencies of these lanthanide complexes.
Faller, Christina E; Raman, E Prabhu; MacKerell, Alexander D; Guvench, Olgun
2015-01-01
Fragment-based drug design (FBDD) involves screening low molecular weight molecules ("fragments") that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind nonoverlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy. The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is "soaked" in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called "FragMaps" can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine "Grid Free Energies (GFEs)," which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities.
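A minimal sketch of the FragMap-to-GFE conversion and the LGFE sum over ligand atoms, assuming a Boltzmann relation GFE = -kT ln(P/P_bulk); the nearest-voxel lookup, pseudo-count, and atom typing are simplifications of the published scheme:

```python
import numpy as np

KT = 0.593  # kcal/mol at ~298 K

def grid_free_energy(frag_counts, bulk_count):
    """Convert a fragment-occupancy FragMap (3D histogram of fragment-atom
    positions) into Grid Free Energies relative to bulk solution."""
    prob_ratio = np.maximum(frag_counts, 0.5) / bulk_count  # pseudo-count avoids log(0)
    return -KT * np.log(prob_ratio)

def ligand_grid_free_energy(gfe_maps, atom_positions, atom_types, origin, spacing):
    """LGFE = sum over ligand atoms of the matching map's GFE at the voxel
    each atom occupies (nearest-voxel lookup)."""
    idx = np.floor((atom_positions - origin) / spacing).astype(int)
    return sum(gfe_maps[t][tuple(i)] for t, i in zip(atom_types, idx))

# Example: one map type on a 20^3 grid with 1 A voxels, synthetic counts
rng = np.random.default_rng(7)
maps = {'apolar': grid_free_energy(rng.poisson(5.0, (20, 20, 20)), bulk_count=5.0)}
lig_xyz = np.array([[3.2, 4.1, 5.0], [4.0, 4.9, 5.5]])
lgfe = ligand_grid_free_energy(maps, lig_xyz, ['apolar', 'apolar'],
                               origin=np.zeros(3), spacing=1.0)
```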
Computational membrane biophysics: From ion channel interactions with drugs to cellular function.
Miranda, Williams E; Ngo, Van A; Perissinotti, Laura L; Noskov, Sergei Yu
2017-11-01
The rapid development of experimental and computational techniques has changed fundamentally our understanding of cellular-membrane transport. The advent of powerful computers and refined force-fields for proteins, ions, and lipids has expanded the applicability of Molecular Dynamics (MD) simulations. A myriad of cellular responses is modulated through the binding of endogenous and exogenous ligands (e.g. neurotransmitters and drugs, respectively) to ion channels. Deciphering the thermodynamics and kinetics of the ligand binding processes to these membrane proteins is at the heart of modern drug development. The ever-increasing computational power has already provided insightful data on the thermodynamics and kinetics of drug-target interactions, free energies of solvation, and partitioning into lipid bilayers for drugs. This review aims to provide a brief summary about modeling approaches to map out crucial binding pathways with intermediate conformations and free-energy surfaces for drug-ion channel binding mechanisms that are responsible for multiple effects on cellular functions. We will discuss post-processing analysis of simulation-generated data, which are then transformed to kinetic models to better understand the molecular underpinning of the experimental observables under the influence of drugs or mutations in ion channels. This review highlights crucial mathematical frameworks and perspectives on bridging different well-established computational techniques to connect the dynamics and timescales from all-atom MD and free energy simulations of ion channels to the physiology of action potentials in cellular models. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman.
Numerical analysis of the photo-injection time-of-flight curves in molecularly doped polymers
NASA Astrophysics Data System (ADS)
Tyutnev, A. P.; Ikhsanov, R. Sh.; Saenko, V. S.; Nikerov, D. V.
2018-03-01
We have performed a numerical analysis of charge carrier transport in a specific molecularly doped polymer using the multiple trapping model. The computations covered a wide range of applied electric fields, temperatures and, most importantly, initial energies of the photo-injected one-sign carriers (in our case, holes). Special attention has been given to the comparison of time-of-flight curves measured by the photo-injection and radiation-induced techniques, which has led to a problematic situation concerning the interpretation of the experimental data. Computational results have been compared with both analytical and experimental results available in the literature.
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up tables to perform the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
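A sketch of reduced-resolution DCT feature extraction in floating point; n_keep and the window size are illustrative, and the paper's dual look-up-table hardware replaces the multiplications that scipy performs here:

```python
import numpy as np
from scipy.fft import dct

def dct_features(window, n_keep=4):
    """Keep only the first few DCT coefficients of each channel's time
    window as features (the 'reduced resolution' idea)."""
    coeffs = dct(window, type=2, axis=-1, norm='ortho')
    return coeffs[..., :n_keep]             # low-frequency coefficients only

# Example: 16 ECoG channels, 64-sample windows -> 16 x 4 feature matrix
rng = np.random.default_rng(3)
ecog = rng.normal(size=(16, 64))
features = dct_features(ecog)
```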
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
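One simple stand-in for the learning-aggregation step is to weight each ensemble member by its inverse mean-squared error on past observations; this is an illustrative rule, not necessarily the aggregation method used in the study:

```python
import numpy as np

def aggregate_forecasts(past_forecasts, past_obs, new_forecasts):
    """Weight each ensemble member by its inverse MSE on past data, then
    return the weighted 'best-estimate' combination of the new forecasts.

    past_forecasts: (n_models, n_times); past_obs: (n_times,)
    new_forecasts:  (n_models, n_new) -> returns (n_new,)
    """
    mse = np.mean((past_forecasts - past_obs) ** 2, axis=1)
    w = 1.0 / np.maximum(mse, 1e-12)
    w /= w.sum()
    return w @ new_forecasts                # aggregated wave-height forecast

# Example with three synthetic SWAN-like ensemble members
rng = np.random.default_rng(4)
truth = rng.normal(1.5, 0.3, size=200)      # wave heights (m), synthetic
members = truth + rng.normal(0, [[0.2], [0.4], [0.6]], size=(3, 200))
best = aggregate_forecasts(members[:, :150], truth[:150], members[:, 150:])
```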
Ghandehari, Masoud; Emig, Thorsten; Aghamohamadnia, Milad
2018-02-02
Despite decades of research seeking to derive the urban energy budget, the dynamics of thermal exchange in the densely constructed environment is not yet well understood. Using New York City as a study site, we present a novel hybrid experimental-computational approach for a better understanding of the radiative heat transfer in complex urban environments. The aim of this work is to contribute to the calculation of the urban energy budget, particularly the stored energy. We will focus our attention on surface thermal radiation. Improved understanding of urban thermodynamics incorporating the interaction of various bodies, particularly in high-rise cities, will have implications for energy conservation at the building scale, and for human health and comfort at the urban scale. The platform presented is based on longwave hyperspectral imaging of nearly 100 blocks of Manhattan, in addition to a geospatial radiosity model that describes the collective radiative heat exchange between multiple buildings. Despite assumptions in the surface emissivity and thermal conductivity of building walls, the close comparison of temperatures derived from measurements and computations is promising. Results imply that the presented geospatial thermodynamic model of urban structures can enable accurate and high-resolution analysis of instantaneous urban surface temperatures.
Systematic size study of an insect antifreeze protein and its interaction with ice.
Liu, Kai; Jia, Zongchao; Chen, Guangju; Tung, Chenho; Liu, Ruozhuang
2005-02-01
Because of their remarkable ability to depress the freezing point of aqueous solutions, antifreeze proteins (AFPs) play a critical role in helping many organisms survive subzero temperatures. The beta-helical insect AFP structures solved to date, consisting of multiple repeating circular loops or coils, are perhaps the most regular protein structures discovered thus far. Taking an exceptional advantage of the unusually high structural regularity of insect AFPs, we have employed both semiempirical and quantum mechanics computational approaches to systematically investigate the relationship between the number of AFP coils and the AFP-ice interaction energy, an indicator of antifreeze activity. We generated a series of AFP models with varying numbers of 12-residue coils (sequence TCTxSxxCxxAx) and calculated their interaction energies with ice. Using several independent computational methods, we found that the AFP-ice interaction energy increased as the number of coils increased, until an upper bound was reached. The increase of interaction energy was significant for each of the first five coils, and there was a clear synergism that gradually diminished and even decreased with further increase of the number of coils. Our results are in excellent agreement with the recently reported experimental observations.
Systematic Size Study of an Insect Antifreeze Protein and Its Interaction with Ice
Liu, Kai; Jia, Zongchao; Chen, Guangju; Tung, Chenho; Liu, Ruozhuang
2005-01-01
Because of their remarkable ability to depress the freezing point of aqueous solutions, antifreeze proteins (AFPs) play a critical role in helping many organisms survive subzero temperatures. The β-helical insect AFP structures solved to date, consisting of multiple repeating circular loops or coils, are perhaps the most regular protein structures discovered thus far. Taking an exceptional advantage of the unusually high structural regularity of insect AFPs, we have employed both semiempirical and quantum mechanics computational approaches to systematically investigate the relationship between the number of AFP coils and the AFP-ice interaction energy, an indicator of antifreeze activity. We generated a series of AFP models with varying numbers of 12-residue coils (sequence TCTxSxxCxxAx) and calculated their interaction energies with ice. Using several independent computational methods, we found that the AFP-ice interaction energy increased as the number of coils increased, until an upper bound was reached. The increase of interaction energy was significant for each of the first five coils, and there was a clear synergism that gradually diminished and even decreased with further increase of the number of coils. Our results are in excellent agreement with the recently reported experimental observations. PMID:15713600
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of automatically locating human faces in a video signal. It has several applications: face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are further selected manually for processing. However, several factors, such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions and compression artifacts, make face detection difficult. This paper reports an algorithm for conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
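A minimal OpenCV sketch of the suggested scheme, dimming the grayscale frame and equalizing only the detected face regions; the dimming factor and detector settings are illustrative choices, not parameters from the paper:

```python
import cv2
import numpy as np

# Haar-cascade face detector shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
DIM_FACTOR = 0.4   # illustrative brightness reduction for non-face pixels

def energy_saving_frame(gray):
    """Dim the whole frame, then restore and histogram-equalize only the
    detected face regions, following the scheme described above."""
    out = (gray * DIM_FACTOR).astype(np.uint8)        # reduce overall brightness
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        out[y:y + h, x:x + w] = cv2.equalizeHist(gray[y:y + h, x:x + w])
    return out
```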
Hyperswitch Communication Network Computer
NASA Technical Reports Server (NTRS)
Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.
1993-01-01
Hyperswitch Communications Network (HCN) computer is prototype multiple-processor computer being developed. Incorporates improved version of hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905). Designed to support high-level software and expansion of itself. HCN computer is message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers, with respect to price/performance ratio, reliability, availability, and manufacturing. Design of HCN operating-system software provides flexible computing environment accommodating both parallel and distributed processing. Also achieves balance among following competing factors; performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.
A Framework to Improve Energy Efficient Behaviour at Home through Activity and Context Monitoring
García, Óscar; Alonso, Ricardo S.; Corchado, Juan M.
2017-01-01
Real-time Localization Systems have been postulated as one of the most appropriate technologies for the development of applications that provide customized services. These systems provide us with the ability to locate and trace users and, among other features, they help identify behavioural patterns and habits. Moreover, the implementation of policies that will foster energy saving in homes is a complex task that involves the use of this type of system. Although there are multiple proposals in this area, the implementation of frameworks that combine technologies and use Social Computing to influence user behaviour has not yet reached any significant savings in terms of energy. In this work, the CAFCLA framework (Context-Aware Framework for Collaborative Learning Applications) is used to develop a recommendation system for home users. The proposed system integrates a Real-Time Localization System and Wireless Sensor Networks, making it possible to develop applications that work under the umbrella of Social Computing. The implementation of an experimental use case aided efficient energy use, achieving savings of 17%. Moreover, the conducted case study pointed to the possibility of attaining good energy consumption habits in the long term. This can be done thanks to the system’s real time and historical localization, tracking and contextual data, based on which customized recommendations are generated. PMID:28758987
Yang, Changwon; Kim, Eunae; Pak, Youngshang
2015-01-01
Hoogsteen (HG) base pairing plays a central role in the DNA binding of proteins and small ligands. Probing the detailed transition mechanism from Watson–Crick (WC) to HG base pair (bp) formation in duplex DNAs is of fundamental importance in terms of revealing the intrinsic functions of double-helical DNAs beyond their sequence-determined functions. We investigated a free energy landscape of a free B-DNA with an adenosine–thymine (A–T) rich sequence to probe its conformational transition pathways from WC to HG base pairing. The free energy landscape was computed with a state-of-the-art two-dimensional umbrella molecular dynamics simulation at the all-atom level. The present simulation showed that in an isolated duplex DNA, the spontaneous transition from WC to HG bp takes place via multiple pathways. Notably, base flipping into the major and minor grooves was found to play an important role in forming these multiple transition pathways. This finding suggests that naked B-DNA under normal conditions has an inherent ability to form HG bps via spontaneous base opening events. PMID:26250116
NASA Astrophysics Data System (ADS)
Song, Li-Hua; Xin, Shang-Fei; Liu, Na
2018-02-01
Semi-inclusive deep inelastic lepton-nucleus scattering provides a good opportunity to investigate the cold nuclear effects on quark propagation and hadronization. Considering the nuclear modification of the quark energy loss and the nuclear absorption effects in the final state, leading-order computations of hadron multiplicity ratios for hadronization occurring both outside and inside the medium are performed with the nuclear geometry effect of the path length L of the struck quark in the medium. By fitting the HERMES two-dimensional data on the multiplicity ratios for positively and negatively charged pions and kaons produced on neon, the hadron-nucleon inelastic cross section σ_h for the different identified hadrons is determined, respectively. It is found that our predictions obtained with the analytic parameterizations of quenching weights based on the BDMPS formalism and the nuclear absorption factor N_A(z, ν) are in good agreement with the experimental measurements. This indicates that energy loss and nuclear absorption are the main nuclear effects inducing a reduction of the hadron yield for quark hadronization occurring outside and inside the nucleus, respectively.
Fernández-Varea, J M; Andreo, P; Tabata, T
1996-07-01
Average penetration depths and detour factors of 1-50 MeV electrons in water and plastic materials have been computed by means of analytical calculation, within the continuous-slowing-down approximation and including multiple scattering, and using the Monte Carlo codes ITS and PENELOPE. Results are compared to detour factors from alternative definitions previously proposed in the literature. Different procedures used in low-energy electron-beam dosimetry to convert ranges and depths measured in plastic phantoms into water-equivalent ranges and depths are analysed. A new simple and accurate scaling method, based on Monte Carlo-derived ratios of average electron penetration depths and thus incorporating the effect of multiple scattering, is presented. Data are given for most plastics used in electron-beam dosimetry together with a fit which extends the method to any other low-Z plastic material. A study of scaled depth-dose curves and mean energies as a function of depth for some plastics of common usage shows that the method improves the consistency and results of other scaling procedures in dosimetry with electron beams at therapeutic energies.
Remote sensing of a coupled carbon-water-energy-radiation balances from the Globe to plot scales
NASA Astrophysics Data System (ADS)
Ryu, Y.; Jiang, C.; Huang, Y.; Kim, J.; Hwang, Y.; Kimm, H.; Kim, S.
2016-12-01
Advancements in near-surface and satellite remote sensing technologies have enabled us to monitor the global terrestrial ecosystems at multiple spatial and temporal scales. An emergent challenge is how to formulate coupled water, carbon, energy, radiation, and nitrogen cycles from remote sensing. Here, we report the Breathing Earth System Simulator (BESS), which couples radiation (shortwave, longwave, PAR, diffuse PAR), carbon (gross primary productivity, ecosystem respiration, net ecosystem exchange), water (evaporation), and energy (latent and sensible heat) balances across the global land at 1 km resolution and 8-day intervals between 2000 and 2015, using multiple satellite remote sensing products. The performance of BESS was tested against field observations (FLUXNET, BSRN) and other independent products (MPI-BGC, MODIS, GLASS). We found that the coupled model, BESS, performed on par with, or better than, the other products, which compute land surface fluxes individually. Lastly, we show one plot-level study conducted in a paddy rice field to demonstrate how to couple radiation, carbon, water and nitrogen balances with a series of near-surface spectral sensors.
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, 33% to 46% reduction in energy consumption while incurring only 0.5 dB to 1.5 dB loss in PSNR.
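The SAD variant based on most-significant-bit truncation can be sketched as follows; keep_bits and the 8-bit pixel assumption are illustrative, and the dissertation's hardware operates on bit-truncated datapaths rather than NumPy arrays:

```python
import numpy as np

def truncated_sad(block_a, block_b, keep_bits=4):
    """Sum of absolute differences with each AD quantized to its top
    keep_bits bits before accumulation, exploiting the observation that
    most ADs are small and large ADs rarely belong to the winning block."""
    ad = np.abs(block_a.astype(np.int16) - block_b.astype(np.int16))
    shift = 8 - keep_bits                    # 8-bit pixels assumed
    return int((ad >> shift).sum() << shift)

# Motion-estimation style usage on two 16x16 blocks of 8-bit pixels
rng = np.random.default_rng(5)
a = rng.integers(0, 256, (16, 16), dtype=np.uint8)
b = rng.integers(0, 256, (16, 16), dtype=np.uint8)
print(truncated_sad(a, b), int(np.abs(a.astype(int) - b.astype(int)).sum()))
```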
NASA Astrophysics Data System (ADS)
Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola
2002-06-01
Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and to predict their stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and some theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields) calculated from complex structures at a minimum conformational energy, obtained through stochastic methods based on molecular dynamics simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple Regression Analysis allowed identification of the significant factors for complex stability. A mathematical model (r = 0.897) related log Ks with the complex docking energy and the lipophilic molecular fields of cyclodextrin and drug.
Polarized reflectance and transmittance properties of windblown sea surfaces.
Mobley, Curtis D
2015-05-20
Generation of random sea surfaces using wave variance spectra and Fourier transforms is formulated in a way that guarantees conservation of wave energy and fully resolves wave height and slope variances. Monte Carlo polarized ray tracing, which accounts for multiple scattering between light rays and wave facets, is used to compute effective Mueller matrices for reflection and transmission of air- or water-incident polarized radiance. Irradiance reflectances computed using a Rayleigh sky radiance distribution, sea surfaces generated with Cox-Munk statistics, and unpolarized ray tracing differ by 10%-18% compared with values computed using elevation- and slope-resolving surfaces and polarized ray tracing. Radiance reflectance factors, as used to estimate water-leaving radiance from measured upwelling and sky radiances, are shown to depend on sky polarization, and improved values are given.
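A 1D sketch of the energy-conserving surface-generation step described above, assuming a generic k^-3 gravity-wave-like variance spectrum; the paper's construction is 2D and resolves both elevation and slope variances:

```python
import numpy as np

def random_sea_surface(n, dx, spectrum, seed=6):
    """Generate one realization of a 1D sea surface elevation z(x) from a
    one-sided wave variance spectrum S(k) via an inverse FFT, with amplitudes
    scaled so each Fourier mode carries S(k)*dk of elevation variance."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)       # wavenumbers (rad/m)
    dk = k[1] - k[0]
    amp = np.sqrt(0.5 * spectrum(k) * dk)          # per-mode amplitude
    coeffs = n * amp * np.exp(2j * np.pi * rng.random(k.size))  # random phases
    coeffs[0] = 0.0                                # enforce zero-mean surface
    return np.fft.irfft(coeffs, n=n)

# Example spectrum: k^-3 tail with a low-k cutoff (illustrative, not Cox-Munk)
spec = lambda k: np.where(k > 0.05, 5e-4 * np.maximum(k, 1e-9) ** -3, 0.0)
z = random_sea_surface(1024, 0.5, spec)
print(z.std())    # compare with sqrt(sum S(k) dk) to check variance conservation
```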
Neuromorphic Kalman filter implementation in IBM’s TrueNorth
NASA Astrophysics Data System (ADS)
Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.
2017-10-01
Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.
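For reference, one predict/update step of the conventional (non-spiking) Kalman filter against which the TrueNorth implementation is evaluated; the matrices below are generic placeholders, not the paper's simulated test systems:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1D constant-velocity tracking example with noisy position measurements
F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2); R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.8, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```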
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
NASA Astrophysics Data System (ADS)
Adem, ACIR; Eşref, BAYSAL
2018-07-01
In this paper, a neutronic analysis of a laser inertial confinement fusion fission energy (LIFE) engine fuelled with plutonium and minor actinides was performed using the MCNP code. The LIFE engine fuel zone contained a mixture of 10 vol% TRISO particles and 90 vol% natural lithium coolant. The TRISO fuel compositions were Mod①: reactor-grade plutonium (RG-Pu), Mod②: weapon-grade plutonium (WG-Pu) and Mod③: minor actinides (MAs). Tritium breeding ratios (TBR) were computed as 1.52, 1.62 and 1.46 for Mod①, Mod② and Mod③, respectively. The operation period was computed as ∼21 years with the reference TBR > 1.05 for a self-sustained reactor in all investigated cases. Blanket energy multiplication values (M) were calculated as 4.18, 4.95 and 3.75 for Mod①, Mod② and Mod③, respectively. The burnup (BU) values were obtained as ∼1230, ∼1550 and ∼1060 GWd tM⁻¹, respectively. As a result, higher BU was achieved with the use of TRISO particles in all cases in the LIFE engine.
Flow dynamics and energy efficiency of flow in the left ventricle during myocardial infarction.
Vasudevan, Vivek; Low, Adriel Jia Jun; Annamalai, Sarayu Parimal; Sampath, Smita; Poh, Kian Keong; Totman, Teresa; Mazlan, Muhammad; Croft, Grace; Richards, A Mark; de Kleijn, Dominique P V; Chin, Chih-Liang; Yap, Choon Hwai
2017-10-01
Cardiovascular disease is a leading cause of death worldwide, where myocardial infarction (MI) is a major category. After infarction, the heart has difficulty providing sufficient energy for circulation, and thus, understanding the heart's energy efficiency is important. We induced MI in a porcine animal model via circumflex ligation and acquired multiple-slice cine magnetic resonance (MR) images in a longitudinal manner: before infarction, and 1 week (acute) and 4 weeks (chronic) after infarction. Computational fluid dynamics simulations were performed based on the MR images to obtain detailed fluid dynamics and energy dynamics of the left ventricles. Results showed that the energy efficiency of flow through the heart decreased at the acute time point. Since the heart was observed to experience changes in heart rate, stroke volume and chamber size over the two post-infarction time points, simulations were performed to test the effect of each of the three parameters. Increasing heart rate and stroke volume were found to significantly decrease flow energy efficiency, but the effect of chamber size was inconsistent. Strong, complex interplay was observed between the three parameters, necessitating the use of non-dimensional parameterization to characterize flow energy efficiency. The ratio of Reynolds to Strouhal number, which is a form of Womersley number, was found to be the most effective non-dimensional parameter to represent the energy efficiency of flow in the heart. We believe that this non-dimensional number can be computed for clinical cases via ultrasound and hypothesize that it can serve as a biomarker for clinical evaluations.
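The highlighted non-dimensional group reduces to a simple expression, since the length scale cancels in the ratio: Re/St = (ρUD/μ)/(fD/U) = ρU²/(μf). A sketch with illustrative ventricular-scale magnitudes (not patient data from the study):

```python
def reynolds_over_strouhal(rho, U, D, mu, f):
    """Re/St = (rho*U*D/mu) / (f*D/U) = rho*U**2 / (mu*f); note that the
    length scale D cancels, leaving a velocity- and rate-dependent group."""
    return (rho * U * D / mu) / (f * D / U)

# Illustrative values: blood density rho ~ 1060 kg/m^3, viscosity
# mu ~ 3.5e-3 Pa*s, inflow speed U ~ 0.5 m/s, mitral diameter D ~ 0.025 m,
# heart rate f ~ 1.2 Hz
print(reynolds_over_strouhal(1060.0, 0.5, 0.025, 3.5e-3, 1.2))  # ~6.3e4
```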
A decision model for cost effective design of biomass based green energy supply chains.
Yılmaz Balaman, Şebnem; Selim, Hasan
2015-09-01
The core driver of this study is to deal with the design of anaerobic digestion based biomass to energy supply chains in a cost effective manner. In this concern, a decision model is developed. The model is based on fuzzy multi objective decision making in order to simultaneously optimize multiple economic objectives and tackle the inherent uncertainties in the parameters and decision makers' aspiration levels for the goals. The viability of the decision model is explored with computational experiments on a real-world biomass to energy supply chain and further analyses are performed to observe the effects of different conditions. To this aim, scenario analyses are conducted to investigate the effects of energy crop utilization and operational costs on supply chain structure and performance measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Yinan; Levine, Benjamin G., E-mail: levine@chemistry.msu.edu; Hohenstein, Edward G.
2015-01-14
Multireference quantum chemical methods, such as the complete active space self-consistent field (CASSCF) method, have long been the state of the art for computing regions of potential energy surfaces (PESs) where complex, multiconfigurational wavefunctions are required, such as near conical intersections. Herein, we present a computationally efficient alternative to the widely used CASSCF method based on a complete active space configuration interaction (CASCI) expansion built from the state-averaged natural orbitals of configuration interaction singles calculations (CISNOs). This CISNO-CASCI approach is shown to predict vertical excitation energies of molecules with closed-shell ground states similar to those predicted by state-averaged (SA)-CASSCF in many cases and to provide an excellent reference for a perturbative treatment of dynamic electron correlation. Absolute energies computed at the CISNO-CASCI level are found to be variationally superior, on average, to other CASCI methods. Unlike SA-CASSCF, CISNO-CASCI provides vertical excitation energies which are both size intensive and size consistent, thus suggesting that CISNO-CASCI would be preferable to SA-CASSCF for the study of systems with multiple excitable centers. The fact that SA-CASSCF and some other CASCI methods do not provide a size intensive/consistent description of excited states is attributed to changes in the orbitals that occur upon introduction of non-interacting subsystems. Finally, CISNO-CASCI is found to provide a suitable description of the PES surrounding a biradicaloid conical intersection in ethylene.
Better, Cheaper, Faster Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
Recent, revolutionary progress in genomics and structural, molecular and cellular biology has created new opportunities for molecular-level computer simulations of biological systems by providing vast amounts of data that require interpretation. These opportunities are further enhanced by the increasing availability of massively parallel computers. For many problems, the method of choice is classical molecular dynamics (iterative solving of Newton's equations of motion). It focuses on two main objectives. One is to calculate the relative stability of different states of the system. A typical problem that has such an objective is computer-aided drug design. Another common objective is to describe evolution of the system towards a low energy (possibly the global minimum energy), "native" state. Perhaps the best example of such a problem is protein folding. Both types of problems share the same difficulty. Often, different states of the system are separated by high energy barriers, which implies that transitions between these states are rare events. This, in turn, can greatly impede exploration of phase space. In some instances this can lead to "quasi non-ergodicity", whereby a part of phase space is inaccessible on time scales of the simulation. To overcome this difficulty and to extend molecular dynamics to "biological" time scales (millisecond or longer) new physical formulations and new algorithmic developments are required. To be efficient they should account for natural limitations of multi-processor computer architecture. I will present work along these lines done in my group. In particular, I will focus on a new approach to calculating the free energies (stability) of different states and to overcoming "the curse of rare events". I will also discuss algorithmic improvements to multiple time step methods and to the treatment of slowly decaying, long-ranged electrostatic effects.
3D glasma initial state for relativistic heavy ion collisions
Schenke, Björn; Schlichting, Sören
2016-10-13
We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.
Development of a nuclear technique for monitoring water levels in pressurized vehicles
NASA Technical Reports Server (NTRS)
Singh, J. J.; Davis, W. T.; Mall, G. H.
1983-01-01
A new technique for monitoring water levels in pressurized stainless steel cylinders was developed. It is based on differences in the attenuation coefficients of water and air for Cs137 (662 keV) gamma rays. Experimentally observed gamma ray counting rates with and without water in a model reservoir cylinder were compared with corresponding calculated values for two different gamma ray detection threshold energies. Calculated values include the effects of multiple scattering and attendant gamma ray energy reductions. The agreement between the measured and calculated values is reasonably good. Computer programs for calculating angular and spectral distributions of scattered radiation in various media are included.
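The underlying attenuation comparison follows the narrow-beam Beer-Lambert law; a sketch with approximate 662 keV attenuation coefficients, neglecting the multiple-scattering buildup that the paper's calculations account for:

```python
import numpy as np

def count_rate(I0, mu_x_pairs):
    """Narrow-beam counting rate after attenuation through layered media:
    I = I0 * exp(-sum_i mu_i * x_i), with mu in cm^-1 and x in cm."""
    return I0 * np.exp(-sum(mu * x for mu, x in mu_x_pairs))

MU_WATER = 8.6e-2   # cm^-1 at 662 keV (approximate)
MU_STEEL = 0.57     # cm^-1 at 662 keV (approximate)

# Beam crossing two 0.6 cm steel walls, with and without 10 cm of water
full = count_rate(1e4, [(MU_STEEL, 0.6), (MU_WATER, 10.0), (MU_STEEL, 0.6)])
empty = count_rate(1e4, [(MU_STEEL, 0.6), (MU_WATER, 0.0), (MU_STEEL, 0.6)])
print(full, empty)   # the rate difference signals water presence at this level
```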
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
Single-molecule techniques in biophysics: a review of the progress in methods and applications.
Miller, Helen; Zhou, Zhaokun; Shepherd, Jack; Wollman, Adam J M; Leake, Mark C
2018-02-01
Single-molecule biophysics has transformed our understanding of biology, but also of the physics of life. More exotic than simple soft matter, biomatter lives far from thermal equilibrium, covering multiple lengths from the nanoscale of single molecules to up to several orders of magnitude higher in cells, tissues and organisms. Biomolecules are often characterized by underlying instability: multiple metastable free energy states exist, separated by levels of just a few multiples of the thermal energy scale k_BT, where k_B is the Boltzmann constant and T absolute temperature, implying complex inter-conversion kinetics in the relatively hot, wet environment of active biological matter. A key benefit of single-molecule biophysics techniques is their ability to probe heterogeneity of free energy states across a molecular population, too challenging in general for conventional ensemble average approaches. Parallel developments in experimental and computational techniques have catalysed the birth of multiplexed, correlative techniques to tackle previously intractable biological questions. Experimentally, progress has been driven by improvements in sensitivity and speed of detectors, and the stability and efficiency of light sources, probes and microfluidics. We discuss the motivation and requirements for these recent experiments, including the underpinning mathematics. These methods are broadly divided into tools which detect molecules and those which manipulate them. For the former we discuss the progress of super-resolution microscopy, transformative for addressing many longstanding questions in the life sciences, and for the latter we include progress in 'force spectroscopy' techniques that mechanically perturb molecules. We also consider in silico progress of single-molecule computational physics, and how simulation and experimentation may be drawn together to give a more complete understanding. Increasingly, combinatorial techniques are now used, including correlative atomic force microscopy and fluorescence imaging, to probe questions closer to native physiological behaviour. We identify the trade-offs, limitations and applications of these techniques, and discuss exciting new directions.
Single-molecule techniques in biophysics: a review of the progress in methods and applications
NASA Astrophysics Data System (ADS)
Miller, Helen; Zhou, Zhaokun; Shepherd, Jack; Wollman, Adam J. M.; Leake, Mark C.
2018-02-01
Single-molecule biophysics has transformed our understanding of biology, but also of the physics of life. More exotic than simple soft matter, biomatter lives far from thermal equilibrium, covering multiple lengths from the nanoscale of single molecules to up to several orders of magnitude higher in cells, tissues and organisms. Biomolecules are often characterized by underlying instability: multiple metastable free energy states exist, separated by levels of just a few multiples of the thermal energy scale k_BT, where k_B is the Boltzmann constant and T the absolute temperature, implying complex inter-conversion kinetics in the relatively hot, wet environment of active biological matter. A key benefit of single-molecule biophysics techniques is their ability to probe heterogeneity of free energy states across a molecular population, too challenging in general for conventional ensemble average approaches. Parallel developments in experimental and computational techniques have catalysed the birth of multiplexed, correlative techniques to tackle previously intractable biological questions. Experimentally, progress has been driven by improvements in sensitivity and speed of detectors, and the stability and efficiency of light sources, probes and microfluidics. We discuss the motivation and requirements for these recent experiments, including the underpinning mathematics. These methods are broadly divided into tools which detect molecules and those which manipulate them. For the former we discuss the progress of super-resolution microscopy, transformative for addressing many longstanding questions in the life sciences, and for the latter we include progress in ‘force spectroscopy’ techniques that mechanically perturb molecules. We also consider in silico progress of single-molecule computational physics, and how simulation and experimentation may be drawn together to give a more complete understanding. Increasingly, combinatorial techniques are now used, including correlative atomic force microscopy and fluorescence imaging, to probe questions closer to native physiological behaviour. We identify the trade-offs, limitations and applications of these techniques, and discuss exciting new directions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, M F; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital; Seco, J
Purpose: Research in carbon imaging has been growing over the past years as a way to increase treatment accuracy and patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU-based ray tracing algorithm computes the WEPL of each individual carbon ion traveling through the CT voxels. A multiple-peak detection method to estimate high-contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to simulate carbon depth-dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user has the possibility to select the phase/slice of interest as well as geometry parameters (position, angle, …). The WEPL is represented as a range detector, which can be used to assess range dilution and multiple-peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple-peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012.
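As an illustration of the WEPL computation the abstract describes, a minimal serial ray-marching sketch follows (the paper's GPU implementation is not detailed here; the voxel indexing, step size, and function names are assumptions):

```python
import numpy as np

def wepl_along_ray(ct_hu, calib_hu, calib_rsp, origin, direction,
                   step=0.5, n_steps=2000):
    """Accumulate water-equivalent path length (WEPL) along one ray.

    ct_hu      : 3D array of Hounsfield units (voxel size taken as 1 mm)
    calib_hu/calib_rsp : calibration curve mapping HU to relative stopping
                         power, applied as a piecewise-linear lookup
    origin, direction  : ray start (mm) and direction vector
    """
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    wepl = 0.0
    for _ in range(n_steps):
        i, j, k = np.round(pos).astype(int)
        if not (0 <= i < ct_hu.shape[0] and
                0 <= j < ct_hu.shape[1] and
                0 <= k < ct_hu.shape[2]):
            break                                # ray left the volume
        rsp = np.interp(ct_hu[i, j, k], calib_hu, calib_rsp)
        wepl += rsp * step                       # water-equivalent mm
        pos += d * step
    return wepl
```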
Quality of service routing in wireless ad hoc networks
NASA Astrophysics Data System (ADS)
Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh
2003-08-01
An efficient routing protocol is essential to guarantee application-level quality of service for applications running on wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints such as path-life, and the availability of sufficient energy as well as buffer space in each of the nodes on the path between the source and destination. The algorithm chooses the best path from among the multiple paths that it computes between the two endpoints. We consider the use of control packets that run at a priority higher than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers, such as weighted fair queuing and weighted random early detection among others, in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for a real-time voice application.
NASA Astrophysics Data System (ADS)
Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.
2017-09-01
Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
Grid Computing Environment using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Alanis, Fransisco; Mahmood, Akhtar
2003-10-01
Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
Accelerated weight histogram method for exploring free energy landscapes
NASA Astrophysics Data System (ADS)
Lindahl, V.; Lidmar, J.; Hess, B.
2014-07-01
Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
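A schematic of the adaptive update at the heart of such methods (illustrative only, in the spirit of AWH rather than the actual algorithm or its implementation; the sampling callback is a placeholder):

```python
import numpy as np

def adaptive_bias(sample_bin, n_bins, n_steps, target=None):
    """Toy adaptive-bias loop: a free-energy estimate f (in kT) is raised
    wherever samples land, relative to a chosen target distribution, with a
    gain that shrinks as the weight histogram accumulates. AWH's actual
    update and its initial/final stages differ in detail."""
    f = np.zeros(n_bins)                 # running free-energy estimate
    W = np.full(n_bins, 1.0)             # probability-weight histogram
    if target is None:
        target = np.full(n_bins, 1.0 / n_bins)   # uniform target
    for _ in range(n_steps):
        i = sample_bin(f)                # one sample from the ensemble biased by -f
        gain = 1.0 / W.sum()             # shrinking update magnitude
        f[i] += gain / target[i]         # push sampling toward the target
        W[i] += 1.0
    return f - f.min()                   # PMF estimate up to a constant
```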
Accelerated weight histogram method for exploring free energy landscapes.
Lindahl, V; Lidmar, J; Hess, B
2014-07-28
Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
Treating electron transport in MCNP™
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, H.G.
1996-12-31
The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum, slowing down from 0.5 MeV to 0.0625 MeV, will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport infeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
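A toy condensed-history step, with Gaussian stand-ins for the Goudsmit-Saunderson and Landau/Blunck-Leisegang distributions named above (all numerical values are crude illustrations, not MCNP's physics):

```python
import numpy as np

rng = np.random.default_rng(1)

def condensed_history_step(E_mev, step_mm, X0_mm=89.0):
    """One illustrative condensed-history step for an electron.

    The net angular deflection over the step is drawn from a Highland-style
    Gaussian (a crude stand-in for Goudsmit-Saunderson sampling); the energy
    loss adds Gaussian straggling around a constant stopping power (a crude
    stand-in for the Landau/Blunck-Leisegang treatment)."""
    p_beta = max(E_mev, 1e-3)                  # ~ p*beta*c in MeV, relativistic e-
    theta0 = (13.6 / p_beta) * np.sqrt(step_mm / X0_mm)
    theta = rng.normal(0.0, theta0)            # sampled deflection (radians)
    mean_loss = 0.4 * step_mm                  # ~4 MeV/cm, aluminum-like (illustrative)
    dE = max(rng.normal(mean_loss, 0.1 * mean_loss), 0.0)
    return max(E_mev - dE, 0.0), theta
```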
Secure Multiparty Quantum Computation for Summation and Multiplication.
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-21
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
Secure Multiparty Quantum Computation for Summation and Multiplication
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-01
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics. PMID:26792197
NASA Astrophysics Data System (ADS)
Pollard, Travis P.; Beck, Thomas L.
2018-06-01
Attempts to establish an absolute single-ion hydration free energy scale have followed multiple strategies. Two central themes consist of (1) employing bulk pair thermodynamic data and an underlying interfacial-potential-free model to partition the hydration free energy into individual contributions [Marcus, Latimer, and tetraphenyl-arsonium/tetraphenyl-borate (TATB) methods] or (2) utilizing bulk thermodynamic and cluster data to estimate the free energy to insert a proton into water, including in principle an interfacial potential contribution [the cluster pair approximation (CPA)]. While the results for the hydration free energy of the proton agree remarkably well between the three approaches in the first category, the value differs from the CPA result by roughly +10 kcal/mol, implying a value for the effective electrochemical surface potential of water of -0.4 V. This paper provides a computational re-analysis of the TATB method for single-ion free energies using quasichemical theory. A previous study indicated a significant discrepancy between the free energies of hydration for the TA cation and the TB anion. We show that the main contribution to this large computed difference is an electrostatic artifact arising from modeling interactions in periodic boundaries. No attempt is made here to develop more accurate models for the local ion/solvent interactions that may lead to further small free energy differences between the TA and TB ions, but the results clarify the primary importance of interfacial potential effects for analysis of the various free energy scales. Results are also presented, related to the TATB assumption in the organic solvents dimethyl sulfoxide and 1,2-dichloroethane.
NASA Technical Reports Server (NTRS)
Charles, H. K. Jr; Beck, T. J.; Feldmesser, H. S.; Magee, T. C.; Spisz, T. S.; Pisacane, V. L.
2001-01-01
An advanced, multiple projection, dual energy x-ray absorptiometry (AMPDXA) scanner system is under development. The AMPDXA is designed to make precision bone and muscle loss measurements necessary to determine the deleterious effects of microgravity on astronauts as well as develop countermeasures to stem their bone and muscle loss. To date, a full size test system has been developed to verify principles and the results of computer simulations. Results indicate that accurate predictions of bone mechanical properties can be determined from as few as three projections, while more projections are needed for a complete, three-dimensional reconstruction.
Solar radiation on Mars: Update 1990
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Flood, Dennis J.
1990-01-01
Detailed information on solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. The authors present a procedure and solar radiation related data from which the diurnal and daily variations of the global, direct beam and diffuse insolation on Mars are calculated. The radiation data are based on measured optical depth of the Martian atmosphere derived from images taken of the Sun with a special diode on the Viking Lander cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation. This work is an update to NASA-TM-102299 and includes a refinement of the solar radiation model.
Faller, Christina E.; Raman, E. Prabhu; MacKerell, Alexander D.; Guvench, Olgun
2015-01-01
Fragment-based drug design (FBDD) involves screening low molecular weight molecules (“fragments”) that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind non-overlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy. The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is “soaked” in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called “FragMaps” can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine “Grid Free Energies (GFEs),” which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities. PMID:25709034
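The FragMap-to-GFE conversion follows directly from the stated probability relation; a minimal sketch (array layout, helper names, and the kT value are assumptions):

```python
import numpy as np

KT = 0.593  # kcal/mol at ~298 K

def grid_free_energy(counts, n_frames, bulk_density, voxel_vol, kT=KT):
    """FragMap occupancy -> grid free energy: GFE = -kT ln(P / P_bulk).
    counts is the per-voxel fragment-atom count accumulated over n_frames;
    the bulk expectation normalizes each voxel to the bulk probability."""
    expected = bulk_density * voxel_vol * n_frames
    return -kT * np.log(np.maximum(counts, 1e-12) / expected)

def ligand_gfe(gfe_map, atom_voxels):
    """LGFE: sum the per-atom GFE contributions over a docked ligand pose,
    giving the additive score used to rank-order candidate molecules."""
    return sum(gfe_map[i, j, k] for (i, j, k) in atom_voxels)
```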
NASA Astrophysics Data System (ADS)
Park, Kwan-Woo; Na, Suck-Joo
2010-06-01
A computational model for UV pulsed-laser scribing of a silicon target is presented and compared with experimental results. The experiments were performed with a high-power Q-switched diode-pumped solid state laser operated at 355 nm. They were conducted on n-type 500 μm thick silicon wafers. The scribing width and depth were measured using scanning electron microscopy. The model takes into account the major physics, such as heat transfer, evaporation, multiple reflections, and Rayleigh scattering. It also considers the attenuation and redistribution of laser energy due to Rayleigh scattering. In particular, the influence of the average particle size in the model is investigated. Finally, it is shown that the computational model describing the laser scribing of silicon is valid at an average particle size of about 10 nm.
Free energy calculations of short peptide chains using Adaptively Biased Molecular Dynamics
NASA Astrophysics Data System (ADS)
Karpusenka, Vadzim; Babin, Volodymyr; Roland, Christopher; Sagui, Celeste
2008-10-01
We performed a computational study of monomer peptides composed of methionine, alanine, leucine, glutamate, and lysine (all amino acids with helix-forming propensities), and of proline, glycine, tyrosine, serine, and arginine (which all have poor helix-forming propensities). The free energy landscapes as a function of the handedness and radius of gyration have been calculated using the recently introduced Adaptively Biased Molecular Dynamics (ABMD) method, combined with replica exchange, multiple walkers, and post-processing Umbrella Correction (UC). Minima that correspond to some of the left- and right-handed 310-, α- and π-helices were identified by secondary structure assignment methods (DSSP, Stride). The resulting free energy surface (FES) and the subsequent steered molecular dynamics (SMD) simulation results are in agreement with the empirical evidence of preferred secondary structures for the peptide chains considered.
Near-wall turbulence model and its application to fully developed turbulent channel and pipe flows
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1990-01-01
A near-wall turbulence model and its incorporation into a multiple-timescale turbulence model are presented. The near-wall turbulence model is obtained from a k-equation turbulence model and a near-wall analysis. In the method, the equations for the conservation of mass, momentum, and turbulent kinetic energy are integrated up to the wall, and the energy transfer and the dissipation rates inside the near-wall layer are obtained from algebraic equations. Fully developed turbulent channel and pipe flows are solved using a finite element method. The computational results compare favorably with experimental data. It is also shown that the turbulence model can resolve the overshoot phenomena of the turbulent kinetic energy and the dissipation rate in the region very close to the wall.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dernotte, Jeremie; Dec, John E.; Ji, Chunsheng
A detailed study of the various factors affecting the trends in gross-indicated thermal efficiency with changes in key operating parameters has been carried out, applied to a one-liter displacement single-cylinder boosted Low-Temperature Gasoline Combustion (LTGC) engine. This work systematically investigates how the supplied fuel energy splits into the following four energy pathways: gross-indicated thermal efficiency, combustion inefficiency, heat transfer and exhaust losses, and how this split changes with operating conditions. Additional analysis is performed to determine the influence of variations in the ratio of specific heat capacities (γ) and the effective expansion ratio, related to the combustion-phasing retard (CA50), on the energy split. Heat transfer and exhaust losses are computed using multiple standard cycle analysis techniques. Furthermore, the various methods are evaluated in order to validate the trends.
Spent nuclear fuel assembly inspection using neutron computed tomography
NASA Astrophysics Data System (ADS)
Pope, Chad Lee
The research presented here focuses on spent nuclear fuel assembly inspection using neutron computed tomography. Experimental measurements involving neutron beam transmission through a spent nuclear fuel assembly serve as benchmark measurements for an MCNP simulation model. Comparison of measured results to simulation results shows good agreement. Generation of tomography images from MCNP tally results was accomplished using adapted versions of built in MATLAB algorithms. Multiple fuel assembly models were examined to provide a broad set of conclusions. Tomography images revealing assembly geometric information including the fuel element lattice structure and missing elements can be obtained using high energy neutrons. A projection difference technique was developed which reveals the substitution of unirradiated fuel elements for irradiated fuel elements, using high energy neutrons. More subtle material differences such as altering the burnup of individual elements can be identified with lower energy neutrons provided the scattered neutron contribution to the image is limited. The research results show that neutron computed tomography can be used to inspect spent nuclear fuel assemblies for the purpose of identifying anomalies such as missing elements or substituted elements. The ability to identify anomalies in spent fuel assemblies can be used to deter diversion of material by increasing the risk of early detection as well as improve reprocessing facility operations by confirming the spent fuel configuration is as expected or allowing segregation if anomalies are detected.
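A schematic of the projection-difference idea described above (the tolerance and array conventions are illustrative assumptions):

```python
import numpy as np

def projection_difference(sino_measured, sino_reference, tol=0.05):
    """Compare a measured transmission sinogram against one simulated for
    the declared assembly loading, and flag pixels whose relative
    difference exceeds a tolerance; substituted or missing elements show
    up as localized excursions in the difference map."""
    rel = (sino_measured - sino_reference) / np.maximum(sino_reference, 1e-9)
    anomaly_found = bool((np.abs(rel) > tol).any())
    return anomaly_found, rel
```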
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoon Lee, Sang; Hong, Tianzhen; Sawaya, Geof
The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six vintages of construction and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug-loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures, to consider interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulation database is a part of an on-going project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits to reduce energy use for small and medium buildings and business owners who typically do not have resources to conduct costly building energy audits. DEEP will be migrated into DEnCity - DOE's Energy City, which integrates large-scale energy data into a multi-purpose, open, and dynamic database leveraging diverse sources of existing simulation data.
A vertical-energy-thresholding procedure for data reduction with multiple complex curves.
Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi
2006-10-01
Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
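A minimal sketch of energy-based wavelet thresholding in the spirit of VET (not the paper's exact vertical-energy criterion; the wavelet choice and retained-energy target are assumptions):

```python
import numpy as np
import pywt  # PyWavelets

def energy_threshold(signal, wavelet="db4", level=4, keep_energy=0.99):
    """Retain the fewest wavelet coefficients that capture a target fraction
    of the signal energy; the zeroed-out remainder is the data reduction,
    and the retained coefficients feed downstream decision analyses."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    order = np.argsort(flat ** 2)[::-1]                  # largest energy first
    cum = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
    n_keep = int(np.searchsorted(cum, keep_energy)) + 1
    kept = np.zeros_like(flat)
    kept[order[:n_keep]] = flat[order[:n_keep]]          # "reduced-size" data
    return kept, n_keep
```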
Quantum Monte Carlo simulations of Ti4O7 Magnéli phase
NASA Astrophysics Data System (ADS)
Benali, Anouar; Shulenburger, Luke; Krogel, Jaron; Zhong, Xiaoliang; Kent, Paul; Heinonen, Olle
2015-03-01
Ti4O7 is ubiquitous in Ti-oxides. It has been extensively studied, both experimentally and theoretically, in the past decades using multiple levels of theory, resulting in multiple diverse results. The latest DFT+SIC methods and state-of-the-art HSE06 hybrid functionals even propose a new anti-ferromagnetic state at low temperature. Using Quantum Monte Carlo (QMC), as implemented in the QMCPACK simulation package, we investigated the electronic and magnetic properties of Ti4O7 at low (120 K) and high (298 K) temperatures and at different magnetic states. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. L.S., J.K. and P.K. were supported through the Predictive Theory and Modeling for Materials and Chemical Science program by the Office of Basic Energy Sciences (BES), Department of Energy (DOE). Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240x speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk-synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...
2017-03-08
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240x speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk-synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
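The reduction of a tensor contraction to a single matrix-matrix multiply, the building block such libraries specialize, can be sketched as follows (index names are illustrative):

```python
import numpy as np

def contract_via_gemm(T1, T2):
    """Contract T1[a,b,i,j] * T2[i,j,c,d] -> R[a,b,c,d] by flattening the
    contracted index pair (i,j) into one matrix dimension, so a single
    matrix-matrix multiply (a DGEMM call, for double precision) does the
    heavy lifting; symmetry-exploiting libraries apply the same pattern
    to smaller, symmetry-reduced blocks."""
    a, b, i, j = T1.shape
    c, d = T2.shape[2:]
    M = T1.reshape(a * b, i * j)
    N = T2.reshape(i * j, c * d)
    return (M @ N).reshape(a, b, c, d)
```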
NASA Astrophysics Data System (ADS)
Johnson, Kristina Mary
In 1973 the computerized tomography (CT) scanner revolutionized medical imaging. This machine can isolate and display, in two-dimensional cross-sections, internal lesions and organs previously impossible to visualize. The possibility of three-dimensional imaging, however, is not yet exploited by present tomographic systems. Using multiple-exposure holography, three-dimensional displays can be synthesized from two-dimensional CT cross-sections. A multiple-exposure hologram is an incoherent superposition of many individual holograms. Intuitively it is expected that holograms recorded with equal energy will reconstruct images with equal brightness. It is found, however, that holograms recorded first are brighter than holograms recorded later in the superposition. This phenomenon is called Holographic Reciprocity Law Failure (HRLF). Computer simulations of latent image formation in multiple-exposure holography are one of the methods used to investigate HRLF. These simulations indicate that it is the time between individual exposures in the multiple-exposure hologram that is responsible for HRLF. This physical parameter introduces an asymmetry into the latent image formation process that favors the signal of previously recorded holograms over holograms recorded later in the superposition. The origin of this asymmetry lies in the dynamics of latent image formation, and in particular in the decay of single-atom latent image specks, which have lifetimes that are short compared to typical times between exposures. An analytical model is developed for a double exposure hologram that predicts a decrease in the brightness of the second exposure as compared to the first exposure as the time between exposures increases. These results are consistent with the computer simulations. Experiments investigating the influence of this parameter on the diffraction efficiency of reconstructed images in a double exposure hologram are also found to be consistent with the computer simulations and analytical results. From this information, two techniques are presented that correct for HRLF, and succeed in reconstructing multiple holographic images of CT cross-sections with equal brightness. The multiple multiple-exposure hologram is a new hologram that increases the number of equally bright images that can be superimposed on one photographic plate.
A coarse grain model for protein-surface interactions
NASA Astrophysics Data System (ADS)
Wei, Shuai; Knotts, Thomas A.
2013-09-01
The interaction of proteins with surfaces is important in numerous applications in many fields—such as biotechnology, proteomics, sensors, and medicine—but fundamental understanding of how protein stability and structure are affected by surfaces remains incomplete. Over the last several years, molecular simulation using coarse grain models has yielded significant insights, but the formalisms used to represent the surface interactions have been rudimentary. We present a new model for protein surface interactions that incorporates the chemical specificity of both the surface and the residues comprising the protein in the context of a one-bead-per-residue, coarse grain approach that maintains computational efficiency. The model is parameterized against experimental adsorption energies for multiple model peptides on different types of surfaces. The validity of the model is established by its ability to quantitatively and qualitatively predict the free energy of adsorption and structural changes for multiple biologically-relevant proteins on different surfaces. The validation, done with proteins not used in parameterization, shows that the model produces remarkable agreement between simulation and experiment.
Faheem, Muhammad; Heyden, Andreas
2014-08-12
We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed using a number of interpolated states, and the free energy difference between adjacent states is calculated using the QM/MM-FEP method. By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in double-dehydrogenated ethylene glycol on a Pt (111) model surface.
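The per-window free energy difference uses the standard Zwanzig exponential average; a minimal sketch of the accumulation along the interpolated path (the kT value and function names are assumptions):

```python
import numpy as np

def fep_step(dU, kT=0.593):
    """Zwanzig free-energy perturbation between adjacent interpolated states:
    dF = -kT ln < exp(-dU / kT) >, averaged over the sampled ensemble of the
    reference state. dU holds the energy differences per sampled frame."""
    dU = np.asarray(dU, dtype=float)
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

def path_free_energy(per_window_dU, kT=0.593):
    # sum window-to-window differences along the approximate reaction coordinate
    return sum(fep_step(dU, kT) for dU in per_window_dU)
```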
Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua
2016-10-01
Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with that in the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, which is henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm can achieve more gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations.
Angular selective window systems: Assessment of technical potential for energy savings
Fernandes, Luis L.; Lee, Eleanor S.; McNeil, Andrew; ...
2014-10-16
Static angular selective shading systems block direct sunlight and admit daylight within a specific range of incident solar angles. The objective of this study is to quantify their potential to reduce energy use and peak demand in commercial buildings using state-of-the-art whole-building computer simulation software that allows accurate modeling of the behavior of optically-complex fenestration systems such as angular selective systems. Three commercial systems were evaluated: a micro-perforated screen, a tubular shading structure, and an expanded metal mesh. This evaluation was performed through computer simulation for multiple climates (Chicago, Illinois and Houston, Texas), window-to-wall ratios (0.15-0.60), building codes (ASHRAE 90.1-2004 and 2010) and lighting control configurations (with and without). The modeling of the optical complexity of the systems took advantage of the development of state-of-the-art versions of the EnergyPlus, Radiance and Window simulation tools. Results show significant reductions in perimeter zone energy use; the best system reached 28% and 47% savings, respectively without and with daylighting controls (ASHRAE 90.1-2004, south facade, Chicago, WWR = 0.45). As a result, angular selectivity and thermal conductance of the angle-selective layer, as well as spectral selectivity of low-emissivity coatings, were identified as factors with significant impact on performance.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with that in the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor of every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm can achieve more gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifacts suppression, resolution preservation, and material decomposition assessment.
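A toy 1D PWLS iteration follows, with two deliberate simplifications relative to the papers above: the system matrix is the identity (denoising rather than reconstruction), and plain smoothed TV stands in for the structure tensor total variation on eigenvalues:

```python
import numpy as np

def pwls_denoise(y, w, beta=0.1, step=0.05, n_iter=300, eps=1e-3):
    """Gradient descent on a 1D penalized weighted least-squares objective:

        minimize_x  sum_i w_i (y_i - x_i)^2
                    + beta * sum_i sqrt((x_{i+1} - x_i)^2 + eps)

    where w encodes the per-measurement statistical weights and the second
    term is a smoothed total-variation penalty."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        dx = np.diff(x)
        g = dx / np.sqrt(dx ** 2 + eps)       # derivative of smoothed |dx|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        x -= step * (2.0 * w * (x - y) + beta * tv_grad)
    return x
```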
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, MeiYue; Lin, Lin; Yang, Chao
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
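A truncated SVD gives the generic low-rank construction (a stand-in for illustration; the paper's specific decomposition of the frequency-dependent part of W0 may differ):

```python
import numpy as np

def low_rank_approx(W, rank):
    """Best rank-r approximation of a matrix in the least-squares sense:
    keep the top singular triplets, so downstream operations (here, the
    frequency convolution) act on a much smaller representation."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```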
DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments
NASA Astrophysics Data System (ADS)
Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk
The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only for saving energy, but also due to the ecological commitment inherent to power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes and numerically evaluates DEEP-SaM, a policy for clearing provisioning markets based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models, and on the implications of the inherent fixed costs related to the energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for the long-term sustainable management of modern datacenters.
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
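A numerical sketch of pricing energy and ramping simultaneously (the objective and solver here are illustrative; the paper derives the optimal trajectory in closed form):

```python
import numpy as np
from scipy.optimize import minimize

def dispatch_with_ramping(prices, energy_total, c_ramp):
    """Choose a sub-hourly power trajectory that trades time-varying energy
    prices against a quadratic ramping cost, subject to delivering a fixed
    total energy. Higher c_ramp flattens the trajectory across cheap hours."""
    T = len(prices)

    def cost(p):
        return prices @ p + c_ramp * np.sum(np.diff(p) ** 2)

    cons = [{"type": "eq", "fun": lambda p: p.sum() - energy_total}]
    p0 = np.full(T, energy_total / T)          # flat starting guess
    res = minimize(cost, p0, bounds=[(0, None)] * T, constraints=cons)
    return res.x
```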
Dernotte, Jeremie; Dec, John E.; Ji, Chunsheng
2015-04-14
A detailed study of the various factors affecting the trends in gross-indicated thermal efficiency with changes in key operating parameters has been carried out, applied to a one-liter displacement single-cylinder boosted Low-Temperature Gasoline Combustion (LTGC) engine. This work systematically investigates how the supplied fuel energy splits into the following four energy pathways: gross-indicated thermal efficiency, combustion inefficiency, heat transfer and exhaust losses, and how this split changes with operating conditions. Additional analysis is performed to determine the influence of variations in the ratio of specific heat capacities (γ) and the effective expansion ratio, related to the combustion-phasing retard (CA50), on the energy split. Heat transfer and exhaust losses are computed using multiple standard cycle analysis techniques. Furthermore, the various methods are evaluated in order to validate the trends.
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described using linear programming for the optimum generation and distribution of energy among competing energy resources under different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economical computer use.
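The distribution step reduces to a linear program; below is a toy instance with invented numbers and resource names (the full model adds integer build decisions, decomposition, and multiple time divisions):

```python
import numpy as np
from scipy.optimize import linprog

# Meet an energy demand from competing resources at minimum cost, each
# resource capped by its capacity (all figures purely illustrative).
costs = np.array([30.0, 45.0, 10.0])       # cost per MWh of three resources
capacity = np.array([100.0, 80.0, 40.0])   # MWh available from each
demand = 150.0

res = linprog(c=costs,
              A_ub=-np.ones((1, 3)), b_ub=[-demand],   # total supply >= demand
              bounds=list(zip([0.0] * 3, capacity)),
              method="highs")
print(res.x)  # optimal MWh drawn from each resource
```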
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
PIMS: Memristor-Based Processing-in-Memory-and-Storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Jeanine
Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that access larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about an O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.
First principles calculation of finite temperature magnetism in Fe and Fe3C
NASA Astrophysics Data System (ADS)
Eisenbach, M.; Nicholson, D. M.; Rusanu, A.; Brown, G.
2011-04-01
Density functional calculations have proven to be a useful tool in the study of ground state properties of many materials. The investigation of finite temperature magnetism, on the other hand, usually has to rely on empirical models that allow the large number of evaluations of the system's Hamiltonian required to obtain the phase space sampling needed to compute the free energy, specific heat, magnetization, susceptibility, and other quantities as functions of temperature. We have demonstrated a solution to this problem that harnesses the computational power of today's large massively parallel computers by combining a classical Wang-Landau Monte Carlo calculation [F. Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001)] with our first principles multiple scattering electronic structure code [Y. Wang et al., Phys. Rev. Lett. 75, 2867 (1995)] that allows the energy calculation of constrained magnetic states [M. Eisenbach et al., Proceedings of the Conference on High Performance Computing, Networking, Storage and Analysis (ACM, New York, 2009)]. We present our calculations of finite temperature properties of Fe and Fe3C using this approach and we find the Curie temperatures to be 980 K and 425 K, respectively.
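A classical Wang-Landau sketch on a cheap 1D toy energy (the point of the paper is to replace the toy energy call with a constrained first-principles calculation distributed across a massively parallel machine):

```python
import numpy as np

rng = np.random.default_rng(0)

def wang_landau(energy, x0, e_min, e_max, n_bins=50,
                ln_f_final=1e-5, max_steps=2_000_000):
    """Estimate the log density of states ln g(E) with a random walk whose
    acceptance uses the running estimate; the modification factor is halved
    (in log) whenever the visit histogram is roughly flat."""
    ln_g = np.zeros(n_bins)
    hist = np.zeros(n_bins)
    ln_f, x, e = 1.0, x0, energy(x0)

    def ebin(val):
        return min(int((val - e_min) / (e_max - e_min) * n_bins), n_bins - 1)

    for _ in range(max_steps):
        if ln_f <= ln_f_final:
            break
        x_new = x + rng.normal(0.0, 0.1)
        e_new = energy(x_new)
        if e_min <= e_new < e_max and \
           rng.random() < np.exp(ln_g[ebin(e)] - ln_g[ebin(e_new)]):
            x, e = x_new, e_new
        b = ebin(e)
        ln_g[b] += ln_f
        hist[b] += 1.0
        if hist.min() > 0.8 * hist.mean():   # flat-histogram check
            ln_f *= 0.5
            hist[:] = 0.0
    return ln_g  # thermodynamics follow by reweighting with exp(-E/kT)

# usage on a double-well toy energy, binned over E in [0, 4)
ln_g = wang_landau(lambda x: (x ** 2 - 1.0) ** 2, 0.0, 0.0, 4.0)
```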
On the performance of large Gaussian basis sets for the computation of total atomization energies
NASA Technical Reports Server (NTRS)
Martin, J. M. L.
1992-01-01
The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation consistent valence triple zeta plus polarization (cc-pVTZ) and correlation consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed ΣD_e values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings down the mean absolute error to less than 1 kcal/mol for the spdf basis sets, and to about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
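The bond-type correction is a small least-squares fit; a sketch with invented counts and residuals purely to show the mechanics (the rows would be molecules, the columns the counts of sigma bonds, pi bonds, and valence pairs, and the target the experiment-minus-computation error in kcal/mol):

```python
import numpy as np

X = np.array([[4, 0, 0],    # hypothetical CH4-like row
              [2, 2, 2],    # hypothetical CO2-like row
              [1, 2, 1],    # hypothetical HCN-like row
              [3, 0, 1]])   # hypothetical NH3-like row
resid = np.array([1.9, 3.4, 2.8, 1.5])            # invented residuals, kcal/mol
coef, *_ = np.linalg.lstsq(X, resid, rcond=None)  # per-bond-type corrections
print(coef, np.abs(resid - X @ coef).mean())      # mean absolute error after fit
```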
Recent advances in nonlinear implicit, electrostatic particle-in-cell (PIC) algorithms
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacón, Luis; Barnes, Daniel
2012-10-01
An implicit 1D electrostatic PIC algorithm [Chen, Chacón, Barnes, J. Comput. Phys. 230 (2011)] has been developed that satisfies exact energy and charge conservation. The algorithm employs a kinetic-enslaved Jacobian-free Newton-Krylov method [ibid.] that ensures nonlinear convergence while taking timesteps comparable to the dynamical timescale of interest. Here we present two main improvements of the algorithm. The first is the formulation of a preconditioner based on linearized fluid equations, which are closed using available particle information. The computational benefit is that solving the fluid system is much cheaper than the kinetic one. The effectiveness of the preconditioner in accelerating nonlinear iterations on challenging problems will be demonstrated. A second improvement is the generalization of Ref. 1 to curvilinear meshes [Chacón, Chen, Barnes, J. Comput. Phys., submitted (2012)], with a hybrid particle update of positions and velocities in logical and physical space, respectively [Swift, J. Comput. Phys. 126 (1996)]. The curvilinear algorithm remains exactly charge- and energy-conserving, and can be extended to multiple dimensions. We demonstrate the accuracy and efficiency of the algorithm with a 1D ion-acoustic shock wave simulation.
Machine learning of molecular electronic properties in chemical compound space
NASA Astrophysics Data System (ADS)
Montavon, Grégoire; Rupp, Matthias; Gobre, Vivekanand; Vazquez-Mayagoitia, Alvaro; Hansen, Katja; Tkatchenko, Alexandre; Müller, Klaus-Robert; Anatole von Lilienfeld, O.
2013-09-01
The combination of modern scientific computing with electronic structure theory can lead to an unprecedented amount of data amenable to intelligent data analysis for the identification of meaningful, novel and predictive structure-property relationships. Such relationships enable high-throughput screening for relevant properties in an exponentially growing pool of virtual compounds that are synthetically accessible. Here, we present a machine learning model, trained on a database of ab initio calculation results for thousands of organic molecules, that simultaneously predicts multiple electronic ground- and excited-state properties. The properties include atomization energy, polarizability, frontier orbital eigenvalues, ionization potential, electron affinity and excitation energies. The machine learning model is based on a deep multi-task artificial neural network, exploiting the underlying correlations between various molecular properties. The input is identical to ab initio methods, i.e. nuclear charges and Cartesian coordinates of all atoms. For small organic molecules, the accuracy of such a ‘quantum machine’ is similar, and sometimes superior, to modern quantum-chemical methods—at negligible computational cost.
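As a schematic of the multi-task setup (one shared network, one output unit per property), here is a hedged scikit-learn sketch; the random features stand in for the descriptor input and the five synthetic targets for the correlated molecular properties, so none of the numbers carry chemical meaning:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 30))                 # stand-in molecular descriptors
    W = rng.normal(size=(30, 5))
    Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(2000, 5))  # 5 correlated "properties"

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
    # A single network with shared hidden layers and one output per property,
    # so the hidden representation must capture inter-property correlations.
    net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0)
    net.fit(X_tr, Y_tr)
    print("held-out R^2 (uniform average over tasks):", net.score(X_te, Y_te))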
Dumitru, Adrian; McLerran, Larry; Skokov, Vladimir
2015-02-23
In this study, we show how angular asymmetries ∼ cos 2φ can arise in dipole scattering at high energies. We illustrate the effects due to anisotropic fluctuations of the saturation momentum of the target with a finite correlation length in the transverse impact parameter plane, i.e. from a domain-like structure. We compute the two-particle azimuthal cumulant in this model including both one-particle factorizable as well as genuine two-particle non-factorizable contributions to the two-particle cross section. We also compute the full BBGKY hierarchy for the four-particle azimuthal cumulant and find that only the fully factorizable contribution to c2{4} is negative while all contributions from genuine two, three and four particle correlations are positive. Our results may provide some qualitative insight into the origin of azimuthal asymmetries in p + Pb collisions at the LHC which reveal a change of sign of c2{4} in high multiplicity events. (author)
Free energy calculations: an efficient adaptive biasing potential method.
Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul
2010-05-06
We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: the strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strengths, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations, or on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics, with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
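The central mechanism, a bias built from a mollified population whose gradient is analytic, fits in a few lines. The Python sketch below uses a Gaussian mollifier and a constant offset playing the role of the zero-of-energy parameter; it is a schematic of the idea, not the authors' implementation, and all parameter values are placeholders:

    import numpy as np

    class MollifiedBias:
        """Adaptive bias from a Gaussian-mollified density of visited states."""
        def __init__(self, sigma=0.1, kT=1.0, offset=1.0):
            self.sigma, self.kT, self.offset = sigma, kT, offset
            self.obs = []

        def update(self, x):
            self.obs.append(x)            # one observation updates a whole neighborhood

        def density(self, x):
            pts = np.asarray(self.obs)
            return self.offset + np.sum(np.exp(-(x - pts)**2 / (2 * self.sigma**2)))

        def bias(self, x):
            # V(x) = kT ln rho(x): pushes the system away from well-sampled regions;
            # -kT ln rho(x) is the running estimate of the free energy.
            return self.kT * np.log(self.density(x))

        def bias_force(self, x):
            pts = np.asarray(self.obs)
            g = np.exp(-(x - pts)**2 / (2 * self.sigma**2))
            drho = np.sum(g * (pts - x) / self.sigma**2)   # analytic d(rho)/dx
            return -self.kT * drho / self.density(x)       # force = -dV/dx

Each call to update() raises the bias over a whole neighborhood of the observation, which is the property the abstract credits with the reduced equilibration time.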
Climate Science's Globally Distributed Infrastructure
NASA Astrophysics Data System (ADS)
Williams, D. N.
2016-12-01
The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF includes not only model output but also observational data from satellites and instruments, reanalysis, and generated images.
Yang, Changwon; Kim, Eunae; Pak, Youngshang
2015-09-18
Hoogsteen (HG) base pairing plays a central role in the DNA binding of proteins and small ligands. Probing the detailed transition mechanism from Watson-Crick (WC) to HG base pair (bp) formation in duplex DNAs is of fundamental importance in terms of revealing intrinsic functions of double helical DNAs beyond their sequence-determined functions. We investigated the free energy landscape of a free B-DNA with an adenine-thymine (A-T) rich sequence to probe its conformational transition pathways from WC to HG base pairing. The free energy landscape was computed with a state-of-the-art two-dimensional umbrella molecular dynamics simulation at the all-atom level. The present simulation showed that in an isolated duplex DNA, the spontaneous transition from WC to HG bp takes place via multiple pathways. Notably, base flipping into the major and minor grooves was found to play an important role in forming these multiple transition pathways. This finding suggests that naked B-DNA under normal conditions has an inherent ability to form HG bps via spontaneous base opening events. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Root Cause Investigation of Lead-Free Solder Joint Interfacial Failures After Multiple Reflows
NASA Astrophysics Data System (ADS)
Li, Yan; Hatch, Olen; Liu, Pilin; Goyal, Deepak
2017-03-01
Solder joint interconnects in three-dimensional (3D) packages with package stacking configurations typically must undergo multiple reflow cycles during the assembly process. In this work, interfacial open joint failures between the bulk solder and the intermetallic compound (IMC) layer were found in Sn-Ag-Cu (SAC) solder joints connecting a small package to a large package after multiple reflow reliability tests. Systematic progressive 3D x-ray computed tomography experiments were performed on both incoming and assembled parts to reveal the initiation and evolution of the open failures in the same solder joints before and after the reliability tests. Characterization studies, including focused ion beam cross-sections, scanning electron microscopy, and energy-dispersive x-ray spectroscopy, were conducted to determine the correlation between IMC phase transformation and failure initiation in the solder joints. A comprehensive failure mechanism, along with solution paths for the solder joint interfacial failures after multiple reflow cycles, is discussed in detail.
Random number generators tested on quantum Monte Carlo simulations.
Hongo, Kenta; Maezono, Ryo; Miura, Kenichi
2010-08-01
We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application, ground state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator with five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are proven to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, with less computational cost to generate the sequence. We also tested RANDU, a notoriously flawed implementation of a linear congruential generator (LCG), for comparison. (c) 2010 Wiley Periodicals, Inc.
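The comparison methodology is easy to reproduce on a toy estimator. The sketch below runs the same Monte Carlo average under three bit generators shipped with NumPy; MRG8 and RANLUX are not available there, so PCG64 and Philox stand in as additional generators, and the harmonic-oscillator energy is a placeholder for the VMC/DMC observable:

    import numpy as np

    # Same toy estimator (mean potential energy of a 1D harmonic oscillator at
    # kT = 1, sampled directly from its Boltzmann distribution) under three RNGs.
    for bitgen in (np.random.MT19937, np.random.PCG64, np.random.Philox):
        rng = np.random.Generator(bitgen(12345))
        x = rng.normal(size=1_000_000)          # Boltzmann samples for V(x) = x^2/2
        e = 0.5 * x * x                         # exact answer: <E> = 0.5
        print(f"{bitgen.__name__:8s}  <E> = {e.mean():.5f} +/- {e.std() / len(e)**0.5:.5f}")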
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
Integral Transport Analysis Results for Ions Flowing Through Neutral Gas
NASA Astrophysics Data System (ADS)
Emmert, Gilbert; Santarius, John
2017-10-01
Results of a computational model for the flow of energetic ions and neutrals through a background neutral gas will be presented. The method models reactions as creating a new source of ions or neutrals if the energy or charge state of the resulting particle is changed. For a given source boundary condition, the creation and annihilation of the various species is formulated as a 1-D Volterra integral equation that can quickly be solved numerically by finite differences. The present work focuses on multiple-pass, 1-D ion flow through neutral gas and a nearly transparent, concentric anode and cathode pair in spherical, cylindrical, or linear geometry. This has been implemented as a computer code for atomic (³He, ³He⁺, ³He⁺⁺) and molecular (D, D₂, D⁻, D⁺, D₂⁺, D₃⁺) ion and neutral species, and applied to modeling inertial-electrostatic confinement (IEC) devices. The code yields detailed energy spectra of the various ions and energetic neutral species. Calculations for several University of Wisconsin IEC and ion implantation devices will be presented. Research supported by US Dept. of Homeland Security Grant 2015-DN-077-ARI095, Dept. of Energy Grant DE-FG02-04ER54745, and the Grainger Foundation.
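A Volterra equation of the second kind is well suited to a fast forward sweep, because each unknown value depends only on earlier ones. A minimal Python sketch with trapezoidal quadrature (generic kernel and source, not the paper's atomic-physics terms):

    import numpy as np

    def volterra2(g, K, x_max, n):
        """Solve f(x) = g(x) + integral_0^x K(x,t) f(t) dt by a forward sweep."""
        x, h = np.linspace(0.0, x_max, n, retstep=True)
        f = np.empty(n)
        f[0] = g(x[0])
        for i in range(1, n):
            # trapezoidal rule: half weights at both endpoints of [0, x_i]
            s = 0.5 * K(x[i], x[0]) * f[0] + sum(K(x[i], x[j]) * f[j] for j in range(1, i))
            f[i] = (g(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
        return x, f

    # Check against a case with a known solution: f = 1 + int_0^x f dt  =>  f = e^x
    x, f = volterra2(lambda x: 1.0, lambda x, t: 1.0, 1.0, 101)
    print(max(abs(f - np.exp(x))))   # small O(h^2) discretization error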
Tunable high-channel-count bandstop graphene plasmonic filters based on plasmon induced transparency
NASA Astrophysics Data System (ADS)
Zhang, Zhengren; Long, Yang; Ma, Pengyu; Li, Hongqiang
2017-11-01
A high-channel-count bandstop graphene plasmonic filter based on an ultracompact plasmonic structure is proposed in this paper. It consists of a graphene waveguide side-coupled with a series of graphene filtering units. The study shows that the waveguide-resonator system exhibits a multiple plasmon-induced transparency (PIT) phenomenon. By carefully adjusting the Fermi level of the filtering units, any two adjacent transmission dips belonging to different PIT units can produce coherent coupling superposition enhancement. This property prevents the attenuation of the high-frequency transmission dips of the multiple PIT and leads to an excellent bandstop filter with multiple channels. Specifically, the bandwidth and modulation depth of the filters can be flexibly adjusted by tuning the Fermi energy of the graphene waveguide. This ultracompact plasmonic structure contributes to the achievement of frequency division multiplexing systems for optical computing and communications in highly integrated optical circuits.
Walthouwer, Michel Jean Louis; Oenema, Anke; Lechner, Lilian; de Vries, Hein
2015-10-19
Web-based computer-tailored interventions often suffer from small effect sizes and high drop-out rates, particularly among people with a low level of education. Using videos as a delivery format can possibly improve the effects and attractiveness of these interventions. The main aim of this study was to examine the effects of a video and text version of a Web-based computer-tailored obesity prevention intervention on dietary intake, physical activity, and body mass index (BMI) among Dutch adults. A second study aim was to examine differences in appreciation between the video and text versions. The final study aim was to examine possible differences in intervention effects and appreciation per educational level. A three-armed randomized controlled trial was conducted with a baseline and 6-month follow-up measurement. The intervention consisted of six sessions, lasting about 15 minutes each. In the video version, the core tailored information was provided by means of videos. In the text version, the same tailored information was provided in text format. Outcome variables were self-reported and included BMI, physical activity, energy intake, and appreciation of the intervention. Multiple imputation was used to replace missing values. The effect analyses were carried out with multiple linear regression analyses and adjusted for confounders. The process evaluation data were analyzed with independent samples t tests. The baseline questionnaire was completed by 1419 participants and the 6-month follow-up measurement by 1015 participants (71.53%). No significant interaction effects of educational level were found on any of the outcome variables. Compared to the control condition, the video version resulted in lower BMI (B=-0.25, P=.049) and lower average daily energy intake from energy-dense food products (B=-175.58, P<.001), while the text version had an effect only on energy intake (B=-163.05, P=.001). No effects on physical activity were found. Moreover, the video version was rated significantly better than the text version on feelings of relatedness (P=.041), usefulness (P=.047), and grade given to the intervention (P=.018). The video version of the Web-based computer-tailored obesity prevention intervention was the most effective intervention and the most appreciated. Future research needs to examine if the effects are maintained in the long term and how the intervention can be optimized. Netherlands Trial Register: NTR3501; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3501 (Archived by WebCite at http://www.webcitation.org/6cBKIMaW1).
Sughimoto, Koichi; Takahara, Yoshiharu; Mogi, Kenji; Yamazaki, Kenji; Tsubota, Ken'ichi; Liang, Fuyou; Liu, Hao
2014-05-01
Aortic aneurysms may cause turbulence in the blood flow and result in energy loss, while grafting of the dilated aorta may ameliorate these hemodynamic disturbances, contributing to more efficient blood flow delivery. However, the energy efficiency of blood flow in an aortic aneurysm has been technically difficult to estimate and is not yet comprehensively understood. We devised a multiscale computational biomechanical model, introducing novel flow indices, to investigate a single male patient with multiple aortic aneurysms. Preoperative levels of wall shear stress and oscillatory shear index (OSI) were elevated but declined after staged grafting procedures: OSI decreased from 0.280 to 0.257 (first operation) and 0.221 (second operation). Grafting may strategically counter the loss of efficient blood delivery to improve hemodynamics of the aorta. The energy efficiency of blood flow also improved postoperatively. Novel indices, the pulsatile pressure index (PPI) and pulsatile energy loss index (PELI), were evaluated to characterize and quantify the energy loss of pulsatile blood flow. Mean PPI decreased from 0.445 to 0.423 (first operation) and 0.359 (second operation), respectively, while the preoperative PELI of 0.986 dropped to 0.820 and 0.831. Grafting contributed not only to ameliorating wall shear stress and oscillatory shear index but also to improving efficient blood flow. This patient-specific modeling will help in analyzing the mechanism of aortic aneurysm formation and may play an important role in quantifying the energy efficiency or loss in blood delivery.
VAX-11 Programs for Computing Available Potential Energy from CTD Data.
1981-08-01
the plots can be plotted as many times as desired. The use of the translators is described at the end of section 3. The multiple branch structure of...are listed later in this section, and short versions of them may be obtained on the terminal any time the program prompts the user for branch number...input, by typing 0/. Within each branch there may be options which are accessible by varying parameters input by the user at the time the branch
Reliability assessment of multiple quantum well avalanche photodiodes
NASA Technical Reports Server (NTRS)
Yun, Ilgu; Menkara, Hicham M.; Wang, Yang; Oguzman, Ismail H.; Kolnik, Jan; Brennan, Kevin F.; May, Gray S.; Wagner, Brent K.; Summers, Christopher J.
1995-01-01
The reliability of doped-barrier AlGaAs/GaAs multi-quantum well avalanche photodiodes fabricated by molecular beam epitaxy is investigated via accelerated life tests. Dark current and breakdown voltage were the parameters monitored. The activation energy of the degradation mechanism and the median device lifetime were determined. Device failure probability as a function of time was computed using the lognormal model. Analysis using the electron beam induced current method revealed the degradation to be caused by ionic impurities or contamination in the passivation layer.
Image method for induced surface charge from many-body system of dielectric spheres
NASA Astrophysics Data System (ADS)
Qin, Jian; de Pablo, Juan J.; Freed, Karl F.
2016-09-01
Charged dielectric spheres embedded in a dielectric medium provide the simplest model for many-body systems of polarizable ions and charged colloidal particles. We provide a multiple scattering formulation for the total electrostatic energy for such systems and demonstrate that the polarization energy can be rapidly evaluated by an image method that generalizes the image methods for conducting spheres. Individual contributions to the total electrostatic energy are ordered according to the number of polarized surfaces involved, and each additional surface polarization reduces the energy by a factor of (a/R)³ε, where a is the sphere radius, R the average inter-sphere separation, and ε the relevant dielectric mismatch at the interface. Explicit expressions are provided for both the energy and the forces acting on individual spheres, which can be readily implemented in Monte Carlo and molecular dynamics simulations of polarizable charged spheres, thereby avoiding costly computational techniques that introduce a surface charge distribution that requires numerical solution.
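The quoted factor makes the rapid convergence of the image expansion easy to check numerically; with made-up values a = 0.2 nm, R = 0.7 nm, and ε = 0.5 (placeholders, not from the paper):

    # per-surface suppression factor (a/R)^3 * eps from the abstract
    a, R, eps = 0.2, 0.7, 0.5     # hypothetical radius, separation, mismatch
    print((a / R)**3 * eps)       # ~0.012: each extra polarized surface is a ~1% correction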
Image method for induced surface charge from many-body system of dielectric spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Jian; de Pablo, Juan J.; Freed, Karl F.
2016-09-28
Charged dielectric spheres embedded in a dielectric medium provide the simplest model for many-body systems of polarizable ions and charged colloidal particles. We provide a multiple scattering formulation for the total electrostatic energy for such systems and demonstrate that the polarization energy can be rapidly evaluated by an image method that generalizes the image methods for conducting spheres. Individual contributions to the total electrostatic energy are ordered according to the number of polarized surfaces involved, and each additional surface polarization reduces the energy by a factor of (a/R)³ε, where a is the sphere radius, R the average inter-sphere separation, and ε the relevant dielectric mismatch at the interface. Explicit expressions are provided for both the energy and the forces acting on individual spheres, which can be readily implemented in Monte Carlo and molecular dynamics simulations of polarizable charged spheres, thereby avoiding costly computational techniques that introduce a surface charge distribution that requires numerical solution.
Harada, Ryuhei; Kitao, Akio
2011-07-14
The folding process for a β-hairpin miniprotein, chignolin, was investigated by free energy landscape (FEL) calculations using the recently proposed multiscale free energy landscape calculation method (MSFEL). First, coarse-grained molecular dynamics simulations searched a broad conformational space, then multiple independent, all-atom molecular dynamics simulations with explicit solvent determined the detailed local FEL using massively distributed computing. The combination of the two models enabled efficient calculation of the free energy landscapes. The MSFEL analysis showed that chignolin has an intermediate state as well as a misfolded state. The folding process is initiated by the formation of a β-hairpin turn, followed by the formation of contacts in the hydrophobic core between Tyr2 and Trp9. Furthermore, mutation of Tyr2 shifts the population to the misfolded conformation. The results indicate that the hydrophobic core plays an important role in stabilizing the native state of chignolin. © 2011 American Chemical Society
MAI statistics estimation and analysis in a DS-CDMA system
NASA Astrophysics Data System (ADS)
Alami Hassani, A.; Zouak, M.; Mrabti, M.; Abdi, F.
2018-05-01
A primary limitation of Direct Sequence Code Division Multiple Access (DS-CDMA) link performance and system capacity is multiple access interference (MAI). To examine the performance of CDMA systems in the presence of MAI, i.e., in a multiuser environment, several works assumed that the interference can be approximated by a Gaussian random variable. In this paper, we first develop a new and simple approach to characterize the MAI in a multiuser system. In addition to statistically quantifying the MAI power, the paper also proposes a statistical model for both the variance and mean of the MAI for synchronous and asynchronous CDMA transmission. We show that the MAI probability density function (PDF) is Gaussian for the equal-received-energy case and validate it by computer simulations.
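The Gaussian claim for the equal-received-energy case is straightforward to check by simulation. The NumPy sketch below uses a synchronous model with random ±1 signature chips (an illustrative setup, not the paper's exact system model) and compares the empirical MAI moments with the Gaussian expectation:

    import numpy as np

    rng = np.random.default_rng(1)
    N, K, trials = 64, 16, 50_000        # chips per bit, users, realizations
    # After despreading user 1, each of the K-1 interferers contributes
    # b_k * (1/N) * sum_j c1_j ck_j; the chip product of two independent
    # +/-1 sequences is itself a +/-1 sequence, so we draw it directly.
    mai = np.zeros(trials)
    for _ in range(K - 1):
        corr = rng.choice([-1, 1], size=(trials, N)).mean(axis=1)  # normalized correlation
        bits = rng.choice([-1, 1], size=trials)
        mai += corr * bits
    m, v = mai.mean(), mai.var()
    kurt = ((mai - m)**4).mean() / v**2
    print(f"mean={m:.4f}  var={v:.4f} (theory {(K - 1) / N:.4f})  excess kurtosis={kurt - 3:.3f}")

With equal received energies the variance matches (K−1)/N and the excess kurtosis sits near zero, i.e. the PDF is Gaussian to good approximation.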
NASA Astrophysics Data System (ADS)
Prakash, Priyanka; Sayyed-Ahmad, Abdallah; Cho, Kwang-Jin; Dolino, Drew M.; Chen, Wei; Li, Hongyang; Grant, Barry J.; Hancock, John F.; Gorfe, Alemayehu A.
2017-01-01
Recent studies found that membrane-bound K-Ras dimers are important for biological function. However, the structure and thermodynamic stability of these complexes remained unknown because they are hard to probe by conventional approaches. Combining data from a wide range of computational and experimental approaches, here we describe the structure, dynamics, energetics and mechanism of assembly of multiple K-Ras dimers. Utilizing a range of techniques for the detection of reactive surfaces, protein-protein docking and molecular simulations, we found that two largely polar and partially overlapping surfaces underlie the formation of multiple K-Ras dimers. For validation we used mutagenesis, electron microscopy and biochemical assays under non-denaturing conditions. We show that partial disruption of a predicted interface through charge reversal mutation of apposed residues reduces oligomerization while introduction of cysteines at these positions enhanced dimerization likely through the formation of an intermolecular disulfide bond. Free energy calculations indicated that K-Ras dimerization involves direct but weak protein-protein interactions in solution, consistent with the notion that dimerization is facilitated by membrane binding. Taken together, our atomically detailed analyses provide unique mechanistic insights into K-Ras dimer formation and membrane organization as well as the conformational fluctuations and equilibrium thermodynamics underlying these processes.
On Writing and Reading Artistic Computational Ecosystems.
Antunes, Rui Filipe; Leymarie, Frederic Fol; Latham, William
2015-01-01
We study the use of the generative systems known as computational ecosystems to convey artistic and narrative aims. These are virtual worlds running on computers, composed of agents that trade units of energy and emulate cycles of life and behaviors adapted from biological life forms. In this article we propose a conceptual framework in order to understand these systems, which are involved in processes of authorship and interpretation that this investigation analyzes in order to identify critical instruments for artistic exploration. We formulate a model of narrative that we call system stories (after Mitchell Whitelaw), characterized by the dynamic network of material and conceptual processes that define these artefacts. They account for narrative constellations with multiple agencies from which meaning and messages emerge. Finally, we present three case studies to explore the potential of this model within an artistic and generative domain, arguing that this understanding expands and enriches the palette of the language of these systems.
Computational study of a calcium release-activated calcium channel
NASA Astrophysics Data System (ADS)
Talukdar, Keka; Shantappa, Anil
2016-05-01
The naturally occurring proteins that form pores in membranes are commonly known as ion channels. They play multiple roles in many important biological processes. Deletion or alteration of these channels often leads to serious problems in physiological processes, as they control the flow of ions through them. The proper maintenance of the flow of ions, in turn, is required for normal health. Here we have investigated the behavior of a calcium release-activated calcium ion channel with PDB entry 4HKR in Drosophila melanogaster. Equilibrium energy calculations as well as molecular dynamics simulations are performed first. The protein is subjected to molecular dynamics simulation to find its energy-minimized structure. Simulation of the protein in an environment of water and ions has also given us important results. The solvation energy is also found using the CHARMM potential.
NASA Astrophysics Data System (ADS)
Wales, David J.
2018-04-01
Recent advances in the potential energy landscapes approach are highlighted, including both theoretical and computational contributions. Treating the high dimensionality of molecular and condensed matter systems of contemporary interest is important for understanding how emergent properties are encoded in the landscape and for calculating these properties while faithfully representing barriers between different morphologies. The pathways characterized in full dimensionality, which are used to construct kinetic transition networks, may prove useful in guiding such calculations. The energy landscape perspective has also produced new procedures for structure prediction and analysis of thermodynamic properties. Basin-hopping global optimization, with alternative acceptance criteria and generalizations to multiple metric spaces, has been used to treat systems ranging from biomolecules to nanoalloy clusters and condensed matter. This review also illustrates how all this methodology, developed in the context of chemical physics, can be transferred to landscapes defined by cost functions associated with machine learning.
Solution of hydrogen in accident tolerant fuel candidate material: U3Si2
NASA Astrophysics Data System (ADS)
Middleburgh, S. C.; Claisse, A.; Andersson, D. A.; Grimes, R. W.; Olsson, P.; Mašková, S.
2018-04-01
Hydrogen uptake and accommodation into U3Si2, a candidate accident-tolerant fuel system, has been modelled on the atomic scale using density functional theory. The solution energy of multiple H atoms is computed, reaching a stoichiometry of U3Si2H2, which has been experimentally observed in previous work (reported as U3Si2H1.8). The absorption of hydrogen is found to be favourable up to U3Si2H2 and the associated volume change is computed, closely matching experimental data. Entropic effects are considered to assess the dissociation temperature of H2, estimated to be at ∼800 K, again in good agreement with the experimentally observed transition temperature.
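The abstract does not spell out the energy expression; the conventional incremental solution energy for such DFT studies, assuming an H2 gas reference state (a standard definition, not necessarily the paper's exact one), is

    E_{\mathrm{sol}}(n) \;=\; E\!\left(\mathrm{U_3Si_2H}_n\right) \;-\; E\!\left(\mathrm{U_3Si_2H}_{n-1}\right) \;-\; \tfrac{1}{2}\,E\!\left(\mathrm{H_2}\right),

so negative values indicate favourable uptake, consistent with absorption being favourable up to U3Si2H2.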
Challenges in scaling NLO generators to leadership computers
NASA Astrophysics Data System (ADS)
Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.
2017-10-01
Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.
The Practical Obstacles of Data Transfer: Why researchers still love scp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully-slow single stream transfer methods such as scp to avoid the complexity of using multiple stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Molecular recognition of DNA by ligands: Roughness and complexity of the free energy profile
NASA Astrophysics Data System (ADS)
Zheng, Wenwei; Vargiu, Attilio Vittorio; Rohrdanz, Mary A.; Carloni, Paolo; Clementi, Cecilia
2013-10-01
Understanding the molecular mechanism by which probes and chemotherapeutic agents bind to nucleic acids is a fundamental issue in modern drug design. From a computational perspective, valuable insights are gained by the estimation of free energy landscapes as a function of some collective variables (CVs), which are associated with the molecular recognition event. Unfortunately the choice of CVs is highly non-trivial because of DNA's high flexibility and the presence of multiple association-dissociation events at different locations and/or sliding within the grooves. Here we have applied a modified version of Locally-Scaled Diffusion Map (LSDMap), a nonlinear dimensionality reduction technique for decoupling multiple-timescale dynamics in macromolecular systems, to a metadynamics-based free energy landscape calculated using a set of intuitive CVs. We investigated the binding of the organic drug anthramycin to a DNA 14-mer duplex. By performing an extensive set of metadynamics simulations, we observed sliding of anthramycin along the full-length DNA minor groove, as well as several detachments from multiple sites, including the one identified by X-ray crystallography. As in the case of equilibrium processes, the LSDMap analysis is able to extract the most relevant collective motions, which are associated with the slow processes within the system, i.e., ligand diffusion along the minor groove and dissociation from it. Thus, LSDMap in combination with metadynamics (and possibly every equivalent method) emerges as a powerful method to describe the energetics of ligand binding to DNA without resorting to intuitive ad hoc reaction coordinates.
Bu, Lintao; Beckham, Gregg T.; Shirts, Michael R.; Nimlos, Mark R.; Adney, William S.; Himmel, Michael E.; Crowley, Michael F.
2011-01-01
Understanding the enzymatic mechanism that cellulases employ to degrade cellulose is critical to efforts to efficiently utilize plant biomass as a sustainable energy resource. A key component of cellulase action on cellulose is product inhibition from monosaccharide and disaccharides in the product site of cellulase tunnel. The absolute binding free energy of cellobiose and glucose to the product site of the catalytic tunnel of the Family 7 cellobiohydrolase (Cel7A) of Trichoderma reesei (Hypocrea jecorina) was calculated using two different approaches: steered molecular dynamics (SMD) simulations and alchemical free energy perturbation molecular dynamics (FEP/MD) simulations. For the SMD approach, three methods based on Jarzynski's equality were used to construct the potential of mean force from multiple pulling trajectories. The calculated binding free energies, −14.4 kcal/mol using SMD and −11.2 kcal/mol using FEP/MD, are in good qualitative agreement. Analysis of the SMD pulling trajectories suggests that several protein residues (Arg-251, Asp-259, Asp-262, Trp-376, and Tyr-381) play key roles in cellobiose and glucose binding to the catalytic tunnel. Five mutations (R251A, D259A, D262A, W376A, and Y381A) were made computationally to measure the changes in free energy during the product expulsion process. The absolute binding free energies of cellobiose to the catalytic tunnel of these five mutants are −13.1, −6.0, −11.5, −7.5, and −8.8 kcal/mol, respectively. The results demonstrated that all of the mutants tested can lower the binding free energy of cellobiose, which provides potential applications in engineering the enzyme to accelerate the product expulsion process and improve the efficiency of biomass conversion. PMID:21454590
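The SMD route rests on Jarzynski's equality, ΔF = −kT ln⟨exp(−W/kT)⟩ averaged over pulling trajectories. A small NumPy sketch with synthetic Gaussian work values (stand-ins for the actual pulling work; for Gaussian work the exact answer is ⟨W⟩ − σ²/2kT, which the estimator should approach):

    import numpy as np

    kT = 0.593   # kcal/mol at ~298 K

    def jarzynski_dF(work, kT=kT):
        # dF = -kT ln < exp(-W/kT) >; shift by min(W) for numerical stability
        w = np.asarray(work)
        return w.min() - kT * np.log(np.mean(np.exp(-(w - w.min()) / kT)))

    rng = np.random.default_rng(0)
    W = rng.normal(20.0, 1.0, size=5000)    # synthetic pulling work, kcal/mol
    # Gaussian-work reference value: <W> - var(W)/(2 kT)
    print(jarzynski_dF(W), 20.0 - 1.0 / (2 * kT))

The exponential average is dominated by rare low-work trajectories, which is why multiple pulling trajectories (and, in the paper, several estimators built on Jarzynski's equality) are needed for a converged potential of mean force.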
Energy Efficiency Maximization of Practical Wireless Communication Systems
NASA Astrophysics Data System (ADS)
Eraslan, Eren
Energy consumption of the modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed in order to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry, however, they are capable of changing the transmission parameters, such as number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power. They can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on the link adaptation for energy efficiency maximization problem, which is defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performances of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real-time. It turns out that the brute force search, which is usually assumed in previous works, is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between the energy efficiency and transmit power, and prove that energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and take this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use higher number of spatial streams than the channel can support. These algorithms and our novel formulations provide simpler computations and limit the search space into a much smaller set; hence reducing the computational complexity by orders of magnitude without sacrificing the performance. The result of this work is a highly practical link adaptation protocol for maximizing the energy efficiency of modern wireless communication systems. Simulation results show orders of magnitude gain in the energy efficiency of the link. We also implemented the link adaptation protocol on real-time MIMO-OFDM radios and we report on the experimental results. To the best of our knowledge, this is the first reported testbed that is capable of performing energy-efficient fast link adaptation using PHY layer information.
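Quasiconcavity in transmit power is what licenses a bracketing search instead of scanning every power level. Below is a Python sketch of one such low-complexity search (golden-section) applied to an illustrative efficiency model, EE(p) = B log2(1 + gp/N0)/(Pc + p) with a fixed circuit power Pc; the model and all constants are placeholders, not the dissertation's:

    import numpy as np

    def golden_max(f, lo, hi, tol=1e-6):
        """Golden-section search for the peak of a single-peaked function."""
        phi = (np.sqrt(5) - 1) / 2
        a, b = lo, hi
        c, d = b - phi * (b - a), a + phi * (b - a)
        while b - a > tol:
            if f(c) > f(d):
                b, d = d, c
                c = b - phi * (b - a)
            else:
                a, c = c, d
                d = a + phi * (b - a)
        return 0.5 * (a + b)

    # Illustrative EE model: goodput per unit power, quasiconcave in p
    B, g, N0, Pc = 1.0, 2.0, 1.0, 0.5
    ee = lambda p: B * np.log2(1 + g * p / N0) / (Pc + p)
    p_star = golden_max(ee, 1e-6, 100.0)
    print(p_star, ee(p_star))

Because the function is single-peaked, each iteration discards a fixed fraction of the interval, so the optimal power is located in logarithmically many evaluations rather than one per candidate power level.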
SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Tian, Z; Pompos, A
2016-06-15
Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculation of absorbed dose and fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at achieving accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supported transport simulation in voxelized geometry with kinetic energy up to 450 MeV/u. A Class II condensed history algorithm was employed for charged particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross section and yield data from Geant4. They were transported via the condensed history scheme. goCMC supported scoring various quantities of interest, e.g., physical dose, particle fluence, spectrum, linear energy transfer, and positron emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20∼100 seconds to simulate 10⁷ carbons on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60∼100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
Fernández-Alvira, Juan M; te Velde, Saskia J; De Bourdeaudhuij, Ilse; Bere, Elling; Manios, Yannis; Kovacs, Eva; Jan, Natasa; Brug, Johannes; Moreno, Luis A
2013-06-21
It is well known that the prevalence of overweight and obesity is considerably higher among youth from lower socio-economic families, but there is little information about the role of some energy balance-related behaviors in the association between socio-economic status and childhood overweight and obesity. The objective of this paper was to assess the possible mediating role of energy balance-related behaviors in the association between parental education and children's body composition. Data were obtained from the cross-sectional study of the "EuropeaN Energy balance Research to prevent excessive weight Gain among Youth" (ENERGY) project. 2121 boys and 2516 girls aged 10 to 12 from Belgium, Greece, Hungary, the Netherlands, Norway, Slovenia and Spain were included in the analyses. Data were obtained via questionnaires assessing obesity-related dietary, physical activity and sedentary behaviors, and objectively measured basic anthropometric indicators (weight, height, waist circumference). The possible mediating effect of sugared drinks intake, breakfast consumption, active transportation to school, sports participation, TV viewing, computer use and sleep duration in the association between parental education and children's body composition was explored via MacKinnon's product-of-coefficients test in single and multiple mediation models. Two different body composition indicators were included in the models, namely Body Mass Index and waist circumference. The association between parental education and children's body composition was partially mediated by breakfast consumption, sports participation, TV viewing and computer use. Additionally, a suppression effect was found for sugared drinks intake. No mediation effect was found for active transportation and sleep duration. The significant mediators explained a higher proportion of the association between parental education and waist circumference compared to the association between parental education and BMI. Tailored overweight and obesity prevention strategies in low-SES preadolescent populations should incorporate specific messages focusing on the importance of encouraging daily breakfast consumption, increasing sports participation and decreasing TV viewing and computer use. However, longitudinal research to support these findings is needed.
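The product-of-coefficients logic is compact: path a (education → behavior) times path b (behavior → body composition, adjusting for education) estimates the indirect effect. A NumPy sketch on synthetic data using the first-order (Sobel-type) variance; the variable roles and effect sizes are invented for illustration:

    import numpy as np

    def ols(X, y):
        """OLS coefficients and standard errors (X includes an intercept column)."""
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        return beta, np.sqrt(np.diag(sigma2 * XtX_inv))

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)                       # stand-in: parental education
    m = -0.4 * x + rng.normal(size=n)            # stand-in mediator: TV viewing
    y = 0.3 * m + 0.1 * x + rng.normal(size=n)   # stand-in outcome: BMI

    ones = np.ones(n)
    beta_a, se_a = ols(np.column_stack([ones, x]), m)      # path a: X -> M
    beta_b, se_b = ols(np.column_stack([ones, m, x]), y)   # path b: M -> Y given X
    a, sa = beta_a[1], se_a[1]
    b, sb = beta_b[1], se_b[1]
    ab = a * b                                             # indirect (mediated) effect
    se_ab = np.sqrt(a**2 * sb**2 + b**2 * sa**2)           # first-order variance
    print(f"indirect effect = {ab:.3f}, z = {ab / se_ab:.1f}")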
Xu, Qun; Wang, Xianchao; Xu, Chao
2017-06-01
Multiplication on traditional electronic computers suffers from limited calculation accuracy and long computation delays. To overcome these problems, a modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Its parallel algorithm and optimization techniques are also studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and fewer calculating delays.
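The abstract does not reproduce the transformation tables, so the Python sketch below shows one standard carry-free signed-digit scheme (digits −1/0/1) and the shift-and-add multiplication built on it; the encoding rules here are a generic choice and not necessarily the paper's M transformations:

    import random

    def msd_to_int(digits):
        # digits are least-significant first, each in {-1, 0, 1}
        return sum(d * (1 << i) for i, d in enumerate(digits))

    def msd_add(a, b):
        # Carry-free addition in two local passes: each digit pair is rewritten
        # as 2*t + w, picking the encoding of +/-1 from the sign of the next
        # lower pair so the final sum w + (incoming t) never overflows.
        n = max(len(a), len(b)) + 1
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        t, w = [0] * (n + 1), [0] * (n + 1)
        for i in range(n):
            p = a[i] + b[i]
            prev = a[i - 1] + b[i - 1] if i > 0 else 0
            if p == 2:
                t[i + 1], w[i] = 1, 0
            elif p == -2:
                t[i + 1], w[i] = -1, 0
            elif p == 1:
                t[i + 1], w[i] = (1, -1) if prev >= 0 else (0, 1)
            elif p == -1:
                t[i + 1], w[i] = (0, -1) if prev >= 0 else (-1, 1)
        return [w[i] + t[i] for i in range(n + 1)]

    def msd_mul(a, b):
        # Shift-and-add; with a carry-free adder the partial products could be
        # combined in a parallel tree, which is the optical processor's advantage.
        acc = [0]
        for i, bi in enumerate(b):
            if bi:
                acc = msd_add(acc, [0] * i + [bi * d for d in a])
        return acc

    for _ in range(1000):   # self-check against ordinary integer arithmetic
        x = [random.choice([-1, 0, 1]) for _ in range(8)]
        y = [random.choice([-1, 0, 1]) for _ in range(8)]
        assert msd_to_int(msd_add(x, y)) == msd_to_int(x) + msd_to_int(y)
        assert msd_to_int(msd_mul(x, y)) == msd_to_int(x) * msd_to_int(y)

Because the addition is carry-free, every digit of the sum is produced by two local passes regardless of word length, which is what allows the partial products to be reduced in constant-depth parallel steps.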
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
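For concreteness, here is a hedged NumPy sketch of matching pursuit with two of the speedups named above, correlation thresholding and multiple-atom extraction per iteration; the random dictionary, threshold, and sizes are illustrative, and this is not the MPD++ source:

    import numpy as np

    def mpd(signal, dictionary, n_iter=50, corr_thresh=0.0, atoms_per_iter=1):
        D = dictionary / np.linalg.norm(dictionary, axis=0)   # unit-norm atoms
        r = signal.astype(float).copy()
        picks = []
        for _ in range(n_iter):
            c = D.T @ r                                       # all cross-correlations
            best = np.argsort(-np.abs(c))[:atoms_per_iter]    # top candidates
            extracted = False
            for k in best:
                coef = D[:, k] @ r        # re-correlate against the current residual
                if abs(coef) <= corr_thresh:                  # prune weak atoms
                    continue
                r -= coef * D[:, k]
                picks.append((int(k), coef))
                extracted = True
            if not extracted:             # stopping criterion: nothing significant left
                break
        return picks, r

    # Toy usage: recover three planted atoms from their mixture.
    rng = np.random.default_rng(0)
    D = rng.normal(size=(256, 512))
    Dn = D / np.linalg.norm(D, axis=0)
    x = 2.0 * Dn[:, 3] - 1.5 * Dn[:, 40] + 0.8 * Dn[:, 100]
    picks, resid = mpd(x, D, n_iter=10, corr_thresh=0.1, atoms_per_iter=2)
    print(picks, float(np.linalg.norm(resid)))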
On the Mechanism and Rate of Spontaneous Decomposition of Amino Acids
Alexandrova, Anastassia N.; Jorgensen, William L.
2011-01-01
Spontaneous decarboxylation of amino acids is among the slowest known reactions; it is much less facile than the cleavage of amide bonds in polypeptides. Establishment of the kinetics and mechanisms for this fundamental reaction is important for gauging the proficiency of enzymes. In the present study, multiple mechanisms for glycine decomposition in water are explored using QM/MM Monte Carlo simulations and free energy perturbation theory. Simple CO2 detachment emerges as the preferred pathway for decarboxylation; it is followed by water-assisted proton transfer to yield the products, CO2 and methylamine. The computed free energy of activation of 45 kcal/mol, and the resulting rate constant of 1 × 10⁻²¹ s⁻¹, can be compared with an extrapolated experimental rate constant of ∼2 × 10⁻¹⁷ s⁻¹ at 25 °C. The half-life for the reaction is more than 1 billion years. Furthermore, examination of deamination finds simple NH3-detachment yielding α-lactone to be the favored route, though it is less facile than decarboxylation by kcal/mol. Ab initio and DFT calculations with the CPCM hydration model were also carried out for the reactions; the computed free energies of activation for glycine decarboxylation agree with the QM/MM result, while deamination is predicted to be more favorable. QM/MM calculations were also performed for decarboxylation of alanine; the computed barrier is 2 kcal/mol higher than for glycine in qualitative accord with experiment. PMID:21995727
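The quoted rate constant follows from transition-state theory: assuming the standard Eyring form with a transmission coefficient of unity,

    k \;=\; \frac{k_{\mathrm B}T}{h}\,e^{-\Delta G^{\ddagger}/RT} \;=\; \left(6.2\times10^{12}\,\mathrm{s^{-1}}\right) e^{-45000/(1.987\times298)} \;\approx\; 6\times10^{-21}\,\mathrm{s^{-1}},

the same order of magnitude as the 1 × 10⁻²¹ s⁻¹ reported above.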
Assessing the Multiple Benefits of Clean Energy: A Resource for States
Clean energy provides multiple benefits. The Multiple Benefits Guide provides an overview of the environmental, energy system and economic benefits of clean energy, specifically energy efficiency, renewable energy and clean distributed generation, and why it is important to thin...
Design and Delivery of Multiple Server-Side Computer Languages Course
ERIC Educational Resources Information Center
Wang, Shouhong; Wang, Hai
2011-01-01
Given the emergence of service-oriented architecture, IS students need to be knowledgeable of multiple server-side computer programming languages to be able to meet the needs of the job market. This paper outlines the pedagogy of an innovative course of multiple server-side computer languages for the undergraduate IS majors. The paper discusses…
Spintronic Nanodevices for Bioinspired Computing
Grollier, Julie; Querlioz, Damien; Stiles, Mark D.
2016-01-01
Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control for biomedical prosthesis. However, one of the major challenges of fabricating bioinspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von-Neumann bottleneck arising when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal–oxide–semiconductor (CMOS) bioinspired hardware. PMID:27881881
A new piezoelectric energy harvesting design concept: multimodal energy harvesting skin.
Lee, Soobum; Youn, Byeng D
2011-03-01
This paper presents an advanced design concept for piezoelectric energy harvesting (EH), referred to as multimodal EH skin. This EH design facilitates the use of multimodal vibration and enhances power harvesting efficiency. The multimodal EH skin is an extension of our previous work, EH skin, which was an innovative design paradigm for a piezoelectric energy harvester: a vibrating skin structure and an additional thin piezoelectric layer in one device. A computational (finite element) model of the multilayered assembly - the vibrating skin structure and piezoelectric layer - is constructed and the optimal topology and/or shape of the piezoelectric layer is found for maximum power generation from multiple vibration modes. A two-step design rationale for the multimodal EH skin is proposed: design of the piezoelectric material distribution and of the external resistors. In the material design step, the piezoelectric material is segmented by inflection lines from the multiple vibration modes of interest to minimize voltage cancellation. The inflection lines are detected using the voltage phase. In the external resistor design step, the resistor values are found for each segment to maximize power output. The presented design concept, which can be applied to any engineering system with multimodal harmonic-vibrating skins, was applied to two case studies: an aircraft skin and a power transformer panel. The excellent performance of multimodal EH skin was demonstrated, showing larger power generation than EH skin without segmentation or unimodal EH skin.
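As background for the external-resistor design step, here is a minimal sketch of load matching for a single piezoelectric segment, modeled as a sinusoidal current source in parallel with its clamped capacitance Cp; all numerical values are hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative resistor sizing for one piezoelectric segment, modeled as a
# sinusoidal current source I0 in parallel with clamped capacitance Cp
# (values are hypothetical, not from the paper).
f  = 120.0            # Hz, assumed dominant vibration mode
w  = 2 * np.pi * f
Cp = 50e-9            # F
I0 = 1e-4             # A (amplitude)

R = np.logspace(3, 7, 400)                   # candidate load resistors
P = 0.5 * I0**2 * R / (1 + (w * R * Cp)**2)  # average delivered power

R_opt = R[np.argmax(P)]
print(f"optimal load ~ {R_opt:.3g} ohm (theory: 1/(w*Cp) = {1/(w*Cp):.3g})")
```

For this simple source model the optimum falls at R = 1/(omega * Cp), which is why each segment generally needs its own resistor value.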
Trampoline-related injuries in children: a preliminary biomechanical model of multiple users.
Menelaws, Simon; Bogacz, Andrew R; Drew, Tim; Paterson, Brodie C
2011-07-01
The recent popularity of domestic trampolines has seen a corresponding increase in injured children. Most injuries happen on the trampoline mat when there are multiple users present. This study sought to examine and simulate the forces and energy transferred to a child's limbs when trampolining with another person of greater mass. The study used a computational biomechanical model. The simulation demonstrated that when two masses bounce out of phase on a trampoline, a transfer of kinetic energy from the larger mass to the smaller mass is likely to occur. It predicted that when an 80 kg adult is on a trampoline with a 25 kg child, the energy transfer is equivalent to the child falling 2.8 m onto a solid surface. Additionally, the rate of loading on the child's bones and ligaments is greater than that on the accompanying adult. Current guidelines are clear that more than one user on a trampoline at a time is a risk factor for serious injury; however, the majority of injuries happen in this scenario. The model predicted that there are high energy transfers resulting in serious fracture and ligamentous injuries to children and that this could be equated to equivalent fall heights. This provides a clear take-home message, which can be conveyed to parents to reduce the incidence of trampoline-related injuries.
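The equivalent-fall-height framing can be checked with one line of mechanics: the masses and the 2.8 m figure below come from the abstract, and the energy is back-computed as E = mgh rather than taken from the model itself.

```python
# Equivalent-fall-height arithmetic used to communicate trampoline risk.
# Masses are from the abstract; the transferred energy is back-computed
# from the quoted 2.8 m equivalent fall, not an independent model output.
g = 9.81          # m/s^2
m_child = 25.0    # kg
h_equiv = 2.8     # m, equivalent solid-surface fall quoted in the abstract

E_transfer = m_child * g * h_equiv   # J of kinetic energy gained
print(f"energy transferred to child ~ {E_transfer:.0f} J")  # ~687 J

# Inverse relation: any predicted energy transfer E maps to h = E/(m*g).
E = 500.0
print(f"{E:.0f} J corresponds to a fall of {E/(m_child*g):.2f} m")
```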
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10² random numbers per ray incident on a detector pixel instead of an estimated 10⁸ random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
Kuchenbecker, Walter K H; Groen, Henk; Zijlstra, Tineke M; Bolster, Johanna H T; Slart, Riemer H J; van der Jagt, Erik J; Kobold, Anneke C Muller; Wolffenbuttel, Bruce H R; Land, Jolande A; Hoek, Annemieke
2010-05-01
Abdominal fat contributes to anovulation. We compared body fat distribution measurements and their contribution to anovulation in obese ovulatory and anovulatory infertile women. Seventeen ovulatory and 40 anovulatory women (age, 30 ± 4 yr; body mass index, 37.7 ± 6.1 kg/m²) participated. Body fat distribution was measured by anthropometrics, dual-energy x-ray absorptiometry, and single-sliced abdominal computed tomography scan. Multiple logistic regression analysis was applied to determine which fat compartments significantly contributed to anovulation. Anovulatory women had a higher waist circumference (113 ± 11 vs. 104 ± 9 cm; P < 0.01) and significantly more trunk fat (23.0 ± 5.3 vs. 19.1 ± 4.2 kg; P < 0.01) and abdominal fat (4.4 ± 1.3 kg vs. 3.5 ± 0.9 kg; P < 0.05) on dual-energy x-ray absorptiometry scan than ovulatory women despite similar body mass index. The volume of intraabdominal fat on single-sliced abdominal computed tomography scan was not significantly different between the two groups (203 ± 56 vs. 195 ± 71 cm³; P = 0.65), but anovulatory women had significantly more sc abdominal fat (SAF) (992 ± 198 vs. 864 ± 146 cm³; P < 0.05). After multiple logistic regression analysis, only trunk fat, abdominal fat, and SAF were associated with anovulation. Abdominal fat is increased in anovulatory women due to a significant increase in SAF and not in intraabdominal fat. SAF and especially abdominal and trunk fat accumulation are associated with anovulation.
Complex-valued Multidirectional Associative Memory
NASA Astrophysics Data System (ADS)
Kobayashi, Masaki; Yamazaki, Haruaki
The Hopfield model is a representative associative memory. It was extended to the Bidirectional Associative Memory (BAM) by Kosko and to the Multidirectional Associative Memory (MAM) by Hagiwara. These networks have two or more layers, and since they have symmetric connections between layers, they are guaranteed to converge. MAM can store tuples of patterns, such as (x1, x2,…), where xm is the pattern on layer m. Noest, Hirose and Nemoto proposed the complex-valued Hopfield model. Lee proposed a complex-valued Bidirectional Associative Memory. Zemel proved the rotation invariance of the complex-valued Hopfield model, meaning that rotated versions of stored patterns are also stored. In this paper, a complex-valued Multidirectional Associative Memory is proposed, and its rotation invariance is proved. Moreover, it is shown by computer simulation that differences between the angles of the given patterns are automatically reduced. First we define the complex-valued Multidirectional Associative Memory. Then we define the energy function of the network and use it to prove that the network is guaranteed to converge. Next, we define the learning rule and characterize the recall process, in which the differences between the angles of the given patterns are automatically reduced. In particular, we prove the following theorem: in the case that only one tuple of patterns is stored, if patterns with different angles are given to each layer, the differences are automatically reduced. Finally, we investigate how the differences of angles influence noise robustness: they reduce it, because the input to each layer becomes small. We show this by computer simulations.
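A minimal sketch of a complex-valued (phasor) associative memory may make the construction concrete. For brevity it shows the two-layer (bidirectional) special case rather than the full multidirectional network; states are K-th roots of unity, weights are Hebbian outer products, and all sizes are arbitrary.

```python
import numpy as np

# Minimal complex-valued (phasor) associative memory: the two-layer (BAM)
# special case of the multidirectional network, which keeps the sketch short.
K = 8                                   # number of phase states
rng = np.random.default_rng(0)

def random_pattern(n):
    return np.exp(2j * np.pi * rng.integers(0, K, n) / K)

def quantize(z):
    # Snap each component to the nearest K-th root of unity.
    phase = np.round(np.angle(z) / (2 * np.pi / K)) * (2 * np.pi / K)
    return np.exp(1j * phase)

nx, ny, patterns = 32, 24, 3
X = np.array([random_pattern(nx) for _ in range(patterns)])
Y = np.array([random_pattern(ny) for _ in range(patterns)])
W = sum(np.outer(y, x.conj()) for x, y in zip(X, Y))   # layer-x -> layer-y

# Recall from a noisy version of the first stored x-pattern.
x = quantize(X[0] * np.exp(1j * rng.normal(0, 0.2, nx)))
for _ in range(5):                      # alternate layer updates until stable
    y = quantize(W @ x)
    x = quantize(W.conj().T @ y)

print("overlap with stored pattern:", abs(np.vdot(x, X[0])) / nx)
```

Note that if x is a stored pattern, any globally rotated version exp(i*theta)*x gives the same dynamics up to that rotation, which is the rotation-invariance property discussed in the abstract.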
NASA Astrophysics Data System (ADS)
Zhu, Yanlong; Hamlow, Lucas; He, Chenchen; Gao, Juehan; Oomens, Jos; Rodgers, M. T.
2016-06-01
The local structures of DNA and RNA are influenced by protonation, deprotonation and noncovalent interactions with cations. In order to determine the effects of Na+ cationization on the gas-phase structures of 2'-deoxycytidine, [dCyd+Na]+, and cytidine, [Cyd+Na]+, infrared multiple photon dissociation (IRMPD) action spectra of these sodium cationized nucleosides are measured over the range extending from 500 to 1850 cm−1 using the FELIX free electron laser. Complementary electronic structure calculations are performed to determine the stable low-energy conformations of these complexes. Geometry optimizations, frequency analyses, and IR spectra of these species are determined at the B3LYP/6-311+G(d,p) level of theory. Single-point energies are calculated at the B3LYP/6-311+G(2d,2p) level of theory to determine the relative stabilities of these conformations. Comparison of the measured IRMPD action spectra and computed linear IR spectra enables the conformations accessed in the experiments to be elucidated. For both cytosine nucleosides, tridentate binding of the Na+ cation to the O2, O4' and O5' atoms of the nucleobase and sugar is observed. Present results for the sodium cationized nucleosides are compared to results for the analogous protonated forms of these nucleosides to elucidate the effects of multiple chelating interactions with the sodium cation vs. hydrogen bonding interactions in the protonated systems on the structures and stabilities of these nucleosides.
Low rank approximation in G0W0 calculations
Shao, MeiYue; Lin, Lin; Yang, Chao; ...
2016-06-04
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
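A generic illustration of the kind of compression involved: truncated-SVD low-rank approximation of a matrix, standing in for the frequency-dependent part of W0. This is not the paper's actual construction, just the underlying linear-algebra idea.

```python
import numpy as np

# Generic low-rank compression via truncated SVD, the same kind of
# approximation the paper applies to the frequency-dependent part of W0
# (an illustration, not the paper's actual construction).
rng = np.random.default_rng(1)
n, true_rank = 400, 12
A = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))
A += 1e-6 * rng.standard_normal((n, n))      # small full-rank noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.searchsorted(-s, -1e-3 * s[0]))   # keep sigma_i > 1e-3 * sigma_0
A_k = (U[:, :k] * s[:k]) @ Vt[:k]            # rank-k approximation

err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"kept rank {k} of {n}; relative Frobenius error {err:.2e}")
# Applying A_k costs O(n*k) per vector instead of O(n^2), which is the
# source of the savings when W0 must be applied at many frequencies.
```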
NASA Astrophysics Data System (ADS)
Ficnar, Andrej
In this dissertation we study the phenomenon of jet quenching in quark-gluon plasma using the AdS/CFT correspondence. We start with a weakly coupled, perturbative QCD approach to energy loss, and present a Monte Carlo code for computation of the DGLV radiative energy loss of quarks and gluons at an arbitrary order in opacity. We use the code to compute the radiated gluon distribution up to n=9 order in opacity, and compare it to the thin plasma (n=1) and the multiple soft scattering (n=infinity) approximations. We furthermore show that the gluon distribution at finite opacity depends in detail on the screening mass mu and the mean free path lambda. In the next part, we turn to the studies of how heavy quarks, represented as "trailing strings" in AdS/CFT, lose energy in a strongly coupled plasma. We study how the heavy quark energy loss gets modified in a "bottom-up" non-conformal holographic model, constructed to reproduce some properties of QCD at finite temperature and constrained by fitting the lattice gauge theory results. The energy loss of heavy quarks is found to be strongly sensitive to the medium properties. We use this model to compute the nuclear modification factor RAA of charm and bottom quarks in an expanding plasma with Glauber initial conditions, and comment on the range of validity of the model. The central part of this thesis is the energy loss of light quarks in a strongly coupled plasma. Using the standard model of "falling strings", we present an analytic derivation of the stopping distance of light quarks, previously available only through numerical simulations, and also apply it to the case of Gauss-Bonnet higher derivative gravity. We then present a general formula for computing the instantaneous energy loss in non-stationary string configurations. Application of this formula to the case of falling strings reveals interesting phenomenology, including a modified Bragg-like peak at late times and an approximately linear path dependence. Based on these results, we develop a phenomenological model of light quark energy loss and use it to compute the nuclear modification factor RAA of light quarks in an expanding plasma. Comparison with the LHC pion suppression data shows that, although RAA has the right qualitative structure, the overall magnitude is too low, indicating that the predicted jet quenching is too strong. In the last part of the thesis we consider a novel idea of introducing finite momentum at endpoints of classical (bosonic and supersymmetric) strings, and the phenomenological consequences of this proposal on the energy loss of light quarks. We show that in a general curved background, finite momentum endpoints must propagate along null geodesics and that the distance they travel in an AdS5-Schwarzschild background is greater than in the previous treatments of falling strings. We also argue that this leads to a more realistic description of energetic quarks, allowing for an unambiguous way of distinguishing between the energy in the dual hard probe and the energy in the color fields surrounding it. This proposal also naturally allows for a clear and simple definition of the instantaneous energy loss. Using this definition and the "shooting string" initial conditions, we develop a new formula for light quark energy loss.
Finally, we apply this formula to compute the nuclear modification factor RAA of light hadrons at RHIC and LHC, which, after the inclusion of the Gauss-Bonnet quadratic curvature corrections to the AdS5 geometry, shows a reasonably good agreement with the recent data.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
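A numpy sketch of the NSMC idea under the usual linearized view: the SVD of the sensitivity Jacobian splits parameter space into solution and null spaces, and only null-space components are randomized, so every ensemble member preserves the calibrated fit to first order. The Jacobian and calibrated parameter set below are synthetic stand-ins for a real model.

```python
import numpy as np

# Minimal null-space Monte Carlo (NSMC) sketch: decompose parameter space
# with the SVD of the sensitivity Jacobian J, keep the calibrated
# solution-space component fixed, and randomize only the null space.
rng = np.random.default_rng(2)
n_obs, n_par = 40, 120                  # far more parameters than data
J = rng.standard_normal((n_obs, n_par)) # sensitivities d(obs)/d(param)
p_cal = rng.standard_normal(n_par)      # single calibrated parameter set

U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = int(np.sum(s > 1e-8 * s[0]))        # effective solution-space dimension
V_sol, V_null = Vt[:k].T, Vt[k:].T

ensemble = []
for _ in range(100):
    xi = rng.standard_normal(n_par - k)          # random null-space weights
    p = V_sol @ (V_sol.T @ p_cal) + V_null @ xi  # fit-preserving parameters
    ensemble.append(p)

# Every member reproduces the calibrated fit to first order:
print(np.allclose(J @ ensemble[0],
                  J @ (V_sol @ (V_sol.T @ p_cal)), atol=1e-8))
```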
Avalanches and plastic flow in crystal plasticity: an overview
NASA Astrophysics Data System (ADS)
Papanikolaou, Stefanos; Cui, Yinan; Ghoniem, Nasr
2018-01-01
Crystal plasticity is mediated through dislocations, which form knotted configurations in a complex energy landscape. Once they disentangle and move, they may also be impeded by permanent obstacles with finite energy barriers or frustrating long-range interactions. The outcome of such complexity is the emergence of dislocation avalanches as the basic mechanism of plastic flow in solids at the nanoscale. While the deformation behavior of bulk materials appears smooth, a predictive model should clearly be based upon the character of these dislocation avalanches and their associated strain bursts. We provide here a comprehensive overview of experimental observations, theoretical models and computational approaches that have been developed to unravel the multiple aspects of dislocation avalanche physics and the phenomena leading to strain bursts in crystal plasticity.
Conformational Sampling of a Biomolecular Rugged Energy Landscape.
Rydzewski, Jakub; Jakubowski, Rafal; Nicosia, Giuseppe; Nowak, Wieslaw
2018-01-01
Protein structure refinement using conformational sampling is an important problem in protein studies. In this paper, we examined protein structure refinement by means of potential energy minimization using immune computing as a method of sampling conformations. The method was tested on the x-ray structure and 30 decoys of the mutant of [Leu]Enkephalin, a paradigmatic example of the biomolecular multiple-minima problem. In order to score the refined conformations, we used a standard potential energy function with the OPLSAA force field. The effectiveness of the search was assessed using a variety of methods. The robustness of sampling was checked by the energy yield function, which quantitatively measures the number of peptide decoys residing in an energetic funnel. Furthermore, the potential energy-dependent Pareto fronts were calculated to elucidate dissimilarities between peptide conformations and the native state as observed by x-ray crystallography. Our results showed that the probed potential energy landscape of [Leu]Enkephalin is self-similar on different metric scales and that the local potential energy minima of the peptide decoys are metastable; thus, they can be refined to conformations whose potential energy is decreased by approximately 250 kJ/mol.
Design of nucleic acid sequences for DNA computing based on a thermodynamic approach
Tanaka, Fumiaki; Kameda, Atsushi; Yamamoto, Masahito; Ohuchi, Azuma
2005-01-01
We have developed an algorithm for designing multiple sequences of nucleic acids that have a uniform melting temperature between the sequence and its complement and that do not hybridize non-specifically with each other based on the minimum free energy (ΔGmin). Sequences that satisfy these constraints can be utilized in computations, various engineering applications such as microarrays, and nano-fabrications. Our algorithm is a random generate-and-test algorithm: it generates a candidate sequence randomly and tests whether the sequence satisfies the constraints. The novelty of our algorithm is that the filtering method uses a greedy search to calculate ΔGmin. This effectively excludes inappropriate sequences before ΔGmin is calculated, thereby reducing computation time drastically when compared with an algorithm without the filtering. Experimental results in silico showed the superiority of the greedy search over the traditional approach based on the Hamming distance. In addition, experimental results in vitro demonstrated that the experimental free energy (ΔGexp) of 126 sequences correlated better with ΔGmin (|R| = 0.90) than with the Hamming distance (|R| = 0.80). These results validate the rationality of a thermodynamic approach. We implemented our algorithm in a graphic user interface-based program written in Java. PMID:15701762
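A hedged sketch of the random generate-and-test loop described above. The Tm and cross-hybridization scores are crude placeholders (Wallace-rule Tm, common-substring filter) rather than the paper's nearest-neighbor thermodynamic model, but the control flow (cheap filters first, expensive ΔGmin check last) is the point.

```python
import random

# Random generate-and-test sequence design: cheap filters run first, and
# only survivors would reach the expensive free-energy check. The scoring
# functions below are simplistic placeholders, not the paper's model.
BASES = "ACGT"

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def crude_tm(seq):
    # Wallace rule, a rough stand-in for a nearest-neighbor Tm model.
    return 2 * (seq.count("A") + seq.count("T")) + \
           4 * (seq.count("G") + seq.count("C"))

def cross_hybridizes(seq, accepted, max_run=6):
    # Placeholder pairwise filter: reject if seq shares a long common
    # substring with any accepted sequence (stand-in for a dG_min check).
    for other in accepted:
        for i in range(len(seq) - max_run + 1):
            if seq[i:i + max_run] in other:
                return True
    return False

accepted, target_n, length = [], 10, 20
while len(accepted) < target_n:
    cand = "".join(random.choice(BASES) for _ in range(length))
    if not 0.4 <= gc_fraction(cand) <= 0.6:
        continue                      # uniform-Tm proxy filter
    if abs(crude_tm(cand) - 60) > 4:
        continue
    if cross_hybridizes(cand, accepted):
        continue                      # expensive dG_min check would go here
    accepted.append(cand)

print("\n".join(accepted))
```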
Mechanical collapse of confined fluid membrane vesicles.
Rim, Jee E; Purohit, Prashant K; Klug, William S
2014-11-01
Compact cylindrical and spherical invaginations are common structural motifs found in cellular and developmental biology. To understand the basic physical mechanisms that produce and maintain such structures, we present here a simple model of vesicles in confinement, in which mechanical equilibrium configurations are computed by energy minimization, balancing the effects of curvature elasticity, contact of the membrane with itself and the confining geometry, and adhesion. For cylindrical confinement, the shape equations are solved both analytically and numerically by finite element analysis. For spherical confinement, axisymmetric configurations are obtained numerically. We find that the geometry of invaginations is controlled by a dimensionless ratio of the adhesion strength to the bending energy of an equal area spherical vesicle. Larger adhesion produces more concentrated curvatures, which are mainly localized to the "neck" region where the invagination breaks away from its confining container. Under spherical confinement, axisymmetric invaginations are approximately spherical. For extreme confinement, multiple invaginations may form, bifurcating along multiple equilibrium branches. The results of the model are useful for understanding the physical mechanisms controlling the structure of lipid membranes of cells and their organelles, and developing tissue membranes.
Computational analysis of vertical axis wind turbine arrays
NASA Astrophysics Data System (ADS)
Bremseth, J.; Duraisamy, K.
2016-10-01
Canonical problems involving single, pairs, and arrays of vertical axis wind turbines (VAWTs) are investigated numerically with the objective of understanding the underlying flow structures and their implications on energy production. Experimental studies by Dabiri (J Renew Sustain Energy 3, 2011) suggest that VAWTs demand less stringent spacing requirements than their horizontal axis counterparts and additional benefits may be obtained by optimizing the placement and rotational direction of VAWTs. The flowfield of pairs of co-/counter-rotating VAWTs shows some similarities with pairs of cylinders in terms of wake structure and vortex shedding. When multiple VAWTs are placed in a column, the extent of the wake is seen to spread further downstream, irrespective of the direction of rotation of individual turbines. However, the aerodynamic interference between turbines gives rise to regions of excess momentum between the turbines which lead to significant power augmentations. Studies of VAWTs arranged in multiple columns show that the downstream columns can actually be more efficient than the leading column, a proposition that could lead to radical improvements in wind farm productivity.
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents an optimal production and distribution management system for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management problem is formulated as a mixed integer linear programming (MILP) problem where the objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. Piecewise linear formulation of the production cost function and stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show the increase of energy efficiency due to the introduction of the present optimal management system.
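A toy version of such a dispatch MILP, shrunk to one branch, one heat unit and three periods, written with the PuLP modeling library: the piecewise production cost uses two linear segments and the start-up cost enters through a binary flag, mirroring the piecewise/stairwise formulations mentioned above. All coefficients are invented for illustration.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Toy single-branch, 3-period DHS dispatch MILP: one heat unit with a
# start-up cost and a two-piece linear production cost, plus a heat store.
T = range(3)
demand = [30.0, 80.0, 50.0]            # MWh heat demand per period

prob = LpProblem("dhs_dispatch", LpMinimize)
q1 = {t: LpVariable(f"q1_{t}", 0, 40) for t in T}   # cheap segment, 20/MWh
q2 = {t: LpVariable(f"q2_{t}", 0, 60) for t in T}   # costly segment, 35/MWh
on = {t: LpVariable(f"on_{t}", cat=LpBinary) for t in T}
up = {t: LpVariable(f"up_{t}", cat=LpBinary) for t in T}  # start-up flag
s  = {t: LpVariable(f"s_{t}", 0, 50) for t in T}    # storage level, MWh

prob += lpSum(20 * q1[t] + 35 * q2[t] + 500 * up[t] for t in T)
for t in T:
    prob += q1[t] + q2[t] <= 100 * on[t]            # produce only when on
    prev_s = s[t - 1] if t > 0 else 0               # empty initial store
    prob += prev_s + q1[t] + q2[t] - demand[t] == s[t]
    prev_on = on[t - 1] if t > 0 else 0
    prob += up[t] >= on[t] - prev_on                # stairwise start-up cost

prob.solve()
for t in T:
    print(t, q1[t].value(), q2[t].value(), on[t].value(), s[t].value())
```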
NASA Astrophysics Data System (ADS)
Vincena, S.; Gekelman, W.; Pribyl, P.; Tang, S. W.; Papadopoulos, K.
2017-10-01
Shear Alfven waves are a fundamental mode in magnetized plasmas. Propagating near the ion cyclotron frequency, these waves are often termed electromagnetic ion cyclotron (EMIC) waves and can involve multiple ion species. Near the earth, for example, the wave may interact resonantly with oxygen ions at altitudes ranging from 1000 to 2000 km. The waves may either propagate from space towards the earth (possibly involving mode conversion), or be generated by RF transmitters on the ground. These preliminary experiments are motivated by theoretical predictions that such waves can pitch-angle scatter relativistic electrons trapped in the earth's dipole field. EMIC waves are launched in the Large Plasma Device at UCLA's Basic Plasma Science Facility in plasmas with single and multiple ion species into magnetic field gradients where ion cyclotron resonance is satisfied. We report here on the frequency and k-spectra in the critical layer and how they compare with theoretical predictions in computing an effective diffusion coefficient for high-energy electrons. Funding is provided by the NSF, DoE, and AFOSR.
Wang, Haipeng; Yang, Yushuang; Yang, Jianli; Nie, Yihang; Jia, Jing; Wang, Yudan
2015-01-01
Multiscale nondestructive characterization of coal microscopic physical structure can provide important information for coal conversion and coal-bed methane extraction. In this study, the physical structure of a coal sample was investigated by synchrotron-based multiple-energy X-ray CT at three beam energies and two different spatial resolutions. A data-constrained modeling (DCM) approach was used to quantitatively characterize the multiscale compositional distributions at the two resolutions. The volume fractions of each voxel for four different composition groups were obtained at the two resolutions. Between the two resolutions, the difference for DCM computed volume fractions of coal matrix and pores is less than 0.3%, and the difference for mineral composition groups is less than 0.17%. This demonstrates that the DCM approach can account for compositions beyond the X-ray CT imaging resolution with adequate accuracy. By using DCM, it is possible to characterize a relatively large coal sample at a relatively low spatial resolution with minimal loss of the effect due to subpixel fine length scale structures.
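The core per-voxel computation in a DCM-style analysis can be sketched as a small constrained least-squares problem: measured attenuation at each beam energy is modeled as a volume-fraction-weighted mixture of component attenuations. The attenuation coefficients below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of a data-constrained modeling (DCM) step: each voxel's measured
# linear attenuation at several beam energies is a volume-fraction-weighted
# mix of known component attenuations. The mu values are made up.
# rows: 3 beam energies; columns: matrix, pore, mineral A, mineral B
mu = np.array([[0.55, 0.0, 2.10, 1.60],
               [0.38, 0.0, 1.40, 1.05],
               [0.25, 0.0, 0.90, 0.70]])

measured = np.array([0.62, 0.43, 0.29])      # one voxel, 3 energies

# Enforce sum(fractions) == 1 as an extra, heavily weighted equation,
# and fractions >= 0 via non-negative least squares.
A = np.vstack([mu, 100.0 * np.ones(4)])
b = np.append(measured, 100.0)
fractions, _ = nnls(A, b)
print("volume fractions (matrix, pore, min A, min B):", fractions.round(3))
```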
Harnessing wake vortices for efficient collective swimming via deep reinforcement learning
NASA Astrophysics Data System (ADS)
Verma, Siddartha; Novati, Guido; Koumoutsakos, Petros
2017-11-01
Collective motion may bestow evolutionary advantages to a number of animal species. Soaring flocks of birds, teeming swarms of insects, and swirling masses of schooling fish, all to some extent enjoy anti-predator benefits, increased foraging success, and enhanced problem-solving abilities. Coordinated activity may also provide energetic benefits, as in the case of large groups of fish where swimmers exploit unsteady flow-patterns generated in the wake. Both experimental and computational investigations of such scenarios are hampered by difficulties associated with studying multiple swimmers. Consequently, the precise energy-saving mechanisms at play remain largely unknown. We combine high-fidelity numerical simulations of multiple, self-propelled swimmers with novel deep reinforcement learning algorithms to discover optimal ways for swimmers to interact with unsteady wakes, in a fully unsupervised manner. We identify optimal flow-interaction strategies devised by the resulting autonomous swimmers and use them to formulate an effective control logic. We demonstrate, via 3D simulations of controlled groups, that swimmers exploiting the learned strategy exhibit a significant reduction in energy expenditure. ERC Advanced Investigator Award 341117.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
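NRM and JPPF are Java frameworks, but the task-farm pattern they implement is language-neutral. Here is a Python multiprocessing sketch of the same idea, with local cores standing in for networked nodes and a synthetic work unit standing in for, say, one block of ray-tracing tasks.

```python
from multiprocessing import Pool
import math

# Language-neutral illustration of the task-farm pattern frameworks like
# NRM/JPPF implement: a parallelizable job is split into independent tasks
# distributed across worker processes (local cores standing in for nodes).

def trace_ray_bundle(task_id):
    # Stand-in for one independent work unit (e.g., 3D ray tracing for a
    # block of seismic events); burns CPU and returns a summary value.
    return task_id, sum(math.sin(i) for i in range(200_000))

if __name__ == "__main__":
    tasks = range(32)                      # the job, subdivided into tasks
    with Pool() as pool:                   # one worker per available core
        for task_id, result in pool.imap_unordered(trace_ray_bundle, tasks):
            print(f"task {task_id} done: {result:.3f}")
```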
Kotter, Dale K [Shelley, ID; Rohrbaugh, David T [Idaho Falls, ID
2010-09-07
A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.
Active Flash: Performance-Energy Tradeoffs for Out-of-Core Processing on Non-Volatile Memory Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
In this abstract, we study the performance and energy tradeoffs involved in migrating data analysis into the flash device, a process we refer to as Active Flash. The Active Flash paradigm is similar to 'active disks', which has received considerable attention. Active Flash allows us to move processing closer to data, thereby minimizing data movement costs and reducing power consumption. It enables true out-of-core computation. The conventional definition of out-of-core solvers refers to an approach to process data that is too large to fit in the main memory and, consequently, requires access to disk. However, in Active Flash, processing outside the host CPU literally frees the core and achieves real 'out-of-core' analysis. Moving analysis to data has long been desirable, not just at this level, but at all levels of the system hierarchy. However, this requires a detailed study on the tradeoffs involved in achieving analysis turnaround under an acceptable energy envelope. To this end, we first need to evaluate if there is enough computing power on the flash device to warrant such an exploration. Flash processors require decent computing power to run the internal logic pertaining to the Flash Translation Layer (FTL), which is responsible for operations such as address translation, garbage collection (GC) and wear-leveling. Modern SSDs are composed of multiple packages and several flash chips within a package. The packages are connected using multiple I/O channels to offer high I/O bandwidth. SSD computing power is also expected to be high enough to exploit such inherent internal parallelism within the drive to increase the bandwidth and to handle fast I/O requests. More recently, SSD devices are being equipped with powerful processing units and are even embedded with multicore CPUs (e.g. ARM Cortex-A9 embedded processor is advertised to reach 2GHz frequency and deliver 5000 DMIPS; OCZ RevoDrive X2 SSD has 4 SandForce controllers, each with 780MHz max frequency Tensilica core). Efforts that take advantage of the available computing cycles on the processors on SSDs to run auxiliary tasks other than actual I/O requests are beginning to emerge. Kim et al. investigate database scan operations in the context of processing on the SSDs, and propose dedicated hardware logic to speed up scans. Also, cluster architectures have been explored, which consist of low-power embedded CPUs coupled with small local flash to achieve fast, parallel access to data. Processor utilization on SSD is highly dependent on workloads and, therefore, they can be idle during periods with no I/O accesses. We propose to use the available processing capability on the SSD to run tasks that can be offloaded from the host. This paper makes the following contributions: (1) We have investigated Active Flash and its potential to optimize the total energy cost, including power consumption on the host and the flash device; (2) We have developed analytical models to analyze the performance-energy tradeoffs for Active Flash, by treating the SSD as a blackbox (this is particularly valuable due to the proprietary nature of the SSD internal hardware); and (3) We have enhanced a well-known SSD simulator (from MSR) to implement 'on-the-fly' data compression using Active Flash. Our results provide a window into striking a balance between energy consumption and application performance.
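A back-of-the-envelope version of the performance-energy tradeoff the authors model, treating both paths as simple power-times-time products; every throughput and wattage below is a hypothetical placeholder, not a measured value.

```python
# Back-of-the-envelope Active Flash tradeoff model: host-side analysis
# (move data, compute fast, keep the host busy) versus on-SSD analysis
# (no data movement, slower embedded cores). All numbers are hypothetical.
data_gb = 100.0

# Host path: read from SSD to host, then process on the host CPU.
t_host = data_gb / 0.5 + data_gb / 4.0     # s: 0.5 GB/s I/O, 4 GB/s compute
e_host = 120.0 * t_host                    # J: 120 W busy host + idle SSD

# Active Flash path: process in place on the SSD controller.
t_ssd = data_gb / 0.8                      # s: 0.8 GB/s on-device processing
e_ssd = (25.0 + 60.0) * t_ssd              # J: busy SSD + host idling at 60 W

for name, t, e in [("host", t_host, e_host), ("active flash", t_ssd, e_ssd)]:
    print(f"{name:>12}: {t:7.1f} s, {e/1000:6.2f} kJ")
# Whether Active Flash wins hinges on the host's idle draw and on how much
# slower the embedded cores are, which is exactly the tradeoff modeled here.
```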
Calculation of Stress Intensity Factors for Interfacial Cracks in Fiber Metal Laminates
NASA Technical Reports Server (NTRS)
Wang, John T.
2009-01-01
Stress intensity factors for interfacial cracks in Fiber Metal Laminates (FML) are computed by using the displacement ratio method recently developed by Sun and Qian (1997, Int. J. Solids. Struct. 34, 2595-2609). Various FML configurations with single and multiple delaminations subjected to different loading conditions are investigated. The displacement ratio method requires the total energy release rate, bimaterial parameters, and relative crack surface displacements as input. Details of generating the energy release rates, defining bimaterial parameters with anisotropic elasticity, and selecting proper crack surface locations for obtaining relative crack surface displacements are discussed in the paper. Even though the individual energy release rates are nonconvergent, mesh-size-independent stress intensity factors can be obtained. This study also finds that the selection of reference length can affect the magnitudes and the mode mixity angles of the stress intensity factors; thus, it is important to report the reference length used with the calculated stress intensity factors.
Study of α-Cu 0.82Al 0.18(100) using low energy ion scattering
NASA Astrophysics Data System (ADS)
Zhu, L.; Muhlen, E. Zur; O'Connor, D. J.; King, B. V.; MacDonald, R. J.
1996-07-01
The clean α-Cu 0.82Al 0.18(100) surface has been investigated using low energy ion scattering. The surface structure was found to be similar to the structure of the Cu(100) surface. By measuring the first layer concentration of Al using He + and Ne + beams and standard calibration procedure, the α-Cu 0.82Al 0.18(100) surface was found to be slightly Al-rich. Analysis of multiple scattering of ions suggests that Al atoms do not form islands. It was also found that Al atoms sit higher than the Cu atoms on the surface. By comparison with computer simulations (SABRE and FAN2D), the buckling of Al was found to be 0.16 ± 0.07 Å. No reconstructions were observed on the surface by low energy ion scattering which is in agreement with previous LEED studies.
A new and trustworthy formalism to compute entropy in quantum systems
NASA Astrophysics Data System (ADS)
Ansari, Mohammad
Entropy is nonlinear in the density matrix and as such its evaluation in open quantum systems has not been fully understood. Recently a quantum formalism was proposed by Ansari and Nazarov that evaluates entropy using parallel time evolutions of multiple worlds. We can use this formalism to evaluate entropy flow in a photovoltaic cell coupled to thermal reservoirs and cavity modes. Recently we studied the full counting statistics of energy transfers in such systems. This rigorously proves a nontrivial correspondence between energy exchanges and entropy changes in quantum systems, which only in systems without entanglement simplifies to the textbook second law of thermodynamics. We evaluate the flow of entropy using this formalism. In the presence of entanglement, however, interestingly much less information is exchanged than expected. This increases the upper limit capacity for information transfer and its conversion to energy for next generation devices in mesoscopic physics.
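For concreteness, the nonlinearity referred to above is already visible in the textbook von Neumann entropy S = -Tr(ρ ln ρ), which a short sketch can evaluate from the eigenvalues of the density matrix. This is the standard definition, not the multiple-worlds formalism itself.

```python
import numpy as np

# Von Neumann entropy S = -Tr(rho ln rho), evaluated via the eigenvalues of
# the density matrix; its nonlinearity in rho is what makes entropy hard to
# track in open quantum systems. (Standard textbook definition only.)
def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 * ln 0 -> 0
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1, 0], [0, 0]], dtype=float)   # pure state
mixed = np.eye(2) / 2                            # maximally mixed state
print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ~ 0.693
```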
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieve commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
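A minimal sketch of the mechanism the patent describes, assuming an XOR-of-digests checksum (one simple commutative choice): per-node values are recorded for a reproducible program section in two runs, and mismatches flag suspects.

```python
import hashlib

# Sketch of checksum-based fault isolation across runs: each node folds the
# messages it injects into the network into a commutative (order-independent)
# detection value; differing values for the same reproducible section across
# two runs single out the faulty node.

def commutative_checksum(messages):
    # XOR of per-message digests: insensitive to arrival order.
    acc = 0
    for m in messages:
        acc ^= int.from_bytes(hashlib.sha256(m).digest()[:8], "big")
    return acc

def record_run(traffic):
    # traffic: {node_id: [bytes, ...]} for one reproducible section
    return {node: commutative_checksum(msgs) for node, msgs in traffic.items()}

run1 = record_run({0: [b"a", b"b"], 1: [b"x", b"y"], 2: [b"p"]})
run2 = record_run({0: [b"b", b"a"],            # reordered: same checksum
                   1: [b"x", b"y"],
                   2: [b"q"]})                 # node 2 emitted bad data

suspects = [n for n in run1 if run1[n] != run2[n]]
print("suspect nodes:", suspects)              # -> [2]
```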
ABSENTEE COMPUTATIONS IN A MULTIPLE-ACCESS COMPUTER SYSTEM.
require user interaction, and the user may therefore want to run these computations 'absentee' (or, user not present). A mechanism is presented which provides for the handling of absentee computations in a multiple-access computer system. The design is intended to be implementation-independent. Some novel features of the system's design are: a user can switch computations from interactive to absentee (and vice versa), the system can
A near-wall turbulence model and its application to fully developed turbulent channel and pipe flows
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1988-01-01
A near-wall turbulence model and its incorporation into a multiple-time-scale turbulence model are presented. In the method, the conservation of mass, momentum, and the turbulent kinetic energy equations are integrated up to the wall; and the energy transfer rate and the dissipation rate inside the near-wall layer are obtained from algebraic equations. The algebraic equations for the energy transfer rate and the dissipation rate inside the near-wall layer were obtained from a k-equation turbulence model and the near-wall analysis. A fully developed turbulent channel flow and fully developed turbulent pipe flows were solved using a finite element method to test the predictive capability of the turbulence model. The computational results compared favorably with experimental data. It is also shown that the present turbulence model could resolve the overshoot phenomena of the turbulent kinetic energy and the dissipation rate in the region very close to the wall.
Bioactive focus in conformational ensembles: a pluralistic approach
NASA Astrophysics Data System (ADS)
Habgood, Matthew
2017-12-01
Computational generation of conformational ensembles is key to contemporary drug design. Selecting the members of the ensemble that will approximate the conformation most likely to bind to a desired target (the bioactive conformation) is difficult, given that the potential energy usually used to generate and rank the ensemble is a notoriously poor discriminator between bioactive and non-bioactive conformations. In this study an approach to generating a focused ensemble is proposed in which each conformation is assigned multiple rankings based not just on potential energy but also on solvation energy, hydrophobic or hydrophilic interaction energy, radius of gyration, and on a statistical potential derived from Cambridge Structural Database data. The best ranked structures derived from each system are then assembled into a new ensemble that is shown to be better focused on bioactive conformations. This pluralistic approach is tested on ensembles generated by the Molecular Operating Environment's Low Mode Molecular Dynamics module, and by the Cambridge Crystallographic Data Centre's conformation generator software.
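A compact sketch of the pluralistic re-ranking itself: score every conformer under several criteria, rank within each, and pool each criterion's leaders into the focused ensemble. The scores are random placeholders for the potential, solvation, interaction, gyration and statistical-potential terms named above.

```python
import numpy as np

# Sketch of the pluralistic re-ranking idea: rank conformers independently
# under several criteria and assemble the focused ensemble from the
# top-ranked members of each list. Scores are random placeholders.
rng = np.random.default_rng(3)
n_conf = 200
criteria = ["potential", "solvation", "interaction", "gyration", "csd_stat"]
scores = {c: rng.standard_normal(n_conf) for c in criteria}  # lower = better

top_k, focused = 10, set()
for c in criteria:
    ranked = np.argsort(scores[c])          # best-first indices
    focused.update(ranked[:top_k].tolist()) # keep each criterion's leaders

print(f"focused ensemble: {len(focused)} of {n_conf} conformers")
```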
Mechanical design of translocating motor proteins.
Hwang, Wonmuk; Lang, Matthew J
2009-01-01
Translocating motors generate force and move along a biofilament track to achieve diverse functions including gene transcription, translation, intracellular cargo transport, protein degradation, and muscle contraction. Advances in single molecule manipulation experiments, structural biology, and computational analysis are making it possible to consider common mechanical design principles of these diverse families of motors. Here, we propose a mechanical parts list that includes track, energy conversion machinery, and moving parts. Energy is supplied not just by burning of a fuel molecule, but there are other sources or sinks of free energy, by binding and release of a fuel or products, or similarly between the motor and the track. Dynamic conformational changes of the motor domain can be regarded as controlling the flow of free energy to and from the surrounding heat reservoir. Multiple motor domains are organized in distinct ways to achieve motility under imposed physical constraints. Transcending amino acid sequence and structure, physically and functionally similar mechanical parts may have evolved as nature's design strategy for these molecular engines.
Mechanical Design of Translocating Motor Proteins
Lang, Matthew J.
2013-01-01
Translocating motors generate force and move along a biofilament track to achieve diverse functions including gene transcription, translation, intracellular cargo transport, protein degradation, and muscle contraction. Advances in single molecule manipulation experiments, structural biology, and computational analysis are making it possible to consider common mechanical design principles of these diverse families of motors. Here, we propose a mechanical parts list that includes track, energy conversion machinery, and moving parts. Energy is supplied not just by burning of a fuel molecule, but there are other sources or sinks of free energy, by binding and release of a fuel or products, or similarly between the motor and the track. Dynamic conformational changes of the motor domain can be regarded as controlling the flow of free energy to and from the surrounding heat reservoir. Multiple motor domains are organized in distinct ways to achieve motility under imposed physical constraints. Transcending amino acid sequence and structure, physically and functionally similar mechanical parts may have evolved as nature's design strategy for these molecular engines. PMID:19452133
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone -selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single-exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
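The material-selective image formation at the heart of dual-energy radiography can be illustrated with a simplified weighted log subtraction (no scatter or beam-hardening correction, invented attenuation coefficients): choosing the weight to cancel soft tissue leaves a calcium/bone image.

```python
import numpy as np

# Simplified dual-energy log subtraction: with low/high-kVp intensity
# images, a weighted difference of log images cancels one material.
# mu values and the tiny test "phantom" are illustrative only; real
# systems apply scatter and beam-hardening corrections first.
mu_tissue = {"low": 0.25, "high": 0.18}   # 1/cm, soft tissue
mu_bone   = {"low": 0.60, "high": 0.30}   # 1/cm, bone/calcium

t_tissue = np.array([[10.0, 10.0], [10.0, 10.0]])   # cm traversed
t_bone   = np.array([[0.0,  0.5], [0.0,  0.0]])     # calcified nodule pixel

logI = {e: -(mu_tissue[e] * t_tissue + mu_bone[e] * t_bone)
        for e in ("low", "high")}

w = mu_tissue["high"] / mu_tissue["low"]   # cancels the tissue term
bone_image = logI["high"] - w * logI["low"]
print(np.round(bone_image, 3))   # nonzero only where calcium is present
```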
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de
2014-06-14
Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations which allow to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.
Neutron-fragment and Neutron-neutron Correlations in Low-energy Fission
NASA Astrophysics Data System (ADS)
Lestone, J. P.
2016-01-01
A computational method has been developed to simulate neutron emission from thermal-neutron induced fission of 235U and from spontaneous fission of 252Cf. Measured pre-emission mass-yield curves, average total kinetic energies and their variances, both as functions of mass split, are used to obtain a representation of the distribution of fragment velocities. Measured average neutron multiplicities as a function of mass split and their dependence on total kinetic energy are used. Simulations can be made to reproduce measured factorial moments of neutron-multiplicity distributions with only minor empirical adjustments to some experimental inputs. The neutron-emission spectra in the rest-frame of the fragments are highly constrained by ENDF/B-VII.1 prompt-fission neutron-spectra evaluations. The n-f correlation measurements of Vorobyev et al. (2010) are consistent with predictions where all neutrons are assumed to be evaporated isotropically from the rest frame of fully accelerated fragments. Measured n-f and n-n correlations of others are a little weaker than the predictions presented here. These weaker correlations could be used to infer a weak scission-neutron source. However, the effect of neutron scattering on the experimental results must be studied in detail before moving away from a null hypothesis that all neutrons are evaporated from the fragments.
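The factorial moments mentioned above are straightforward to compute from a multiplicity distribution; the sketch below uses a rough illustrative P(ν) resembling 252Cf(sf), not an evaluated distribution.

```python
import numpy as np

# Factorial moments of a prompt-neutron multiplicity distribution, the
# quantities such simulations are tuned to reproduce. P below is a rough
# illustrative distribution for 252Cf(sf), not an evaluated one.
P = np.array([0.002, 0.026, 0.127, 0.273, 0.305, 0.185, 0.066, 0.014, 0.002])
P = P / P.sum()
nu = np.arange(len(P))

def factorial_moment(P, nu, k):
    # E[nu * (nu-1) * ... * (nu-k+1)]
    falling = np.ones_like(nu, dtype=float)
    for j in range(k):
        falling *= np.clip(nu - j, 0, None)
    return float(np.sum(P * falling))

nubar = factorial_moment(P, nu, 1)
nu2   = factorial_moment(P, nu, 2)
print(f"nu-bar = {nubar:.3f}")            # ~3.75, close to 252Cf(sf)
print(f"2nd factorial moment = {nu2:.3f}")
```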
Designing overall stoichiometric conversions and intervening metabolic reactions
Chowdhury, Anupam; Maranas, Costas D.
2015-11-04
Existing computational tools for de novo metabolic pathway assembly, either based on mixed integer linear programming techniques or graph-search applications, generally only find linear pathways connecting the source to the target metabolite. The overall stoichiometry of conversion along with alternate co-reactant (or co-product) combinations is not part of the pathway design. Therefore, global carbon and energy efficiency is in essence fixed with no opportunities to identify more efficient routes for recycling carbon flux closer to the thermodynamic limit. Here, we introduce a two-stage computational procedure that both identifies the optimum overall stoichiometry (i.e., optStoic) and selects for (non-)native reactions (i.e., minRxn/minFlux) that maximize carbon, energy or price efficiency while satisfying thermodynamic feasibility requirements. Implementation for recent pathway design studies identified non-intuitive designs with improved efficiencies. Specifically, multiple alternatives for non-oxidative glycolysis are generated and non-intuitive ways of co-utilizing carbon dioxide with methanol are revealed for the production of C2+ metabolites with higher carbon efficiency.
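A toy optStoic-style first stage, posed as a plain linear program: pick net stoichiometric coefficients that maximize product yield from one glucose subject to C/H/O balance. It recovers the classical fermentation stoichiometry; the real tool adds thermodynamic feasibility constraints and many more candidate metabolites.

```python
import numpy as np
from scipy.optimize import linprog

# optStoic-flavored toy: choose the overall stoichiometry that maximizes
# product yield from glucose subject only to elemental (C, H, O) balance.
# Variables are net coefficients of [ethanol, CO2, H2O, O2]; glucose is
# fixed at -1. Sign bounds: ethanol/CO2 produced, O2 only consumable.
# Element content:     EtOH  CO2  H2O  O2        (glucose is C6H12O6)
A_eq = np.array([[2,    1,   0,   0],            # carbon
                 [6,    0,   2,   0],            # hydrogen
                 [1,    2,   1,   2]])           # oxygen
b_eq = np.array([6.0, 12.0, 6.0])                # supplied by 1 glucose

c = [-1.0, 0.0, 0.0, 0.0]                        # maximize ethanol
bounds = [(0, None), (0, None), (None, None), (None, 0)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
e, co2, h2o, o2 = res.x
print(f"glucose -> {e:.2f} ethanol + {co2:.2f} CO2")  # 2 and 2: fermentation
```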
Vectorlike particles, Z′ and Yukawa unification in F-theory inspired E6
NASA Astrophysics Data System (ADS)
Karozas, Athanasios; Leontaris, George K.; Shafi, Qaisar
2018-03-01
We explore the low energy implications of an F-theory inspired E6 model whose breaking yields, in addition to the MSSM gauge symmetry, a Z′ gauge boson associated with a U(1) symmetry broken at the TeV scale. The zero mode spectrum of the effective low energy theory is derived from the decomposition of the 27 and 27-bar representations of E6 and we parametrise their multiplicities in terms of a minimum number of flux parameters. We perform a two-loop renormalisation group analysis of the gauge and Yukawa couplings of the effective theory model and estimate lower bounds on the new vectorlike particles predicted in the model. We compute the third generation Yukawa couplings in an F-theory context assuming an E8 point of enhancement and express our results in terms of the local flux densities associated with the gauge symmetry breaking. We find that their values are compatible with the ones computed by the renormalisation group equations, and we identify points in the parameter space of the flux densities where the t-b-τ Yukawa couplings unify.
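As a minimal illustration of the RG machinery involved, here is one-loop running of the three MSSM gauge couplings with the standard b coefficients; the paper's analysis is two-loop and includes the extra vectorlike matter, which at one loop would shift each b_i upward by a common amount.

```python
import numpy as np

# One-loop gauge-coupling running, the simplest version of the RG analysis
# described above. b = (33/5, 1, -3) are the standard MSSM coefficients;
# complete vectorlike multiplets shift each b_i by a common positive amount,
# which is how extra 27-plet matter enters at one loop.
b = np.array([33 / 5, 1.0, -3.0])
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])   # approx 1/alpha_i at M_Z
MZ, steps = 91.19, 2000
t = np.linspace(np.log(MZ), np.log(2.0e16), steps)

alpha_inv = np.zeros((steps, 3))
alpha_inv[0] = alpha_inv_MZ
for i in range(1, steps):
    # d(1/alpha_i)/dt = -b_i / (2*pi), with t = ln(mu)
    alpha_inv[i] = alpha_inv[i - 1] - b / (2 * np.pi) * (t[i] - t[i - 1])

print("1/alpha_i at 2e16 GeV:", alpha_inv[-1].round(1))  # near-common value
```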
Initial postbuckling analysis of elastoplastic thin-shell structures
NASA Technical Reports Server (NTRS)
Carnoy, E. G.; Panosyan, G.
1984-01-01
The design of thin shell structures with respect to elastoplastic buckling requires an extended analysis of the influence of initial imperfections. For conservative design, the most critical defect should be assumed with the maximum allowable magnitude. This defect is closely related to the initial postbuckling behavior. An algorithm is given for the quasi-static analysis of the postbuckling behavior of structures that exhibit multiple buckling points. The algorithm, based upon an energy criterion, allows the computation of the critical perturbation which will be employed for the definition of the critical defect. For computational efficiency, the algorithm uses the reduced basis technique with automatic update of the modal basis. The method is applied to the axisymmetric buckling of cylindrical shells under axial compression, and conclusions are given for future research.
Exploiting MIC architectures for the simulation of channeling of charged particles in crystals
NASA Astrophysics Data System (ADS)
Bagli, Enrico; Karpusenko, Vadim
2016-08-01
Coherent effects of ultra-relativistic particles in crystals are an area of science under active development. DYNECHARM++ is a toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures. The particle trajectory in a crystal is computed through numerical integration of the equation of motion. The code was revised and improved in order to exploit parallelization across multiple cores and vectorization of single instructions on multiple data. An Intel Xeon Phi card was adopted for the performance measurements. The computation time was shown to scale linearly as a function of the number of physical and virtual cores. Enabling the compiler's auto-vectorization flag yielded a threefold speedup. The performance of the card was compared to that of a dual Xeon system.
Opportunities and choice in a new vector era
NASA Astrophysics Data System (ADS)
Nowak, A.
2014-06-01
This work discusses the significant changes in the computing landscape related to the progression of Moore's Law, and their implications for scientific computing. Particular attention is devoted to the High Energy Physics (HEP) domain, which has always made good use of threading, but levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data-oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
Marsili, Simone; Signorini, Giorgio Federico; Chelli, Riccardo; Marchi, Massimo; Procacci, Piero
2010-04-15
We present the new release of the ORAC engine (Procacci et al., J. Comput. Chem. 1997, 18, 1834), a FORTRAN suite to simulate complex biosystems at the atomistic level. The previous release of the ORAC code included multiple time step integration, the smooth particle mesh Ewald method, and constant pressure and constant temperature simulations. The present release has been supplemented with the most advanced techniques for enhanced sampling in atomistic systems, including replica exchange with solute tempering, metadynamics and steered molecular dynamics. All these computational technologies have been implemented for parallel architectures using the standard MPI communication protocol. ORAC is an open-source program distributed free of charge under the GNU General Public License (GPL) at http://www.chim.unifi.it/orac.
Computational Study of Droplet Trains Impacting a Smooth Solid Surface
NASA Astrophysics Data System (ADS)
Markt, David, Jr.; Pathak, Ashish; Raessi, Mehdi; Lee, Seong-Young; Zhao, Emma
2017-11-01
The study of droplet impingement is vital to understanding the fluid dynamics of fuel injection in modern internal combustion engines. One widely accepted model was proposed by Yarin and Weiss (JFM, 1995), developed from experiments of single trains of ethanol droplets impacting a substrate. The model predicts the onset of splashing and the mass ejected upon splashing. In this study, using an in-house 3D multiphase flow solver, the experiments of Yarin and Weiss were computationally simulated. The experimentally observed splashing threshold was captured by the simulations, thus validating the solver's ability to accurately simulate the splashing dynamics. Then, we performed simulations of cases with multiple droplet trains, which have high relevance to dense fuel sprays, where droplets impact within the spreading diameters of their neighboring droplets, leading to changes in splashing dynamics due to interactions of spreading films. For both single and multi-train simulations the amount of splashed mass was calculated as a function of time, allowing a quantitative comparison between the two cases. Furthermore, using a passive scalar the amount of splashed mass per impinging droplet was also calculated. This work is supported by the Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE) and the Department of Defense, Tank and Automotive Research, Development, and Engineering Center (TARDEC), under Award Number DE-EE0007292.
Experiments and simulations of flux rope dynamics in a plasma
NASA Astrophysics Data System (ADS)
Intrator, Thomas; Abbate, Sara; Ryutov, Dmitri
2005-10-01
The behavior of flux ropes is a key issue in solar, space and astrophysics. For instance, magnetic fields and currents on the Sun are sheared and twisted as they store energy, experience an as yet unidentified instability, open into interplanetary space, eject the plasma trapped in them, and cause a flare. The Reconnection Scaling Experiment (RSX) provides a simple means to systematically characterize the linear and non-linear evolution of driven, dissipative, unstable plasma-current filaments. Topology evolves in three dimensions, supports multiple modes, and can bifurcate to quasi-helical equilibria. The ultimate saturation to a nonlinear force and energy balance is the link to a spectrum of relaxation processes. RSX has adjustable energy density (β ≪ 1 to β ≈ 1), non-negligible equilibrium plasma flows, driven steady-state scenarios, and adjustable line tying at boundaries. We will show the magnetic structure of a kinking, rotating, single line-tied column, magnetic reconnection between two flux ropes, and pictures of three braided flux ropes. We use simulation movies to bridge the gap between solar physics scales and the experimental data. In collaboration with Ivo Furno, Tsitsi Madziwa-Nussinov, Giovanni Lapenta, Adam Light, Los Alamos National Laboratory; Sara Abbate, Politecnico di Torino; and Dmitri Ryutov, Lawrence Livermore National Laboratory.
Double Super-Exchange in Silicon Quantum Dots Connected by Short-Bridged Networks
NASA Astrophysics Data System (ADS)
Li, Huashan; Wu, Zhigang; Lusk, Mark
2013-03-01
Silicon quantum dots (QDs) with diameters in the range of 1-2 nm are attractive for photovoltaic applications. They absorb photons more readily, transport excitons with greater efficiency, and show greater promise in multiple-exciton generation and hot carrier collection paradigms. However, their high excitonic binding energy makes it difficult to dissociate excitons into separate charge carriers. One possible remedy is to create dot assemblies in which a second material creates a Type-II heterojunction with the dot so that exciton dissociation occurs locally. This talk will focus on such a Type-II heterojunction paradigm in which QDs are connected via covalently bonded, short-bridge molecules. For such interpenetrating networks of dots and molecules, our first principles computational investigation shows that it is possible to rapidly and efficiently separate electrons to QDs and holes to bridge units. The bridge network serves as an efficient mediator of electron superexchange between QDs while the dots themselves play the complementary role of efficient hole superexchange mediators. Dissociation, photoluminescence and carrier transport rates will be presented for bridge networks of silicon QDs that exhibit such double superexchange. This material is based upon work supported by the Renewable Energy Materials Research Science and Engineering Center (REMRSEC) under Grant No. DMR-0820518 and the Golden Energy Computing Organization (GECO).
NASA Astrophysics Data System (ADS)
Jeyavijayan, S.
2015-04-01
This study is a comparative analysis of FTIR and FT-Raman spectra of 2-amino-4-hydroxypyrimidine. The total energies of different conformations were obtained from the DFT (B3LYP) method with 6-31+G(d,p) and 6-311++G(d,p) basis sets. The barrier of planarity between the most stable and planar forms is also predicted. The molecular structure, vibrational wavenumbers, infrared intensities and Raman scattering activities were calculated for the molecule using the B3LYP density functional theory (DFT) method. The computed frequencies are scaled using multiple scaling factors to yield good agreement with the observed values. Reliable vibrational assignments were made on the basis of the total energy distribution (TED) along with the scaled quantum mechanical (SQM) method. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Non-linear properties such as the electric dipole moment (μ), polarizability (α) and hyperpolarizability (β) of the investigated molecule have been computed using B3LYP quantum chemical calculations. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. In addition, molecular electrostatic potential (MEP) and Mulliken charge analyses were performed, and several thermodynamic properties were evaluated by the DFT method.
A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Miller, M. Coleman
The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
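The basic quantity this pair-based search is built on is easy to state in code. The sketch below draws hypothetical isotropic event directions and computes every pairwise angular separation with NumPy; the likelihood weighting by the point-spread function described in the abstract is not reproduced.

```python
# Sketch: all pairwise angular separations between event directions,
# the raw statistic behind pair-based point-source searches.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical isotropic events: RA uniform, sin(dec) uniform
ra = rng.uniform(0.0, 2.0 * np.pi, n)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))

# Unit vectors on the sphere
v = np.stack([np.cos(dec) * np.cos(ra),
              np.cos(dec) * np.sin(ra),
              np.sin(dec)], axis=1)

# Angular separation of every pair via dot products
cosang = np.clip(v @ v.T, -1.0, 1.0)
iu = np.triu_indices(n, k=1)            # upper triangle: each pair once
sep = np.degrees(np.arccos(cosang[iu]))
print(f"{sep.size} pairs; closest pair: {sep.min():.2f} deg")
```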
NASA Astrophysics Data System (ADS)
Preuss, E.
1981-10-01
A formula for the He+ ion survival probability against neutralization is presented, derived from fits of the azimuthal angular dependence of the Ni peak heights on clean and O-covered Ni(001) surfaces observed in LEISS experiments and computer simulations. The formula contains one collision-type and two Auger-type neutralization terms for ion trajectories prolonged by multiple collisions above the "neutralization surface plane", which was assumed to be corrugated and shaped like muffin-tins.
DetOx: a program for determining anomalous scattering factors of mixed-oxidation-state species.
Sutton, Karim J; Barnett, Sarah A; Christensen, Kirsten E; Nowell, Harriott; Thompson, Amber L; Allan, David R; Cooper, Richard I
2013-01-01
Overlapping absorption edges will occur when an element is present in multiple oxidation states within a material. DetOx is a program for partitioning overlapping X-ray absorption spectra into contributions from individual atomic species and computing the dependence of the anomalous scattering factors on X-ray energy. It is demonstrated how these results can be used in combination with X-ray diffraction data to determine the oxidation state of ions at specific sites in a mixed-valence material, GaCl2.
Xu, Lina; O'Hare, Gregory M P; Collier, Rem
2017-07-05
Wireless Sensor Networks (WSNs) are typically composed of thousands of sensors powered by limited energy resources. Clustering techniques were introduced to prolong network longevity, offering the promise of green computing. However, most existing work fails to consider the network coverage when evaluating the lifetime of a network. We believe that balancing the energy consumption per unit area, rather than on each single sensor, can provide better-balanced power usage throughout the network. Our former work, Balanced Energy-Efficiency (BEE) and its multihop version BEEM, can not only extend the network longevity but also maintain the network coverage. Following WSNs, Internet of Things (IoT) technology has been proposed with a higher degree of diversity in terms of communication abilities and user scenarios, supporting a large range of real-world applications. IoT devices are embedded with multiple communication interfaces, normally referred to as Multiple-Input Multiple-Output (MIMO) in 5G networks. The applications running on those devices can generate various types of data. Every interface has its own characteristics, which may be preferred and beneficial in some specific user scenarios. With MIMO becoming more available on IoT devices, an advanced clustering solution for highly dynamic IoT systems is missing and pressingly demanded in order to cater for differing user applications. In this paper, we present a smart clustering algorithm (Smart-BEEM), based on our former work BEE(M), to accomplish energy-efficient and Quality of user Experience (QoE) supported communication in cluster-based IoT networks. It is a user behaviour and context aware approach, aiming to facilitate IoT devices to choose beneficial communication interfaces and cluster heads for data transmission. Experimental results have proved that Smart-BEEM can further improve the performance of BEE and BEEM for coverage-sensitive longevity.
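A minimal sketch of the general idea behind energy-aware clustering (a LEACH-style election, standing in for, not reproducing, the authors' Smart-BEEM algorithm) follows; the node count, energies and election probability are illustrative assumptions.

```python
# Illustrative energy-aware cluster-head election: nodes with more
# residual energy are proportionally more likely to become heads.
import numpy as np

rng = np.random.default_rng(6)
n = 200
energy = rng.uniform(0.2, 1.0, n)        # residual energy per node (J)
p_head = 0.05 * energy / energy.mean()   # bias election toward high energy
heads = np.flatnonzero(rng.random(n) < p_head)
if heads.size == 0:                      # fallback: richest node
    heads = np.array([int(np.argmax(energy))])

pos = rng.random((n, 2))                 # node coordinates
# each node joins its nearest cluster head
d = np.linalg.norm(pos[:, None, :] - pos[heads][None, :, :], axis=2)
assignment = heads[np.argmin(d, axis=1)]
print(len(heads), "cluster heads;",
      np.bincount(assignment).max(), "nodes in the largest cluster")
```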
Opportunities for Computational Discovery in Basic Energy Sciences
NASA Astrophysics Data System (ADS)
Pederson, Mark
2011-03-01
An overview of the broad-ranging support of computational physics and computational science within the Department of Energy Office of Science will be provided. Computation as the third branch of physics is supported by all six offices (Advanced Scientific Computing, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High-Energy Physics, and Nuclear Physics). Support focuses on hardware, software and applications. Most opportunities within the fields of condensed-matter physics, chemical physics and materials sciences are supported by the Office of Basic Energy Sciences (BES) or through partnerships between BES and the Office of Advanced Scientific Computing. Activities include radiation sciences, catalysis, combustion, materials in extreme environments, energy-storage materials, light harvesting and photovoltaics, solid-state lighting, and superconductivity. A summary of two recent reports by the computational materials and chemistry communities on the role of computation during the next decade will be provided. In addition to materials and chemistry challenges specific to energy sciences, the issues identified include a focus on the role of the domain scientist in integrating, expanding and sustaining applications-oriented capabilities on evolving high-performance computing platforms, and on the role of computation in accelerating the development of innovative technologies.
Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
Cost efficient CFD simulations: Proper selection of domain partitioning strategies
NASA Astrophysics Data System (ADS)
Haddadi, Bahram; Jordan, Christian; Harasek, Michael
2017-10-01
Computational Fluid Dynamics (CFD) is one of the most powerful simulation methods, used for temporally and spatially resolved solutions of fluid flow, heat transfer, mass transfer, etc. One of the challenges of CFD is its extreme hardware demand. Nowadays, supercomputers (e.g. High Performance Computing, HPC, clusters) featuring multiple CPU cores are applied for solving: the simulation domain is split into partitions, one for each core. Some of the different methods for partitioning are investigated in this paper. As a practical example, a new open-source based solver was utilized for simulating packed bed adsorption, a common separation method within the field of thermal process engineering. Adsorption can, for example, be applied for removal of trace gases from a gas stream or for production of pure gases such as hydrogen. For comparing the performance of the partitioning methods, a 60 million cell mesh for a packed bed of spherical adsorbents was created, and one second of the adsorption process was simulated. Different partitioning methods available in OpenFOAM® (Scotch, Simple, and Hierarchical) were used with different numbers of sub-domains. The effect of the different methods and number of processor cores on simulation speedup and energy consumption was investigated for two different hardware infrastructures (Vienna Scientific Clusters VSC 2 and VSC 3). As a general recommendation, an optimum number of cells per processor core was calculated. Optimized simulation speed, lower energy consumption and, consequently, the cost effects are reported here.
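As a rough illustration of geometric domain partitioning (a generic recursive coordinate bisection, standing in for OpenFOAM's Scotch/Simple/Hierarchical decomposers, which it does not reproduce), the following sketch splits hypothetical cell centres into balanced partitions:

```python
# Illustrative recursive coordinate bisection (RCB): repeatedly split
# the largest partition along its widest axis until the requested
# number of partitions is reached.
import numpy as np

def rcb(points, n_parts):
    parts = [np.arange(len(points))]
    while len(parts) < n_parts:
        parts.sort(key=len, reverse=True)
        idx = parts.pop(0)                       # largest partition
        ext = points[idx].max(axis=0) - points[idx].min(axis=0)
        axis = int(np.argmax(ext))               # widest axis
        order = idx[np.argsort(points[idx, axis])]
        half = len(order) // 2
        parts += [order[:half], order[half:]]
    return parts

rng = np.random.default_rng(1)
cells = rng.random((60_000, 3))                  # hypothetical cell centres
for i, p in enumerate(rcb(cells, 8)):
    print(f"partition {i}: {len(p)} cells")
```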
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koniges, A.E.; Craddock, G.G.; Schnack, D.D.
The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that disperses uniformly the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy databases.
The computational complexity of elliptic curve integer sub-decomposition (ISD) method
NASA Astrophysics Data System (ADS)
Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza
2014-07-01
The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of values k1 and k2 that are not bounded by ±C√n gives new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves the computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, which is determined by computing the cost of operations. These operations include elliptic curve operations and finite field operations.
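The mechanics of evaluating a decomposed scalar multiplication can be sketched compactly. The toy below uses a hypothetical curve over F_97 and a simple base-m split k = k1 + m·k2 in place of the endomorphism-based ISD/GLV decomposition (the lattice step that produces the bounded k11, k12, k21, k22 is omitted); it only demonstrates that kP can be reassembled from the sub-scalars:

```python
# Toy decomposed scalar multiplication on y^2 = x^3 + 2x + 3 over F_97.
# Not a secure curve; the split is a plain base-m digit split.
P_MOD, A = 97, 2

def ec_add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                               # point at infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, p):                                 # double-and-add
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, p)
        p, k = ec_add(p, p), k >> 1
    return acc

P = (3, 6)                        # 6^2 = 3^3 + 2*3 + 3 = 36 (mod 97)
k, m = 413, 16
k2, k1 = divmod(k, m)             # k = k1 + m*k2
Q = ec_mul(m, P)                  # precomputed once, reused for any k
assert ec_mul(k, P) == ec_add(ec_mul(k1, P), ec_mul(k2, Q))
print(k, "* P =", ec_mul(k, P))
```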
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1991-01-01
Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
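A present-day analogue of this multitasking strategy is easy to sketch with Python's multiprocessing module: one task, many data partitions, one process per partition. This is a generic stand-in under stated assumptions, not a port of the paper's C-Fortran-Unix approach, and halo exchange between neighbouring blocks is omitted for brevity.

```python
# MIMD-style multitasking sketch: the same routine runs on different
# data partitions in separate processes.
import multiprocessing as mp
import numpy as np

def relax_block(block):
    # one Jacobi-style smoothing pass over a 1D slab
    out = block.copy()
    out[1:-1] = 0.5 * (block[:-2] + block[2:])
    return out

if __name__ == "__main__":
    field = np.linspace(0.0, 1.0, 1_000_000)
    blocks = np.array_split(field, mp.cpu_count())   # one block per core
    with mp.Pool() as pool:
        parts = pool.map(relax_block, blocks)
    print("processed", sum(len(p) for p in parts), "points")
```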
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco; ...
2018-03-15
Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Acoustic radiosity for computation of sound fields in diffuse environments
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2002-05-01
The use of image and ray tracing methods (and variations thereof) for the computation of sound fields in rooms is relatively well developed. In their regime of validity, both methods work well for prediction in rooms with small amounts of diffraction and mostly specular reflection at the walls. While extensions to the methods to include diffuse reflections and diffraction have been made, they are limited at best. In the fields of illumination and computer graphics, the ray tracing and image methods are joined by another method called luminous radiative transfer or radiosity. In radiosity, an energy balance between surfaces is computed assuming diffuse reflection at the reflective surfaces. Because the interaction between surfaces is constant, much of the computation required for sound field prediction with multiple or moving source and receiver positions can be reduced. In acoustics the radiosity method has had little attention because of the problems of diffraction and specular reflection. The utility of radiosity in acoustics and an approach to a useful development of the method for acoustics will be presented. The method looks especially useful for sound level prediction in industrial and office environments. [Work supported by NSF.]
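The energy-balance core of a radiosity method reduces to one linear solve, as the sketch below shows for three hypothetical patches; the form factors and reflectances are made-up numbers, not acoustic data.

```python
# Core of a radiosity method: solve the energy balance B = E + R F B,
# where F holds surface-to-surface form factors and R the diffuse
# reflectances. The 3-patch numbers below are illustrative only.
import numpy as np

F = np.array([[0.0, 0.6, 0.4],     # form factors (each row sums to <= 1)
              [0.3, 0.0, 0.7],
              [0.2, 0.8, 0.0]])
R = np.diag([0.3, 0.5, 0.2])       # reflectance of each patch
E = np.array([1.0, 0.0, 0.0])      # only patch 0 emits

# (I - R F) B = E
B = np.linalg.solve(np.eye(3) - R @ F, E)
print("radiosity per patch:", B)
```

Because the matrix (I - RF) depends only on geometry and materials, moving the source changes only the right-hand side E, which is exactly why radiosity amortizes well over multiple or moving source and receiver positions.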
Hopfield, J J
2008-05-01
The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
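A minimal classical Hopfield memory illustrates the energy-function view (this is the textbook symmetric model, not the excitatory-inhibitory circuit of the paper): patterns are stored with a Hebbian rule and recalled from a partial clue by asynchronous updates that never increase the energy E(s) = -0.5 sᵀWs.

```python
# Minimal classical Hopfield associative memory; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))     # three stored memories
W = (patterns.T @ patterns) / 64.0               # Hebbian weights
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

# Clue: the first 16 units of memory 0 are correct, the rest random
s = rng.choice([-1, 1], size=64)
s[:16] = patterns[0, :16]

for _ in range(10):                              # asynchronous updates
    for i in rng.permutation(64):
        s[i] = 1 if W[i] @ s >= 0 else -1

print("final energy:", energy(s),
      "overlap with memory 0:", (s @ patterns[0]) / 64.0)
```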
Near real-time traffic routing
NASA Technical Reports Server (NTRS)
Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)
2012-01-01
A near real-time physical transportation network routing system comprising a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
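A routing sketch over predicted, time-dependent travel times follows (plain Dijkstra with a lookup into a hypothetical prediction table; the patent's grid architecture and zone structure are not modeled, and the edge names and numbers are invented):

```python
# Dijkstra over predicted travel times: each edge cost is read from a
# prediction table indexed by the arrival-time bucket.
import heapq

predictions = {                     # minutes per 10-minute bucket (made up)
    ("A", "B"): [5, 9],  ("B", "C"): [4, 4],
    ("A", "C"): [12, 8], ("C", "D"): [6, 7],
}
graph = {}
for (u, v) in predictions:
    graph.setdefault(u, []).append(v)

def route(src, dst, t0=0.0, bucket_minutes=10):
    best = {src: t0}
    heap = [(t0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t - t0
        if t > best.get(u, float("inf")):
            continue                              # stale entry
        for v in graph.get(u, []):
            b = min(int(t // bucket_minutes), 1)  # clamp to table length
            t2 = t + predictions[(u, v)][b]
            if t2 < best.get(v, float("inf")):
                best[v] = t2
                heapq.heappush(heap, (t2, v))
    return None

print("A -> D predicted travel time:", route("A", "D"), "minutes")
```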
Multiplicities of charged hadrons in 280 GeV/c muon-proton scattering
NASA Astrophysics Data System (ADS)
Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badelek, B.; Beaufays, J.; Becks, K. H.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; De Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Callebaut, D.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Giubellino, P.; Grafström, P.; Grard, F.; Hass, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Hoppe, C.; Jaffré, M.; Jacholkowska, A.; Janata, F.; Jancso, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Kesteman, J.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Manz, A.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Pessard, H.; Pettingale, J.; Pietrzyk, B.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schröder, T.; Schouten, M.; Schultze, K.; Sholz, M.; Sloan, T.; Stier, H. E.; Stockhausen, W.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; De La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wahlen, H.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.; European Muon Collaboration
Properties of the hadron multiplicity distributions in 280 GeV/c μ+p interactions have been investigated. The c.m. energy dependence in the range from 4 to 20 GeV of the total charged multiplicities is presented. No variation faster than logarithmic is seen in the energy range of this experiment. Comparison with νp and ν̄p data at lower energy has been made and shows good agreement between μ+p and ν̄p total charged multiplicities. It has been found that the average forward multiplicity (charged hadrons with xF > 0) exceeds the average backward multiplicity (charged hadrons with xF < 0) over the whole energy range and presents a different energy variation. The average forward multiplicity has been compared to e+e- data and shows a similar dependence on energy. Little correlation was observed between the forward and backward multiplicities, indicating that the current and target regions fragment almost independently.
Multiple Hydrogen Bond Tethers for Grazing Formic Acid in Its Complexes with Phenylacetylene.
Karir, Ginny; Kumar, Gaurav; Kar, Bishnu Prasad; Viswanathan, K S
2018-03-01
Complexes of phenylacetylene (PhAc) and formic acid (FA) present an interesting picture, where the two submolecules are tethered, sometimes multiply, by hydrogen bonds. The multiple tentacles adopted by PhAc-FA complexes stem from the fact that both submolecules can, in the same complex, serve as proton acceptors and/or proton donors. The acetylenic and phenyl π systems of PhAc can serve as proton acceptors, while the ≡C-H or -C-H of the phenyl ring can act as a proton donor. Likewise, FA also is amphiprotic. Hence, more than 10 hydrogen-bonded structures, involving O-H···π, C-H···π, and C-H···O contacts, were indicated by our computations, some with multiple tentacles. Interestingly, despite the multiple contacts in the complexes, the barrier between some of the structures is small, and hence, FA grazes around PhAc, even while being tethered to it, with hydrogen bonds. We used matrix isolation infrared spectroscopy to experimentally study the PhAc-FA complexes, with which we located global and a few local minima, involving primarily an O-H···π interaction. Experiments were corroborated by ab initio computations, which were performed using MP2 and M06-2X methods, with 6-311++G(d,p) and aug-cc-pVDZ basis sets. Single-point energy calculations were also done at MP2/CBS and CCSD(T)/CBS levels. The nature, strength, and origin of these noncovalent interactions were studied using AIM, NBO, and LMO-EDA analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun
Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes because it models neither complex decay schemes nor the time-dependent decay process. To extend the versatility of SimSET to the simulation of such multi-photon emission isotopes, a time-resolved multiple photon history generator based on the SimSET code is developed in the present study. For the time-resolved SimSET (trSimSET) with radionuclide decay process, the new MCS model introduces new features, including decay time information and photon time-of-flight information. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials as compared to GATE. To sum up, we demonstrated that the time-resolved SimSET multiple photon history generator achieves accuracy comparable to GATE while retaining better computational efficiency. The new MCS code is very useful for studying multi-photon imaging with novel isotopes that requires simulation of lifetimes and time-of-flight measurements. (authors)
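The time-resolved ingredient can be illustrated by sampling emission times for a photon cascade, with each excited level contributing an exponential delay of mean T½/ln 2. The half-lives below are illustrative placeholders, not ENSDF values, and the cascade is hypothetical.

```python
# Sketch: Monte Carlo emission times for a three-photon cascade; each
# level with half-life T adds an exponential delay of mean T / ln 2.
import numpy as np

rng = np.random.default_rng(2)
half_lives_ns = [0.0, 85.0, 0.7]     # per level; 0.0 means prompt emission

def cascade_times(n_decays):
    times = np.zeros(n_decays)
    arrivals = []
    for t_half in half_lives_ns:
        if t_half > 0.0:
            times = times + rng.exponential(t_half / np.log(2), n_decays)
        arrivals.append(times.copy())   # emission time of this photon
    return arrivals

g1, g2, g3 = cascade_times(100_000)
print("mean delay between photons 2 and 3: %.2f ns" % (g3 - g2).mean())
```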
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Xudong; Hoeksema, J. Todd; Liu, Yang
We report the evolution of the magnetic field and its energy in NOAA active region 11158 over five days, based on a vector magnetogram series from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). Fast flux emergence and strong shearing motion led to a quadrupolar sunspot complex that produced several major eruptions, including the first X-class flare of Solar Cycle 24. Extrapolated nonlinear force-free coronal fields show substantial electric current and free energy increase during early flux emergence near a low-lying sigmoidal filament with a sheared kilogauss field in the filament channel. The computed magnetic free energy reaches a maximum of ≈2.6 × 10³² erg, about 50% of which is stored below 6 Mm. It decreases by ≈0.3 × 10³² erg within 1 hr of the X-class flare, which is likely an underestimate of the actual energy loss. During the flare, the photospheric field changed rapidly: the horizontal field was enhanced by 28% in the core region, becoming more inclined and more parallel to the polarity inversion line. Such change is consistent with the conjectured coronal field 'implosion' and is supported by the coronal loop retraction observed by the Atmospheric Imaging Assembly (AIA). The extrapolated field becomes more 'compact' after the flare, with shorter loops in the core region, probably because of reconnection. The coronal field becomes slightly more sheared in the lowest layer, relaxes faster with height, and is overall less energetic.
High Performance Computing Meets Energy Efficiency
Continuum Magazine | NREL
Simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL. The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data
Energy Consumption Management of Virtual Cloud Computing Platform
NASA Astrophysics Data System (ADS)
Li, Lin
2017-11-01
Research on energy consumption management for virtual cloud computing platforms requires a deeper understanding of the energy consumption of both virtual machines and the cloud platform itself; only then can the problems facing energy consumption management be solved. The key problem is data centers with high energy consumption, so new scientific techniques are greatly needed. Virtualization technology and cloud computing have become powerful tools in everyday life, work and production because of their strengths and many advantages, and both are developing rapidly, with very high resource utilization rates. The presence of virtualization and cloud computing technologies is therefore essential in the constantly developing information age. This paper summarizes, explains and further analyzes the energy consumption management questions of the virtual cloud computing platform, giving a clearer understanding of energy consumption management for such platforms.
NASA Astrophysics Data System (ADS)
Santana, Victor Mancir da Silva; David, Denis; de Almeida, Jailton Souza; Godet, Christian
2018-06-01
A Fourier transform (FT) algorithm is proposed to retrieve the energy loss function (ELF) of solid surfaces from experimental X-ray photoelectron spectra. The intensity measured over a broad energy range towards lower kinetic energies results from convolution of four spectral distributions: photoemission line shape, multiple plasmon loss probability, X-ray source line structure and Gaussian broadening of the photoelectron analyzer. The FT of the measured XPS spectrum, including the zero-loss peak and all inelastic scattering mechanisms, being a mathematical function of the respective FT of X-ray source, photoemission line shape, multiple plasmon loss function, and Gaussian broadening of the photoelectron analyzer, the proposed algorithm gives straightforward access to the bulk ELF and effective dielectric function of the solid, assuming identical ELF for intrinsic and extrinsic plasmon excitations. This method is applied to aluminum single crystal Al(002) where the photoemission line shape has been computed accurately beyond the Doniach-Sunjic approximation using the Mahan-Wertheim-Citrin approach which takes into account the density of states near the Fermi level; the only adjustable parameters are the singularity index and the broadening energy D (inverse hole lifetime). After correction for surface plasmon excitations, the q-averaged bulk loss function,
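The Fourier-division idea can be demonstrated on synthetic data: convolve a known "loss" feature with an instrumental broadening, then recover it by regularized division in Fourier space. All signals below are synthetic Gaussians; this is a sketch of the principle only, not the authors' full ELF retrieval with its photoemission line shape and plasmon model.

```python
# Sketch: recover a convolved feature by regularized Fourier division.
import numpy as np

x = np.linspace(-50.0, 50.0, 4096)
loss = np.exp(-0.5 * ((x - 15.0) / 2.0) ** 2)   # synthetic "loss" feature
instr = np.exp(-0.5 * (x / 1.5) ** 2)           # source + analyzer broadening
instr /= instr.sum()

K = np.fft.fft(np.fft.ifftshift(instr))         # kernel centred at sample 0
measured = np.fft.ifft(np.fft.fft(loss) * K).real   # forward convolution

eps = 1e-3                                      # regularizer (noise guard)
recovered = np.fft.ifft(np.fft.fft(measured) * np.conj(K)
                        / (np.abs(K) ** 2 + eps)).real
print("loss peak recovered at x = %.1f" % x[np.argmax(recovered)])
```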
Resampling probability values for weighted kappa with multiple raters.
Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E
2008-04-01
A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
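A rough stand-in for such a procedure (pairwise quadratic-weighted kappa averaged over rater pairs, with a permutation p-value; not the exact Mielke-Berry-Johnston statistic) can be written as:

```python
# Illustrative multi-rater weighted kappa with a resampled p-value.
import numpy as np
from itertools import combinations

def weighted_kappa(a, b, n_cat):
    w = np.subtract.outer(np.arange(n_cat), np.arange(n_cat)) ** 2
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= len(a)
    exp = np.outer(obs.sum(1), obs.sum(0))       # chance agreement
    return 1.0 - (w * obs).sum() / (w * exp).sum()

def mean_kappa(ratings, n_cat):
    return np.mean([weighted_kappa(ratings[i], ratings[j], n_cat)
                    for i, j in combinations(range(len(ratings)), 2)])

rng = np.random.default_rng(3)
base = rng.integers(0, 4, 100)                   # 100 subjects, 4 categories
ratings = np.array([np.clip(base + rng.integers(-1, 2, 100), 0, 3)
                    for _ in range(3)])          # three noisy raters

k_obs = mean_kappa(ratings, 4)
perm = [mean_kappa(np.array([rng.permutation(r) for r in ratings]), 4)
        for _ in range(500)]                     # break subject alignment
p = (np.sum(np.array(perm) >= k_obs) + 1) / (len(perm) + 1)
print(f"mean weighted kappa = {k_obs:.3f}, resampled p = {p:.3f}")
```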
Stoffregen, Stacey A; Lee, Stephanie Y; Dickerson, Pearl; Jenks, William S
2014-02-01
CASSCF and multireference MP2 calculations were carried out on thiophene-S-oxide (TO) and selenophene-Se-oxide (SeO), comparing the energies of the ground state to the first two electronically excited singlet and triplet states, using constrained optimizations and multiple fixed S-O or Se-O distances. For both molecules, one of the two triplet states smoothly dissociates to yield O(³P) with little or no barrier. Single-point calculations are consistent with the same phenomenon occurring for dibenzothiophene-S-oxide (DBTO). This provides an explanation for the inefficient unimolecular photochemical dissociation of O(³P) from DBTO despite a phosphorescence energy below that of S-O dissociation, i.e., that S-O scission probably occurs from a spectroscopically unobserved triplet (T2) state.
NASA Astrophysics Data System (ADS)
Al-Refaie, Ahmed F.; Tennyson, Jonathan
2017-12-01
Construction and diagonalization of the Hamiltonian matrix is the rate-limiting step in most low-energy electron-molecule collision calculations. Tennyson (1996) implemented a novel algorithm for Hamiltonian construction which took advantage of the structure of the wavefunction in such calculations. This algorithm is re-engineered to make use of modern computer architectures, and the use of appropriate diagonalizers is considered. Test calculations demonstrate that significant speed-ups can be gained using multiple CPUs. This opens the way to calculations which consider higher collision energies, larger molecules and/or more target states. The methodology, which is implemented as part of the UK molecular R-matrix codes (UKRMol and UKRMol+), can also be used for studies of bound molecular Rydberg states, photoionization and positron-molecule collisions.
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
Electronic levels and charge distribution near the interface of nickel
NASA Technical Reports Server (NTRS)
Waber, J. T.
1982-01-01
The energy levels in clusters of nickel atoms were investigated by means of a series of cluster calculations using both the multiple scattering technique and a computational technique (designated SSO) which avoids the muffin-tin approximation. The point group symmetry of the cluster has a significant effect on the energy of levels nominally not occupied. This influences the electron transfer process during chemisorption. The SSO technique permits the approaching atom or molecule plus a small number of nickel atoms to be treated as a cluster. Specifically, molecular levels become more negative in the O atom, as well as in a CO molecule, as the metal atoms are approached. Thus, electron transfer from the nickel and bond formation are facilitated. This result is of importance in understanding chemisorption and catalytic processes.
NASA Astrophysics Data System (ADS)
Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena
2017-09-01
The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.
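For orientation, solving the local part of such a problem is already instructive: the sketch below integrates the s-wave radial Schrödinger equation for a Woods-Saxon well with the Numerov method and extracts a phase shift. Parameters are illustrative, and the R-matrix/Lagrange-mesh machinery, the non-local Perey-Buck term and channel coupling of the study are not reproduced.

```python
# Numerov integration of u'' = -(E - V(r)) u / (hbar^2/2m) for l = 0,
# followed by a two-point match to free solutions for the phase shift.
import numpy as np

hbar2_2m = 20.736                  # MeV fm^2, roughly hbar^2/2m per nucleon

def woods_saxon(r, V0=-45.0, R0=3.0, a=0.65):   # illustrative parameters
    return V0 / (1.0 + np.exp((r - R0) / a))

E = 10.0                           # MeV
h = 0.001
r = np.arange(h, 20.0, h)
k2 = (E - woods_saxon(r)) / hbar2_2m            # local wavenumber squared

u = np.zeros_like(r)
u[0], u[1] = 0.0, h                # regular solution near the origin
f = 1.0 + (h * h / 12.0) * k2
for i in range(1, len(r) - 1):
    u[i + 1] = ((12.0 - 10.0 * f[i]) * u[i] - f[i - 1] * u[i - 1]) / f[i + 1]

# Outside the well u ~ sin(kr + delta); match at two outer points
k = np.sqrt(E / hbar2_2m)
i1, i2 = len(r) - 1000, len(r) - 1
num = u[i2] * np.sin(k * r[i1]) - u[i1] * np.sin(k * r[i2])
den = u[i1] * np.cos(k * r[i2]) - u[i2] * np.cos(k * r[i1])
print("s-wave phase shift: %.3f rad" % np.arctan2(num, den))
```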
NASA Astrophysics Data System (ADS)
Henry, Jackson; Blair, Enrique P.
2018-02-01
Mixed-valence molecules provide an implementation for a high-speed, energy-efficient paradigm for classical computing known as quantum-dot cellular automata (QCA). The primitive device in QCA is a cell, a structure with multiple quantum dots and a few mobile charges. A single mixed-valence molecule can function as a cell, with redox centers providing quantum dots. The charge configuration of a molecule encodes binary information, and device switching occurs via intramolecular electron transfer between dots. Arrays of molecular cells adsorbed onto a substrate form QCA logic. Individual cells in the array are coupled locally via the electrostatic electric field. This device networking enables general-purpose computing. Here, a quantum model of a two-dot molecule is built in which the two-state electronic system is coupled to the dominant nuclear vibrational mode via a reorganization energy. This model is used to explore the effects of the electronic inter-dot tunneling (coupling) matrix element and the reorganization energy on device switching. A semi-classical reduction of the model also is made to investigate the competition between field-driven device switching and the electron-vibrational self-trapping. A strong electron-vibrational coupling (high reorganization energy) gives rise to self-trapping, which inhibits the molecule's ability to switch. Nonetheless, there remains an expansive area in the tunneling-reorganization phase space where molecules can support adequate tunneling. Thus, the relationship between the tunneling matrix element and the reorganization energy affords significant leeway in the design of molecules viable for QCA applications.
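A semiclassical version of this two-state model fits in a few lines: diagonalize the 2×2 electronic Hamiltonian along the vibrational coordinate q and add the elastic energy, with the coupling parameterized so that lam equals the Marcus reorganization energy. All numbers are illustrative assumptions, not values from the paper.

```python
# Two-dot cell: two-state electronic system coupled to one vibration q.
import numpy as np

gamma = 0.05      # inter-dot tunneling matrix element (eV), illustrative
lam = 0.4         # reorganization energy (eV), illustrative
E_field = 0.02    # detuning from a neighbouring cell's field (eV)

q = np.linspace(-1.5, 1.5, 301)       # dimensionless vibrational coordinate
E0 = np.empty_like(q)
pol = np.empty_like(q)
for i, qi in enumerate(q):
    # diabatic dot energies shifted linearly by q; coupled by gamma
    H = np.array([[ 0.5 * lam * qi + 0.5 * E_field, -gamma],
                  [-gamma, -0.5 * lam * qi - 0.5 * E_field]])
    w, v = np.linalg.eigh(H)
    E0[i] = w[0] + 0.25 * lam * qi * qi    # add elastic energy of the mode
    pol[i] = v[1, 0] ** 2 - v[0, 0] ** 2   # ground-state dot polarization

imin = np.argmin(E0)
print("minimum at q = %.2f, polarization = %.2f" % (q[imin], pol[imin]))
```

With 2·gamma well below lam, the minimum sits far from q = 0 and the polarization saturates near ±1, which is the self-trapping regime the abstract describes; raising gamma (or lowering lam) restores switchability.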
Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication
ERIC Educational Resources Information Center
Wolf, Michael Maclean
2009-01-01
Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…
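The object being optimized can be made concrete with a row-partitioned sparse matrix-vector multiply and a count of the communication volume a naive split incurs; hypergraph partitioning would choose the partition to minimize exactly this quantity. The matrix below is random and the two-way row split deliberately naive.

```python
# 1D row-partitioned SpMV plus the communication volume of the split.
import numpy as np
from scipy.sparse import random as sprandom

A = sprandom(1000, 1000, density=0.005, format="csr", random_state=42)
x = np.random.default_rng(4).random(1000)

parts = [np.arange(0, 500), np.arange(500, 1000)]   # naive two-way split
y = np.empty(1000)
volume = 0
for rows in parts:
    block = A[rows]                        # rows owned by this part
    y[rows] = block @ x                    # local SpMV
    needed = np.unique(block.indices)      # x-entries this part touches
    volume += np.setdiff1d(needed, rows).size  # entries owned elsewhere
print("communication volume (naive split):", volume)
```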
Reconstituted Three-Dimensional Interactive Imaging
NASA Technical Reports Server (NTRS)
Hamilton, Joseph; Foley, Theodore; Duncavage, Thomas; Mayes, Terrence
2010-01-01
A method combines two-dimensional images, enhancing the images as well as rendering a 3D, enhanced, interactive computer image or visual model. Any advanced compiler can be used in conjunction with any graphics library package for this method, which is intended to take digitized images and virtually stack them so that they can be interactively viewed as a set of slices. This innovation can take multiple image sources (film or digital) and create a "transparent" image, with higher densities in the image being less transparent. The images are then stacked such that an apparent 3D object is created in virtual space for interactive review of the set of images. This innovation can be used with any application where 3D images are taken as slices of a larger object. These could include machines, materials for inspection, geological objects, or human scanning. Luminous values were stacked into planes with different transparency levels for different tissues. These transparency levels can use multiple energy levels, such as the density of CT scans or radioactive density. A desktop computer with enough video memory to produce the image is capable of this work. The memory required changes with the size and resolution of the desired images to be stacked and viewed.
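The stacking idea can be sketched as alpha compositing of synthetic slices, with intensity mapped to opacity so that denser material is less transparent; the data and the opacity scale are illustrative assumptions, not the method's actual rendering pipeline.

```python
# Sketch: front-to-back alpha compositing of a stack of 2D slices.
import numpy as np

rng = np.random.default_rng(5)
slices = rng.random((40, 128, 128)) ** 4        # 40 mostly dim slices
slices[15:25, 40:80, 40:80] += 0.6              # hypothetical dense object

alpha = np.clip(slices, 0.0, 1.0) * 0.2         # density -> opacity
out = np.zeros((128, 128))
transmit = np.ones((128, 128))                  # light not yet absorbed
for a, s in zip(alpha, slices):                 # front-to-back "over"
    out += transmit * a * s
    transmit *= (1.0 - a)

print("composited image range:", out.min(), out.max())
```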
SLEEC: Semantics-Rich Libraries for Effective Exascale Computation. Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulkarni, Milind
SLEEC (Semantics-rich Libraries for Effective Exascale Computation) was a project funded by the Department of Energy X-Stack Program, award number DE-SC0008629. The initial project period was September 2012–August 2015. The project was renewed for an additional year, expiring August 2016. Finally, the project received a no-cost extension, leading to a final expiry date of August 2017. Modern applications, especially those intended to run at exascale, are not written from scratch. Instead, they are built by stitching together various carefully written, hand-tuned libraries. Correctly composing these libraries is difficult, but traditional compilers are unable to effectively analyze and transform across abstraction layers. Domain-specific compilers integrate semantic knowledge into compilers, allowing them to transform applications that use particular domain-specific languages or domain libraries. But they do not help when new domains are developed, or when applications span multiple domains. SLEEC aims to fix these problems. To do so, we are building generic compiler and runtime infrastructures that are semantics-aware but not domain-specific. By performing optimizations related to the semantics of a domain library, the same infrastructure can be made generic and apply across multiple domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hules, John
This 1998 annual report from the National Energy Research Scientific Computing Center (NERSC) presents the year in review in the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.
ERIC Educational Resources Information Center
Namdar, Bahadir; Shen, Ji
2018-01-01
Computer-supported collaborative learning (CSCL) environments provide learners with multiple representational tools for storing, sharing, and constructing knowledge. However, little is known about how learners organize knowledge through multiple representations about complex socioscientific issues. Therefore, the purpose of this study was to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crabtree, George; Glotzer, Sharon; McCurdy, Bill
This report is based on an SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness.
The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following:
- Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration.
- Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies.
- Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is the coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales.
- Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex.
- Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential.
- Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.
Optical Interconnection Via Computer-Generated Holograms
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Zhou, Shaomin
1995-01-01
Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
Numerical analysis of single and multiple jets
NASA Astrophysics Data System (ADS)
Boussoufi, Mustapha; Sabeur-Bendehina, Amina; Ouadha, Ahmed; Morsli, Souad; El Ganaoui, Mohammed
2017-05-01
The present study aims to use the concept of entropy generation in order to study numerically the flow and the interaction of multiple jets. Several configurations of a single jet surrounded by equidistant 3, 5, 7 and 9 circumferential jets have been studied. The turbulent incompressible Navier-Stokes equations have been solved numerically using the commercial computational fluid dynamics code Fluent. The standard k-ɛ model has been selected to assess the eddy viscosity. The domain has been reduced to a quarter of the geometry due to symmetry. Results for axial and radial velocities have been compared with experimental measurements from the literature. Furthermore, additional results involving entropy generation rate have been presented and discussed. Contribution to the topical issue "Materials for Energy harvesting, conversion and storage II (ICOME 2016)", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui
Inverse Thermal Analysis of Titanium GTA Welds Using Multiple Constraints
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Shabaev, A.; Huang, L.
2015-06-01
Inverse thermal analysis of titanium gas-tungsten-arc welds using multiple constraint conditions is presented. This analysis employs a methodology based on numerical-analytical basis functions for the inverse thermal analysis of steady-state energy deposition in plate structures. The results of this type of analysis provide parametric representations of weld temperature histories that can be adopted as input data to various types of computational procedures, such as those for prediction of solid-state phase transformations. In addition, these temperature histories can be used to construct parametric function representations for inverse thermal analysis of welds corresponding to other process parameters or welding processes whose process conditions are within similar regimes. The present study applies an inverse thermal analysis procedure that provides for the inclusion of constraint conditions associated with both solidification and phase transformation boundaries.
NASA Technical Reports Server (NTRS)
Mckenzie, R. L.
1975-01-01
A semiclassical model of the inelastic collision between a vibrationally excited anharmonic oscillator and a structureless atom was used to predict the variation of thermally averaged vibration-translation rate coefficients with temperature and initial-state quantum number. Multiple oscillator states were included in a numerical solution for collinear encounters. The results are compared with CO-He experimental values for both ground and excited initial states using several simplified forms of the interaction potential. The numerical model was also used as a basis for evaluating several less complete but analytic models. Two computationally simple analytic approximations were found that successfully reproduced the numerical rate coefficients for a wide range of molecular properties and collision partners. Their limitations were also identified. The relative rates of multiple-quantum transitions from excited states were evaluated for several molecular types.
Multiple elastic scattering of electrons in condensed matter
NASA Astrophysics Data System (ADS)
Jablonski, A.
2017-01-01
Since the 1940s, much attention has been devoted to the problem of accurate theoretical description of electron transport in condensed matter. The needed information for describing different aspects of the electron transport is the angular distribution of electron directions after multiple elastic collisions. This distribution can be expanded into a series of Legendre polynomials with coefficients A_l. In the present work, a database of these coefficients for all elements up to uranium (Z=92) and a dense grid of electron energies varying from 50 to 5000 eV has been created. The database makes possible the following applications: (i) accurate interpolation of coefficients A_l for any element and any energy from the above range, (ii) fast calculations of the differential and total elastic-scattering cross sections, (iii) determination of the angular distribution of directions after multiple collisions, (iv) calculations of the probability of elastic backscattering from solids, and (v) calculations of the calibration curves for determination of the inelastic mean free paths of electrons. The last two applications provide data with comparable accuracy to Monte Carlo simulations, yet the running time is decreased by several orders of magnitude. All of the above applications are implemented in the Fortran program MULTI_SCATT. Numerous illustrative runs of this program are described. Despite a relatively large volume of the database of coefficients A_l, the program MULTI_SCATT can be readily run on personal computers.
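For orientation, a minimal sketch (not the MULTI_SCATT program itself) of how such a Legendre expansion is evaluated; the (2l+1)/(4π) normalization convention and the coefficient values below are assumptions for illustration only.

```python
import numpy as np
from scipy.special import eval_legendre

def angular_distribution(A, theta):
    """Evaluate f(theta) = sum_l (2l+1)/(4*pi) * A_l * P_l(cos theta)."""
    x = np.cos(theta)
    return sum((2 * l + 1) / (4 * np.pi) * a_l * eval_legendre(l, x)
               for l, a_l in enumerate(A))

A = [1.0, 0.62, 0.31, 0.12]            # hypothetical coefficients, illustration only
theta = np.linspace(0.0, np.pi, 181)   # scattering angles
f = angular_distribution(A, theta)     # angular distribution after multiple collisions
```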
NASA Astrophysics Data System (ADS)
Parker, Jeffrey; Lodestro, Lynda; Told, Daniel; Merlo, Gabriele; Ricketson, Lee; Campos, Alejandro; Jenko, Frank; Hittinger, Jeffrey
2017-10-01
Predictive whole-device simulation models will play an increasingly important role in ensuring the success of fusion experiments and accelerating the development of fusion energy. In the core of tokamak plasmas, a separation of timescales between turbulence and transport makes a single direct simulation of both processes computationally expensive. We present the first demonstration of a multiple-timescale method coupling global gyrokinetic simulations with a transport solver to calculate the self-consistent, steady-state temperature profile. Initial results are highly encouraging, with the coupling method appearing robust to the difficult problem of turbulent fluctuations. The method holds potential for integrating first-principles turbulence simulations into whole-device models and advancing the understanding of global plasma behavior. Work supported by US DOE under Contract DE-AC52-07NA27344 and the Exascale Computing Project (17-SC-20-SC).
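A minimal sketch of the timescale-separated coupling loop this abstract describes, with toy stand-ins: a critical-gradient flux model replaces the gyrokinetic code and an explicit relaxation step replaces the implicit transport solver. None of this is the authors' implementation.

```python
import numpy as np

def run_gyrokinetic(T, dx=1.0):
    """Toy critical-gradient flux model standing in for a short turbulence run."""
    grad = -np.gradient(T, dx)
    return np.maximum(grad - 0.5, 0.0)          # flux only above a critical gradient

def transport_step(T, flux, source=0.02, relax=0.2, dx=1.0):
    """Toy relaxed transport update standing in for the implicit solver."""
    return T + relax * (-np.gradient(flux, dx) + source)

T = np.linspace(3.0, 1.0, 32)                   # initial temperature profile
for _ in range(500):                            # slow-timescale iteration
    T_new = transport_step(T, run_gyrokinetic(T))
    if np.max(np.abs(T_new - T)) < 1e-8:
        break                                   # self-consistent steady state reached
    T = T_new
```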
2013-01-01
Background: Elucidating the native structure of a protein molecule from its sequence of amino acids, a problem known as de novo structure prediction, is a long standing challenge in computational structural biology. Difficulties in silico arise due to the high dimensionality of the protein conformational space and the ruggedness of the associated energy surface. The issue of multiple minima is a particularly troublesome hallmark of energy surfaces probed with current energy functions. In contrast to the true energy surface, these surfaces are weakly-funneled and rich in comparably deep minima populated by non-native structures. For this reason, many algorithms seek to be inclusive and obtain a broad view of the low-energy regions through an ensemble of low-energy (decoy) conformations. Conformational diversity in this ensemble is key to increasing the likelihood that the native structure has been captured.
Methods: We propose an evolutionary search approach to address the multiple-minima problem in decoy sampling for de novo structure prediction. Two population-based evolutionary search algorithms are presented that follow the basic approach of treating conformations as individuals in an evolving population. Coarse graining and molecular fragment replacement are used to efficiently obtain protein-like child conformations from parents. Potential energy is used both to bias parent selection and determine which subset of parents and children will be retained in the evolving population. The effect on the decoy ensemble of sampling minima directly is measured by additionally mapping a conformation to its nearest local minimum before considering it for retention. The resulting memetic algorithm thus evolves not just a population of conformations but a population of local minima.
Results and conclusions: Results show that both algorithms are effective in terms of sampling conformations in proximity of the known native structure. The additional minimization is shown to be key to enhancing sampling capability and obtaining a diverse ensemble of decoy conformations, circumventing premature convergence to sub-optimal regions in the conformational space, and approaching the native structure with proximity that is comparable to state-of-the-art decoy sampling methods. The results are shown to be robust and valid when using two representative state-of-the-art coarse-grained energy functions. PMID:24565020
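A minimal sketch of the memetic loop described above, on a toy one-dimensional rugged energy surface: parents are selected with an energy bias, a random perturbation stands in for fragment replacement, and each child is mapped to its nearest local minimum before retention. All functions and parameters are illustrative, not the authors' algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Toy rugged surface standing in for a protein energy function."""
    return x**2 + 1.5 * np.sin(8 * x)

def local_minimum(x, step=0.01, iters=200):
    """Crude gradient descent to the nearest local minimum (the memetic step)."""
    for _ in range(iters):
        g = (energy(x + 1e-5) - energy(x - 1e-5)) / 2e-5
        x -= step * g
    return x

pop = rng.uniform(-3, 3, size=20)
for gen in range(50):
    w = np.exp(-energy(pop)); w /= w.sum()              # energy-biased parent selection
    parents = rng.choice(pop, size=20, p=w)
    children = parents + rng.normal(0, 0.3, 20)         # stands in for fragment replacement
    children = np.array([local_minimum(c) for c in children])
    merged = np.concatenate([pop, children])
    pop = merged[np.argsort(energy(merged))[:20]]       # retain lowest-energy individuals
```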
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the RBE is unbiased and its variance is usually smaller. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
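A minimal sketch of the Gibbs-sampler idea, simplified to a discrete two-state switch variable with toy harmonic end states (unlike the paper's treatment): λ is drawn from its conditional distribution given the coordinates, and the Rao-Blackwell estimate averages those conditional probabilities instead of counting visits. Potentials and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0

def U(i, x):
    """Toy end-state potentials standing in for two alchemical states."""
    mu, k = [(0.0, 1.0), (0.5, 4.0)][i]
    return k * (x - mu) ** 2

x, visits, rb = 0.0, np.zeros(2), np.zeros(2)
for step in range(50000):
    p = np.exp(-beta * np.array([U(0, x), U(1, x)]))
    p /= p.sum()                                 # conditional distribution of lambda given x
    lam = rng.choice(2, p=p)                     # Gibbs draw of the switch variable
    visits[lam] += 1                             # naive (empirical) estimator
    rb += p                                      # Rao-Blackwell: average the conditionals
    x_new = x + rng.normal(0, 0.5)               # Metropolis update of x given lambda
    if rng.random() < np.exp(-beta * (U(lam, x_new) - U(lam, x))):
        x = x_new

dF_naive = -np.log(visits[1] / visits[0]) / beta
dF_rb = -np.log(rb[1] / rb[0]) / beta            # typically lower variance
```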
Using Multiple Grids To Compute Flows
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
1991-01-01
Paper discusses decomposition of global grids into multiple patched and/or overlaid local grids in computations of fluid flow. Such "domain decomposition" particularly useful in computation of flows about complicated bodies moving relative to each other; for example, flows associated with rotors and stators in turbomachinery and rotors and fuselages in helicopters.
20 CFR 226.74 - Redetermination of reduction.
Code of Federal Regulations, 2010 CFR
2010-04-01
... average of the total wages (including wages that exceed the maximum used in computing social security... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Reduction for Workers' Compensation and Disability... computed. If the result is not a multiple of $1, it is rounded to the next lower multiple of $1; or (2) If...
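The rounding rule in this excerpt reduces to flooring to a whole dollar; a minimal sketch (the $1 granularity is the only detail taken from the regulation):

```python
import math

def round_down_to_dollar(amount: float) -> int:
    """Round a computed amount down to the next lower multiple of $1."""
    return math.floor(amount)   # e.g., 417.80 -> 417
```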
On the number of multiplications necessary to compute a length-2^n DFT
NASA Technical Reports Server (NTRS)
Heideman, M. T.; Burrus, C. S.
1986-01-01
The number of multiplications necessary and sufficient to compute a length-2^n DFT is determined. The method of derivation is shown to apply to the multiplicative complexity results of Winograd (1980, 1981) for a length-p^n DFT, for p an odd prime number. The multiplicative complexity of the one-dimensional DFT is summarized for many possible lengths.
Program Aids Specification Of Multiple-Block Grids
NASA Technical Reports Server (NTRS)
Sorenson, R. L.; Mccann, K. M.
1993-01-01
3DPREP computer program aids specification of multiple-block computational grids. Highly interactive graphical preprocessing program designed for use on powerful graphical scientific computer workstation. Divided into three main parts, each corresponding to principal graphical-and-alphanumerical display. Relieves user of some burden of collecting and formatting many data needed to specify blocks and grids, and prepares input data for NASA's 3DGRAPE grid-generating computer program.
NASA Astrophysics Data System (ADS)
Xu, Jun
Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of the rising energy costs, the government mandate on the reduction of energy consumption, and the human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain, and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed load, and is computationally efficient; it significantly outperforms existing methods.
Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is critically important but difficult for an LSE seeking to serve the load, maximize its profit, and manage risks. In this topic, a mid-term power portfolio optimization problem with risk management is presented. Key instruments are considered, risk terms based on semi-variances of spot market transactions are introduced, and penalties on load obligation violations are added to the objective function to improve algorithm convergence and constraint satisfaction. To overcome the inseparability of the resulting problem, a surrogate optimization framework is developed enabling a decomposition and coordination approach. Numerical testing results show that our method effectively provides decisions for various instruments to maximize profit and manage risks, and is computationally efficient.
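As a small illustration of the risk term mentioned in Topic 2, a sketch of a semi-variance computation over market outcomes; the choice of target (here the sample mean) is an assumption for illustration, not necessarily the dissertation's benchmark.

```python
import numpy as np

def semi_variance(profits, target=None):
    """Downside risk: mean squared shortfall below a target profit."""
    p = np.asarray(profits, dtype=float)
    t = p.mean() if target is None else target
    shortfall = np.minimum(p - t, 0.0)      # only below-target deviations count
    return float(np.mean(shortfall ** 2))

spot = [12.0, 9.5, 14.2, 7.1, 10.3]         # hypothetical spot-market profits
risk = semi_variance(spot)
```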
Shim, Jihyun; Mackerell, Alexander D
2011-05-01
A significant number of drug discovery efforts are based on natural products or high throughput screens from which compounds showing potential therapeutic effects are identified without knowledge of the target molecule or its 3D structure. In such cases computational ligand-based drug design (LBDD) can accelerate the drug discovery processes. LBDD is a general approach to elucidate the relationship of a compound's structure and physicochemical attributes to its biological activity. The resulting structure-activity relationship (SAR) may then act as the basis for the prediction of compounds with improved biological attributes. LBDD methods range from pharmacophore models identifying essential features of ligands responsible for their activity and quantitative structure-activity relationships (QSAR) yielding quantitative estimates of activities based on physicochemical properties, to similarity searching, which explores compounds with similar properties, as well as various combinations of the above. A number of recent LBDD approaches involve the use of multiple conformations of the ligands being studied. One of the basic components to generate multiple conformations in LBDD is molecular mechanics (MM), which applies an empirical energy function to relate conformation to energies and forces. The collection of conformations for ligands is then combined with functional data using methods ranging from regression analysis to neural networks, from which the SAR is determined. Accordingly, for effective application of LBDD for SAR determinations it is important that the compounds be accurately modelled such that the appropriate range of conformations accessible to the ligands is identified. Such accurate modelling is largely based on use of the appropriate empirical force field for the molecules being investigated and the approaches used to generate the conformations. The present chapter includes a brief overview of currently used SAR methods in LBDD followed by a more detailed presentation of issues and limitations associated with empirical energy functions and conformational sampling methods.
Strategy and gaps for modeling, simulation, and control of hybrid systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob
2015-04-01
The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation is necessary to design, evaluate, and optimize the system technical and economic performance. Accordingly, this report first establishes the simulation requirements to analyze candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and output parameters needed to evaluate case-specific figures of merit. Accordingly, the associated computational and co-simulation resources needed are established, including physical models when needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. This report first attempts to describe the figures of merit, systems requirements, and constraints that are necessary and sufficient to characterize the grid and hybrid systems behavior and market interactions. Loss of Load Probability (LOLP) and Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of the technical and economic performance are subsequently discussed. This report further defines modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) systems control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The current development status, quality assurance, availability and maintainability of simulation tools that are currently available for hybrid systems modeling is presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader Roadmap activity for designing, developing, and demonstrating hybrid energy systems.
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
Strong Effects of Vs30 Heterogeneity on Physics-Based Scenario Ground-Shaking Computations
NASA Astrophysics Data System (ADS)
Louie, J. N.; Pullammanappallil, S. K.
2014-12-01
Hazard mapping and building codes worldwide use the vertically time-averaged shear-wave velocity between the surface and 30 meters depth, Vs30, as one predictor of earthquake ground shaking. Intensive field campaigns a decade ago in Reno, Los Angeles, and Las Vegas measured urban Vs30 transects with 0.3-km spacing. The Clark County, Nevada, Parcel Map includes urban Las Vegas and comprises over 10,000 site measurements over 1500 km2, completed in 2010. All of these data demonstrate fractal spatial statistics, with a fractal dimension of 1.5-1.8 at scale lengths from 0.5 km to 50 km. Vs measurements in boreholes up to 400 m deep show very similar statistics at 1 m to 200 m lengths. When included in physics-based earthquake-scenario ground-shaking computations, the highly heterogeneous Vs30 maps exhibit unexpectedly strong influence. In sensitivity tests, low-frequency computations at 0.1 Hz display amplifications (as well as de-amplifications) of 20% due solely to Vs30. In 0.5-1.0 Hz computations, the amplifications are a factor of two or more. At 0.5 Hz and higher frequencies the amplifications can be larger than what the 1-d Building Code equations would predict from the Vs30 variations. Vs30 heterogeneities at one location have strong influence on amplifications at other locations, stretching out in the predominant direction of wave propagation for that scenario. The sensitivity tests show that shaking and amplifications are highly scenario-dependent. Animations of computed ground motions and how they evolve with time suggest that the fractal Vs30 variance acts to trap wave energy and increases the duration of shaking. Validations of the computations against recorded ground motions, possible in Las Vegas Valley due to the measurements of the Clark County Parcel Map, show that ground motion levels and amplifications match, while recorded shaking has longer duration than computed shaking. Several mechanisms may explain the amplification and increased duration of shaking in the presence of heterogeneous spatial distributions of Vs: conservation of wave energy across velocity changes; geometric focusing of waves by low-velocity lenses; vertical resonance and trapping; horizontal resonance and trapping; and multiple conversion of P- to S-wave energy.
Energy consumption program: A computer model simulating energy loads in buildings
NASA Technical Reports Server (NTRS)
Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.
1978-01-01
The JPL energy consumption computer program, developed as a useful tool in the on-going building modification studies in the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of computations is not sacrificed, however, since the results lie within a + or - 10 percent margin of those read from energy meters. The program is carefully structured to reduce both the user's time and running cost by requesting minimal information from the user and reducing many internal time-consuming computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.
NASA Technical Reports Server (NTRS)
Denning, Peter J.; Tichy, Walter F.
1990-01-01
Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to them. Machines designated as multiple-instruction multiple-datastream (MIMD) and single-instruction multiple-datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.
Multiscale computational modeling of a radiantly driven solar thermal collector
NASA Astrophysics Data System (ADS)
Ponnuru, Koushik
The objectives of the master's thesis are to present, discuss and apply sequential multiscale modeling that combines analytical, numerical (finite element-based) and computational fluid dynamic (CFD) analysis to assist in the development of a radiantly driven macroscale solar thermal collector for energy harvesting. The solar thermal collector is a novel green energy system that converts solar energy to heat and utilizes dry air as a working heat transfer fluid (HTF). This energy system has important advantages over competitive technologies: it is self-contained (no energy sources are needed), there are no moving parts, no oil or supplementary fluids are needed and it is environmentally friendly since it is powered by solar radiation. This work focuses on the development of multi-physics and multiscale models for predicting the performance of the solar thermal collector. Model construction and validation are organized around three distinct and complementary levels. The first level involves an analytical analysis of the thermal transpiration phenomenon and models for predicting the associated mass flow pumping that occurs in an aerogel membrane in the presence of a large thermal gradient. Within the aerogel, a combination of convection, conduction and radiation occurs simultaneously in a domain where the pore size is comparable to the mean free path of the gas molecules. CFD modeling of thermal transpiration is not possible because all the available commercial CFD codes solve the Navier-Stokes equations only for continuum flow, which is based on the assumption that the net molecular mass diffusion is zero. However, thermal transpiration occurs in a flow regime where a non-zero net molecular mass diffusion exists. Thus, these effects are modeled by using Sharipov's [2] analytical expression for gas flow characterized by high Knudsen number. The second level uses a detailed CFD model solving the Navier-Stokes equations for momentum, heat and mass transfer in the various components of the device. We have used state-of-the-art computational fluid dynamics (CFD) software, Flow3D (www.flow3d.com), to model the effects of multiple coupled physical processes including buoyancy driven flow from local temperature differences within the plenums, fluid-solid momentum and heat transfer, and coupled radiation exchange between the aerogel, top glazing and environment. In addition, the CFD models include both convection and radiation exchange between the top glazing and the environment. Transient and steady-state thermal models have been constructed using COMSOL Multiphysics. The third level consists of a lumped-element system model, which enables rapid parametric analysis and helps to develop an understanding of the system behavior; the mathematical models developed and multiple CFD simulation studies focus on the simultaneous solution of heat, momentum, mass and gas volume fraction balances and yield accurate state-variable distributions confirmed by experimental measurements.
Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme
NASA Astrophysics Data System (ADS)
Liu, Xianglin; Wang, Yang; Eisenbach, Markus; Stocks, G. Malcolm
2018-03-01
The Green function plays an essential role in the Korringa-Kohn-Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn-Sham equation and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculation by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrated that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.
Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xianglin; Wang, Yang; Eisenbach, Markus
The Green function plays an essential role in the Korringa–Kohn–Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculation by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. Here, by using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrated that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.
Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme
Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...
2017-10-28
The Green function plays an essential role in the Korringa–Kohn–Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculation by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. Here, by using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrated that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Fast simulation of yttrium-90 bremsstrahlung photons with GATE.
Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan
2010-06-01
Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
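A minimal sketch of the fast-generator idea described above: draw bremsstrahlung photon energies by inverse-CDF lookup from a prerecorded PDF instead of transporting electrons. The tabulated shape below is a placeholder, not measured 90Y data; only the 2280 keV beta endpoint is taken from known 90Y physics.

```python
import numpy as np

rng = np.random.default_rng(2)

energies = np.linspace(0.0, 2280.0, 229)       # keV grid up to the 90Y beta endpoint
pdf = np.exp(-energies / 300.0)                # placeholder spectral shape only
pdf /= np.trapz(pdf, energies)                 # normalize the tabulated PDF

cdf = np.cumsum(pdf)
cdf /= cdf[-1]                                 # build the inverse-CDF lookup table
samples = np.interp(rng.random(100000), cdf, energies)   # photon energies, in bulk
```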
Picking a Fight with Water, and Water Lost ... an Electron
NASA Astrophysics Data System (ADS)
Herr, Jonathan D.
The global need for energy is increasing, as is the importance of producing energy by green and renewable methodologies. This document outlines a research program dedicated to investigating a possible source for this form of energy generation and storage: solar fuels. The photon-induced splitting of water into molecular hydrogen and oxygen is currently hindered by large overpotentials from the oxidation half-reaction of water-splitting. This study concentrated on fundamental models of water-splitting chemistry, using a physical and computational chemistry analysis. The oxidation was first explored via ab initio electronic structure calculations of bare cationic water clusters, comprised of 2 to 21 molecules, in order to determine key electronic interactions that facilitate oxidation. Deeper understanding of these interactions could serve as guides for the development of viable water oxidation catalysts (WOC) designed to reduce overpotentials. The cationic water cluster study was followed by an investigation into hydrated copper (I) clusters, which acted as precursor models for real WOCs. Analyzing how the copper ion perturbed the properties of water clusters led to important electronic considerations for the development of WOCs, such as copper-water interactions that go beyond simple electrostatics. The importance of diagnostic thermodynamic properties, as well as anharmonic characteristics being persistent throughout oxidized water clusters, necessitated the use of quantum and classical molecular dynamics (MD) routines. Therefore, two new methods for accelerating computationally demanding classical and quantum MD methods were developed to increase their accessibility. The first method utilized a new form of electronic extrapolation - a linear prediction routine incorporating a Burg minimization - to decrease the iterations required for solving the electronic equations throughout the dynamics. The second method utilized a multiple-timestepping description of the potential energy term in the path integral molecular dynamics (PIMD) formalism. This method led to reductions of computational time by allowing the use of less computationally laborious methods for portions of the simulation and resulted in a negligible increase in error. The determination of the fundamental driving forces within water oxidation and the development of acceleration techniques for important electronic structure methods will help drive progress into fully solar-initiated water oxidation.
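A minimal sketch of the multiple-timestepping idea behind the second acceleration method, in the style of an r-RESPA splitting: the costly "slow" force is evaluated once per outer step while a cheap "fast" force drives inner substeps. The force splitting and parameters are illustrative only, not the dissertation's PIMD implementation.

```python
def fast_force(x):
    return -4.0 * x               # stiff, cheap part of the potential
def slow_force(x):
    return -0.1 * x ** 3          # stands in for the costly potential term

def mts_step(x, v, dt, n_inner=4):
    """One multiple-timestep integration step (velocity-Verlet inside)."""
    v += 0.5 * dt * slow_force(x)             # outer half-kick with the slow force
    h = dt / n_inner
    for _ in range(n_inner):                  # inner substeps with the fast force
        v += 0.5 * h * fast_force(x)
        x += h * v
        v += 0.5 * h * fast_force(x)
    v += 0.5 * dt * slow_force(x)             # closing outer half-kick
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = mts_step(x, v, dt=0.05)
```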
Construction of Logarithm Tables for Galois Fields
ERIC Educational Resources Information Center
Torres-Jimenez, Jose; Rangel-Valdez, Nelson; Gonzalez-Hernandez, Ana Loreto; Avila-George, Himer
2011-01-01
A branch of mathematics commonly used in cryptography is Galois Fields GF(p[superscript n]). Two basic operations performed in GF(p[superscript n]) are the addition and the multiplication. While the addition is generally easy to compute, the multiplication requires a special treatment. A well-known method to compute the multiplication is based on…
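A minimal sketch of one common way such log/antilog tables are built, here for GF(2^8) with the AES reduction polynomial 0x11B and generator 0x03; the field size and generator choice are assumptions for illustration, and the paper's construction may differ.

```python
def build_tables(poly=0x11B):
    """Exp/log tables over GF(2^8) generated by 0x03."""
    exp, log = [0] * 256, [0] * 256
    x = 1
    for i in range(255):
        exp[i], log[x] = x, i
        x ^= (x << 1) ^ (poly if x & 0x80 else 0)   # x <- x * 0x03, with reduction
    return exp, log

def gf_mul(a, b, exp, log):
    """Multiplication via the tables: add logs, take antilog."""
    return 0 if a == 0 or b == 0 else exp[(log[a] + log[b]) % 255]

exp, log = build_tables()
assert gf_mul(0x57, 0x83, exp, log) == 0xC1   # the classic FIPS-197 example product
```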
10 CFR 727.4 - Is there any expectation of privacy applicable to a DOE computer?
Code of Federal Regulations, 2012 CFR
2012-01-01
... Communications Privacy Act of 1986), no user of a DOE computer shall have any expectation of privacy in the use... computer? 727.4 Section 727.4 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.4 Is there any expectation of privacy applicable to a DOE computer...
10 CFR 727.4 - Is there any expectation of privacy applicable to a DOE computer?
Code of Federal Regulations, 2014 CFR
2014-01-01
... Communications Privacy Act of 1986), no user of a DOE computer shall have any expectation of privacy in the use... computer? 727.4 Section 727.4 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.4 Is there any expectation of privacy applicable to a DOE computer...
10 CFR 727.4 - Is there any expectation of privacy applicable to a DOE computer?
Code of Federal Regulations, 2013 CFR
2013-01-01
... Communications Privacy Act of 1986), no user of a DOE computer shall have any expectation of privacy in the use... computer? 727.4 Section 727.4 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.4 Is there any expectation of privacy applicable to a DOE computer...
10 CFR 727.4 - Is there any expectation of privacy applicable to a DOE computer?
Code of Federal Regulations, 2010 CFR
2010-01-01
... Communications Privacy Act of 1986), no user of a DOE computer shall have any expectation of privacy in the use... computer? 727.4 Section 727.4 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.4 Is there any expectation of privacy applicable to a DOE computer...
10 CFR 727.4 - Is there any expectation of privacy applicable to a DOE computer?
Code of Federal Regulations, 2011 CFR
2011-01-01
... Communications Privacy Act of 1986), no user of a DOE computer shall have any expectation of privacy in the use... computer? 727.4 Section 727.4 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.4 Is there any expectation of privacy applicable to a DOE computer...
A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications brings enormous challenges to improving the performance of scheduling multiple computer tasks in cross-layer cloud computing systems. Unfortunately, commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of the computer tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, the algorithms are given based on the framework, and extensive experiments are also given to validate its effectiveness, as well as its superiority.
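A minimal sketch (not the authors' framework) of the dynamic algorithm selection the abstract describes: tasks carrying different objectives are dispatched to different scheduling policies. All names, fields, and policies here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    id: int
    objective: str       # e.g. "latency" or "energy"
    load: float

def schedule_latency(tasks, nodes):
    """Latency-oriented policy: place each task on the fastest node."""
    return {t.id: min(nodes, key=lambda n: n["delay"])["name"] for t in tasks}

def schedule_energy(tasks, nodes):
    """Energy-oriented policy: place each task on the most efficient node."""
    return {t.id: min(nodes, key=lambda n: n["power"])["name"] for t in tasks}

POLICIES = {"latency": schedule_latency, "energy": schedule_energy}

def dispatch(tasks, nodes):
    plan = {}
    for obj in {t.objective for t in tasks}:          # select a policy per objective
        group = [t for t in tasks if t.objective == obj]
        plan.update(POLICIES[obj](group, nodes))
    return plan

tasks = [Task(1, "latency", 0.4), Task(2, "energy", 1.2)]
nodes = [{"name": "edge", "delay": 5.0, "power": 3.0},
         {"name": "cloud", "delay": 20.0, "power": 1.0}]
plan = dispatch(tasks, nodes)   # {1: "edge", 2: "cloud"}
```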
Theorems on symmetries and flux conservation in radiative transfer using the matrix operator theory.
NASA Technical Reports Server (NTRS)
Kattawar, G. W.
1973-01-01
The matrix operator approach to radiative transfer is shown to be a very powerful technique in establishing symmetry relations for multiple scattering in inhomogeneous atmospheres. Symmetries are derived for the reflection and transmission operators using only the symmetry of the phase function. These results will mean large savings in computer time and storage for performing calculations for realistic planetary atmospheres using this method. The results have also been extended to establish a condition on the reflection matrix of a boundary in order to preserve reciprocity. Finally energy conservation is rigorously proven for conservative scattering in inhomogeneous atmospheres.
Swellix: a computational tool to explore RNA conformational space.
Sloat, Nathan; Liu, Jui-Wen; Schroeder, Susan J
2017-11-21
The sequence of nucleotides in an RNA determines the possible base pairs for an RNA fold and thus also determines the overall shape and function of an RNA. The Swellix program presented here combines a helix abstraction with a combinatorial approach to the RNA folding problem in order to compute all possible non-pseudoknotted RNA structures for RNA sequences. The Swellix program builds on the Crumple program and can include experimental constraints on global RNA structures such as the minimum number and lengths of helices from crystallography, cryoelectron microscopy, or in vivo crosslinking and chemical probing methods. The conceptual advance in Swellix is to count helices and generate all possible combinations of helices rather than counting and combining base pairs. Swellix bundles similar helices and includes improvements in memory use and efficient parallelization. Biological applications of Swellix are demonstrated by computing the reduction in conformational space and entropy due to naturally modified nucleotides in tRNA sequences and by motif searches in Human Endogenous Retroviral (HERV) RNA sequences. The Swellix motif search reveals occurrences of protein and drug binding motifs in the HERV RNA ensemble that do not occur in minimum free energy or centroid predicted structures. Swellix presents significant improvements over Crumple in terms of efficiency and memory use. The efficient parallelization of Swellix enables the computation of sequences as long as 418 nucleotides with sufficient experimental constraints. Thus, Swellix provides a practical alternative to free energy minimization tools when multiple structures, kinetically determined structures, or complex RNA-RNA and RNA-protein interactions are present in an RNA folding problem.
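A minimal sketch of the count-helices-not-base-pairs idea: enumerate every mutually compatible (nested or disjoint, non-pseudoknotted) set of candidate helices. The helix encoding and compatibility test are simplifications for illustration, not Swellix's actual data structures.

```python
def compatible(h1, h2):
    """Helices are (i, j, n): n stacked pairs closing (i, j).
    Compatible if disjoint or strictly nested, with no shared bases."""
    (i1, j1, n1), (i2, j2, n2) = sorted([h1, h2])
    if i2 > j1:                                   # disjoint: second helix lies 3' of the first
        return True
    return i2 > i1 + n1 - 1 and j2 < j1 - n1 + 1  # nested inside the first helix's loop

def all_structures(helices, chosen=None, start=0):
    """Yield every combination of mutually compatible helices."""
    chosen = chosen if chosen is not None else []
    yield list(chosen)
    for k in range(start, len(helices)):
        h = helices[k]
        if all(compatible(h, c) for c in chosen):
            chosen.append(h)
            yield from all_structures(helices, chosen, k + 1)
            chosen.pop()

helices = [(0, 20, 3), (5, 15, 2), (22, 35, 4)]   # hypothetical candidate helices
structures = list(all_structures(helices))        # includes the empty (open) structure
```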
In vivo small animal micro-CT using nanoparticle contrast agents
Ashton, Jeffrey R.; West, Jennifer L.; Badea, Cristian T.
2015-01-01
Computed tomography (CT) is one of the most valuable modalities for in vivo imaging because it is fast, high-resolution, cost-effective, and non-invasive. Moreover, CT is heavily used not only in the clinic (for both diagnostics and treatment planning) but also in preclinical research as micro-CT. Although CT is inherently effective for lung and bone imaging, soft tissue imaging requires the use of contrast agents. For small animal micro-CT, nanoparticle contrast agents are used in order to avoid rapid renal clearance. A variety of nanoparticles have been used for micro-CT imaging, but the majority of research has focused on the use of iodine-containing nanoparticles and gold nanoparticles. Both nanoparticle types can act as highly effective blood pool contrast agents or can be targeted using a wide variety of targeting mechanisms. CT imaging can be further enhanced by adding spectral capabilities to separate multiple co-injected nanoparticles in vivo. Spectral CT, using both energy-integrating and energy-resolving detectors, has been used with multiple contrast agents to enable functional and molecular imaging. This review focuses on new developments for in vivo small animal micro-CT using novel nanoparticle probes applied in preclinical research. PMID:26581654
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed source mode, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
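A minimal sketch of the data-parallel pattern WARP exploits, with NumPy arrays standing in for GPU threads: every surviving neutron history advances in bulk each step instead of being followed one at a time. One-speed, infinite-medium toy physics only; this is not WARP's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
alive = np.ones(n, dtype=bool)                   # one entry per neutron history
sigma_t, p_absorb = 1.0, 0.3                     # toy total cross section, absorption prob.

for step in range(100):
    if not alive.any():
        break
    dist = rng.exponential(1.0 / sigma_t, size=n)    # flight distances for all histories
    absorbed = rng.random(n) < p_absorb              # collision outcomes for all histories
    alive &= ~absorbed                               # retire absorbed histories in bulk
```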
Minimizing energy dissipation of matrix multiplication kernel on Virtex-II
NASA Astrophysics Data System (ADS)
Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook
2002-07-01
In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total energy (system-wide energy) dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.
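A minimal sketch of the kind of parameter sweep this methodology implies: model system-wide energy as a function of the storage per processing element and pick the minimizer. The cost model and coefficients below are placeholders, not Virtex-II measurements or the authors' model.

```python
def system_energy(n, s, e_mac=1.0, e_reg=0.02, e_io=6.0):
    """Placeholder system-wide energy model for an n x n matrix multiply."""
    compute = n**3 * e_mac              # multiply-accumulate work is fixed by the problem
    storage = n**2 * s * e_reg          # larger per-PE storage costs access/leakage energy
    io = (n**3 / s) * e_io              # off-chip traffic shrinks as on-chip storage grows
    return compute + storage + io

# Explore the trade-off for a 6x6 multiply across candidate storage sizes.
best_s = min(range(1, 33), key=lambda s: system_energy(6, s))
```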
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
A fast and accurate segmentation algorithm operating in a hierarchical way based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined based on the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving the accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art algorithms for hierarchical segmentation. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. Our algorithm also achieves the best balance between accuracy and computational time: specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
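The adaptive trade-off described above can be made concrete with a minimal sketch: a SLIC-style clustering loop in which each cluster's regularity weight is scaled down where its local data residual is large. The function name, the seeding layout, and the weighting rule lam0 / (1 + residual) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_superpixels(img, n_seg=64, n_iter=10, lam0=0.5):
    """Toy SLIC-like superpixels with an adaptive regularity weight.

    Each cluster's spatial (regularity) weight shrinks where its data
    residual is large, mimicking the residual-driven trade-off between
    data fidelity and regularization. n_seg must be a perfect square
    for this simple grid seeding.
    """
    h, w = img.shape
    side = int(np.sqrt(n_seg))
    ys = np.linspace(0, h - 1, side).astype(int)
    xs = np.linspace(0, w - 1, side).astype(int)
    cy, cx = np.meshgrid(ys, xs, indexing="ij")
    centers = np.stack([cy.ravel(), cx.ravel()], axis=1).astype(float)
    means = img[cy.ravel(), cx.ravel()].astype(float)
    lam = np.full(len(centers), lam0)

    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):
        # assignment step: data term + adaptively weighted spatial term
        cost = np.empty((len(centers), h, w))
        for k, (mu, (py, px)) in enumerate(zip(means, centers)):
            data = (img - mu) ** 2
            spatial = ((yy - py) ** 2 + (xx - px) ** 2) / (h * w)
            cost[k] = data + lam[k] * spatial
        label = cost.argmin(axis=0)
        # update step: means, centers, residual-driven regularity weights
        for k in range(len(centers)):
            mask = label == k
            if not mask.any():
                continue
            means[k] = img[mask].mean()
            centers[k] = [yy[mask].mean(), xx[mask].mean()]
            resid = ((img[mask] - means[k]) ** 2).mean()
            lam[k] = lam0 / (1.0 + resid)  # high residual -> weaker regularity
    return label

labels = adaptive_superpixels(np.random.rand(64, 64))
```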
2015-01-01
Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a “resolution exchange” setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID:25400525
NASA Astrophysics Data System (ADS)
Murni; Bustamam, A.; Ernastuti; Handhika, T.; Kerami, D.
2017-07-01
Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication with matrices of arbitrary size, because graph partitioning assumes a square and symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
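A minimal sketch of why partitioning matters here: in a row-parallel sparse matrix-vector product, each processor must fetch the x-entries referenced by its rows but owned elsewhere, and that fetch count is the communication volume a hypergraph partitioner models exactly (a graph model only approximates it, and only for square symmetric matrices). The function and the block partition below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse import random as sprandom

def spmv_comm_volume(A, row_part, col_part):
    """Count the x-entries each processor must receive from others in a
    row-parallel SpMV. row_part[i] owns row i; col_part[j] owns x[j]."""
    volume = 0
    for p in range(row_part.max() + 1):
        rows = np.where(row_part == p)[0]
        cols = np.unique(A[rows].indices)                 # columns touched by p
        volume += int(np.count_nonzero(col_part[cols] != p))  # owned elsewhere
    return volume

# arbitrary (non-square) sparse matrix: 200 x 150
A = sprandom(200, 150, density=0.02, format="csr", random_state=0)
x = np.ones(150)
y = A @ x                                  # the multiplication itself

# naive contiguous block partition over 4 processors
row_part = np.repeat(np.arange(4), 50)
col_part = np.empty(150, dtype=int)
for p, idx in enumerate(np.array_split(np.arange(150), 4)):
    col_part[idx] = p

# a random partition, for contrast: partition quality drives communication
rng = np.random.default_rng(0)
print("block  partition volume:", spmv_comm_volume(A, row_part, col_part))
print("random partition volume:",
      spmv_comm_volume(A, rng.integers(0, 4, 200), rng.integers(0, 4, 150)))
```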
An Energy-Aware Hybrid ARQ Scheme with Multi-ACKs for Data Sensing Wireless Sensor Networks.
Zhang, Jinhuan; Long, Jun
2017-06-12
Wireless sensor networks (WSNs) are one of the important supporting technologies of edge computing. In WSNs, reliable communications are essential for most applications due to the unreliability of wireless links. In addition, network lifetime is an important performance metric that needs to be considered in many WSN studies. In this paper, an energy-aware hybrid Automatic Repeat-reQuest (ARQ) scheme is proposed to ensure energy efficiency under a guarantee of network transmission reliability. In the scheme, the source node sends data packets continuously with an appropriate window size and does not need to wait for an acknowledgement (ACK) confirmation for each data packet. When the destination receives K data packets, it returns multiple copies of one ACK for confirmation to avoid ACK packet loss. The energy consumption of each node in a flat circular network applying the proposed scheme is statistically analyzed, and the cases under which it is more energy efficient than the original scheme are discussed. Moreover, the selection of the scheme's parameters to extend the network lifetime under the constraint of network reliability is addressed. In addition, the energy efficiency of the proposed scheme is evaluated. Simulation results demonstrate that a reduction in node energy consumption is gained and the network lifetime is prolonged.
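The energy accounting in such a scheme can be illustrated with a rough Monte Carlo comparison against per-packet stop-and-wait ARQ. The loss model, the energy costs e_data and e_ack, and the resend-whole-batch rule on ACK failure are simplifying assumptions for the sketch, not the paper's analytical model.

```python
import random

def multi_ack_arq(n_packets, p_loss, K=8, m=3, e_data=1.0, e_ack=0.2, trials=2000):
    """Rough Monte Carlo of the energy to deliver n_packets: the sender
    streams up to K packets, the receiver replies with m copies of one
    cumulative ACK; if every ACK copy is lost, the batch is resent."""
    total = 0.0
    for _ in range(trials):
        remaining, energy = n_packets, 0.0
        while remaining > 0:
            batch = min(K, remaining)
            energy += batch * e_data
            delivered = sum(random.random() > p_loss for _ in range(batch))
            energy += m * e_ack                       # m copies of one ACK
            if any(random.random() > p_loss for _ in range(m)):
                remaining -= delivered
        total += energy
    return total / trials

def stop_and_wait(n_packets, p_loss, e_data=1.0, e_ack=0.2, trials=2000):
    """Baseline: one ACK per packet, retransmit until both legs succeed."""
    total = 0.0
    for _ in range(trials):
        energy = 0.0
        for _ in range(n_packets):
            while True:
                energy += e_data
                if random.random() > p_loss:          # data arrived
                    energy += e_ack
                    if random.random() > p_loss:      # ACK arrived
                        break
        total += energy
    return total / trials

print(multi_ack_arq(100, 0.1), stop_and_wait(100, 0.1))
```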
Li, Bo; Liu, Yuan
A phase-field free-energy functional for the solvation of charged molecules (e.g., proteins) in aqueous solvent (i.e., water or salted water) is constructed. The functional consists of the solute volumetric and solute-solvent interfacial energies, the solute-solvent van der Waals interaction energy, and the continuum electrostatic free energy described by the Poisson-Boltzmann theory. All these are expressed in terms of phase fields that, for low free-energy conformations, are close to one value in the solute phase and another in the solvent phase. A key property of the model is that the phase-field interpolation of dielectric coefficient has the vanishing derivative at both solute and solvent phases. The first variation of such an effective free-energy functional is derived. Matched asymptotic analysis is carried out for the resulting relaxation dynamics of the diffused solute-solvent interface. It is shown that the sharp-interface limit is exactly the variational implicit-solvent model that has successfully captured capillary evaporation in hydrophobic confinement and corresponding multiple equilibrium states of underlying biomolecular systems as found in experiment and molecular dynamics simulations. Our phase-field approach and analysis can be used to possibly couple the description of interfacial fluctuations for efficient numerical computations of biomolecular interactions.
Improving energy efficiency of Embedded DRAM Caches for High-end Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong
2014-01-01
With increasing system core counts, the size of the last level cache (LLC) has increased, and since SRAM consumes high leakage power, the power consumption of LLCs is becoming a significant fraction of processor power consumption. To address this, researchers have used embedded DRAM (eDRAM) LLCs, which consume low leakage power. However, eDRAM caches consume a significant amount of energy in the form of refresh energy. In this paper, we propose ESTEEM, an energy saving technique for embedded DRAM caches. ESTEEM uses dynamic cache reconfiguration to turn off a portion of the cache to save both leakage and refresh energy. It logically divides the cache sets into multiple modules and turns off a possibly different number of ways in each module. Microarchitectural simulations confirm that ESTEEM is effective in improving performance and energy efficiency and provides better results than a recently proposed eDRAM cache energy saving technique, namely Refrint. For single- and dual-core simulations, the average energy saving in the memory subsystem (LLC + main memory) using ESTEEM is 25.8% and 32.6%, respectively, and the average weighted speedups are 1.09X and 1.22X, respectively. Additional experiments confirm that ESTEEM works well for a wide range of system parameters.
Characterization of normal feline renal vascular anatomy with dual-phase CT angiography.
Cáceres, Ana V; Zwingenberger, Allison L; Aronson, Lillian R; Mai, Wilfried
2008-01-01
Helical computed tomography angiography was used to evaluate the renal vascular anatomy of potential feline renal donors. One hundred and fourteen computed tomography angiograms were reviewed. The vessels were characterized as single without bifurcation, single with bifurcation, double, or triple. Multiplicity was most commonly seen for the right renal vein (45/114 vs. 3/114 multiple left renal veins, 0/114 multiple right renal arteries, and 8/114 multiple left renal arteries). The right kidney was 13.3 times more likely than the left to have multiple renal veins. Additional vascular variants included double caudal vena cava and an accessory renal artery. For the left kidney, surgery and computed tomography angiography findings were in agreement in 92% of 74 cats. For the right kidney, surgery and computed tomography angiography findings were in agreement in 6/6 cats. Our findings of renal vascular anatomy variations in cats were similar to previous reports in humans. Identifying and recognizing the pattern of distribution of these vessels is important when performing renal transplantation.
Instrumentation for Studies of Electron Emission and Charging From Insulators
NASA Technical Reports Server (NTRS)
Thomson, C. D.; Zavyalov, V.; Dennison, J. R.
2004-01-01
Making measurements of electron emission properties of insulators is difficult since insulators can charge either negatively or positively under charged particle bombardment. In addition, high incident energies or high fluences can result in modification of a material's conductivity, bulk and surface charge profile, structural makeup through bond breaking and defect creation, and emission properties. We discuss here some of the charging difficulties associated with making insulator-yield measurements and review the methods used in previous studies of electron emission from insulators. We present work undertaken by our group to make consistent and accurate measurements of the electron/ion yield properties for numerous thin-film and thick insulator materials using innovative instrumentation and techniques. We also summarize some of the necessary instrumentation developed for this purpose, including fast-response, low-noise, high-sensitivity ammeters; signal isolation and interfacing to standard computer data acquisition apparatus using opto-isolation, sample-and-hold, and boxcar integration techniques; computer control, automation, and timing using LabVIEW software; a multiple-sample carousel; a pulsed, compact, low-energy charge-neutralization electron flood gun; and pulsed visible and UV light neutralization sources. This work is supported through funding from the NASA Space Environments and Effects Program and the NASA Graduate Research Fellowship Program.
Davis, Matthew R.; Dougherty, Dennis A.
2015-01-01
Cation-π interactions are common in biological systems, and many structural studies have revealed the aromatic box as a common motif. With the aim of understanding the nature of the aromatic box, several computational methods were evaluated for their ability to reproduce experimental cation-π binding energies. We find the DFT method M06 with the 6-31G(d,p) basis set performs best of several methods tested. The binding of benzene to a number of different cations (sodium, potassium, ammonium, tetramethylammonium, and guanidinium) was studied. In addition, the binding of the organic cations NH4+ and NMe4+ to ab initio generated aromatic boxes as well as examples of aromatic boxes from protein crystal structures were investigated. These data, along with a study of the distance dependence of the cation-π interaction, indicate that multiple aromatic residues can meaningfully contribute to cation binding, even with displacements of more than an angstrom from the optimal cation-π interaction. Progressive fluorination of benzene and indole was studied as well, and binding energies obtained were used to reaffirm the validity of the “fluorination strategy” to study cation-π interactions in vivo. PMID:26467787
Davis, Matthew R; Dougherty, Dennis A
2015-11-21
Cation-π interactions are common in biological systems, and many structural studies have revealed the aromatic box as a common motif. With the aim of understanding the nature of the aromatic box, several computational methods were evaluated for their ability to reproduce experimental cation-π binding energies. We find the DFT method M06 with the 6-31G(d,p) basis set performs best of several methods tested. The binding of benzene to a number of different cations (sodium, potassium, ammonium, tetramethylammonium, and guanidinium) was studied. In addition, the binding of the organic cations NH4(+) and NMe4(+) to ab initio generated aromatic boxes as well as examples of aromatic boxes from protein crystal structures were investigated. These data, along with a study of the distance dependence of the cation-π interaction, indicate that multiple aromatic residues can meaningfully contribute to cation binding, even with displacements of more than an angstrom from the optimal cation-π interaction. Progressive fluorination of benzene and indole was studied as well, and binding energies obtained were used to reaffirm the validity of the "fluorination strategy" to study cation-π interactions in vivo.
First principles statistical mechanics of alloys and magnetism
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai
Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of density functional based first-principles calculations with classical Monte Carlo methods for parameter-free, predictive thermodynamics of materials. We combine our locally self-consistent real-space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally, we present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure, and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division, and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
Calculation of the Curie temperature of Ni using first principles based Wang-Landau Monte-Carlo
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Yin, Junqi; Li, Ying Wai; Nicholson, Don
2015-03-01
We combine constrained first-principles density functional theory with a Wang-Landau Monte Carlo algorithm to calculate the Curie temperature of Ni. Mapping the magnetic interactions in Ni onto a Heisenberg-like model underestimates the Curie temperature. Using a model, we show that including the magnitude of the local magnetic moments can account for the difference in the calculated Curie temperature. For ab initio calculations, we have extended our Locally Selfconsistent Multiple Scattering (LSMS) code to constrain the magnitude of the local moments in addition to their direction, and we apply the Replica Exchange Wang-Landau method to sample the larger phase space efficiently in order to investigate Ni, where fluctuations in the magnitude of the local magnetic moments are as important as their directional fluctuations. We will present our results for Ni, comparing calculations that consider only the moment directions with those that also include fluctuations of the magnetic moment magnitude and their effect on the Curie temperature. This research was sponsored by the Department of Energy, Offices of Basic Energy Science and Advanced Computing. We used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory, supported by US DOE under contract DE-AC05-00OR22725.
Mollet, Mike; Godoy-Silva, Ruben; Berdugo, Claudia; Chalmers, Jeffrey J
2008-06-01
Fluorescence-activated cell sorting (FACS) is a widely used method to sort subpopulations of cells to high purities. To achieve relatively high sorting speeds, FACS instruments operate by forcing suspended cells to flow in single file through one or more laser beams. Subsequently, this flow stream breaks up into individual drops, which can be charged and deflected into multiple collection streams. Previous work by Ma et al. (2002) and Mollet et al. (2007; Biotechnol Bioeng 98:772-788) indicates that subjecting cells to hydrodynamic forces consisting of both high extensional and shear components in micro-channels results in significant cell damage. Using the fluid dynamics software FLUENT, computer simulations of typical fluid flow through the nozzle of a BD FACSVantage indicate that hydrodynamic forces, quantified using the scalar parameter energy dissipation rate, reach levels in the FACS nozzle similar to those reported to create significant cell damage in micro-channels. Experimental studies in the FACSVantage, operated under the same conditions as the simulations, confirmed significant cell damage in two cell lines, Chinese Hamster Ovary (CHO) cells and THP1, a human acute monocytic leukemia cell line.
Dual-Energy Computed Tomography in Cardiothoracic Vascular Imaging.
De Santis, Domenico; Eid, Marwen; De Cecco, Carlo N; Jacobs, Brian E; Albrecht, Moritz H; Varga-Szemes, Akos; Tesche, Christian; Caruso, Damiano; Laghi, Andrea; Schoepf, Uwe Joseph
2018-07-01
Dual energy computed tomography is becoming increasingly widespread in clinical practice. It can expand on the traditional density-based data achievable with single energy computed tomography by adding novel applications to help reach a more accurate diagnosis. The implementation of this technology in cardiothoracic vascular imaging allows for improved image contrast, metal artifact reduction, generation of virtual unenhanced images, virtual calcium subtraction techniques, cardiac and pulmonary perfusion evaluation, and plaque characterization. The improved diagnostic performance afforded by dual energy computed tomography is not associated with an increased radiation dose. This review provides an overview of dual energy computed tomography cardiothoracic vascular applications.
Revealing Nucleic Acid Mutations Using Förster Resonance Energy Transfer-Based Probes
Junager, Nina P. L.; Kongsted, Jacob; Astakhova, Kira
2016-01-01
Nucleic acid mutations are of tremendous importance in modern clinical work, biotechnology, and fundamental studies of nucleic acids. Therefore, rapid, cost-effective, and reliable detection of mutations is an object of extensive research. Today, Förster resonance energy transfer (FRET) probes are among the most often used tools for the detection of nucleic acids and, in particular, for the detection of mutations. However, multiple parameters must be taken into account in order to create efficient FRET probes that are sensitive to nucleic acid mutations. In this review, we focus on the design principles for such probes and the available computational methods that allow for their rational design. Applications of advanced, rationally designed FRET probes range from new insights into cellular heterogeneity to gaining new knowledge of nucleic acid structures directly in living cells. PMID:27472344
iSEDfit: Bayesian spectral energy distribution modeling of galaxies
NASA Astrophysics Data System (ADS)
Moustakas, John
2017-08-01
iSEDfit uses Bayesian inference to extract the physical properties of galaxies from their observed broadband photometric spectral energy distribution (SED). In its default mode, the inputs to iSEDfit are the measured photometry (fluxes and corresponding inverse variances) and a measurement of the galaxy redshift. Alternatively, iSEDfit can be used to estimate photometric redshifts from the input photometry alone. After the priors have been specified, iSEDfit calculates the marginalized posterior probability distributions for the physical parameters of interest, including the stellar mass, star-formation rate, dust content, star formation history, and stellar metallicity. iSEDfit also optionally computes K-corrections and produces multiple "quality assurance" (QA) plots at each stage of the modeling procedure to aid in the interpretation of the prior parameter choices and subsequent fitting results. The software is distributed as part of the impro IDL suite.
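The grid-based Bayesian machinery described above can be sketched in a few lines: a chi-square likelihood over the measured fluxes and inverse variances, posterior weights over a pre-computed model grid, and a marginalized posterior for a parameter such as stellar mass. The toy model grid and the analytic normalization fit below are illustrative assumptions, not iSEDfit's actual model set.

```python
import numpy as np

# observed broadband fluxes and inverse variances (toy numbers)
flux_obs = np.array([1.2, 2.3, 3.1, 2.8])     # arbitrary flux units
ivar = np.array([25.0, 16.0, 25.0, 9.0])      # 1/sigma^2

# a pre-computed model grid: each row is the predicted SED of one model
# with a known physical parameter (here just stellar mass, log Msun)
rng = np.random.default_rng(1)
n_models = 5000
log_mass = rng.uniform(8.0, 12.0, n_models)            # flat prior draws
models = rng.normal(2.0, 0.8, (n_models, 4)) + 0.1 * log_mass[:, None]

# per-model normalization fit analytically, then chi^2 -> posterior weight
scale = (models * flux_obs * ivar).sum(1) / (models**2 * ivar).sum(1)
chi2 = ((flux_obs - scale[:, None] * models) ** 2 * ivar).sum(1)
weight = np.exp(-0.5 * (chi2 - chi2.min()))            # flat prior assumed
weight /= weight.sum()

# marginalized posterior for stellar mass: weighted moments
post_mean = (weight * log_mass).sum()
post_std = np.sqrt((weight * (log_mass - post_mean) ** 2).sum())
print(f"log M* = {post_mean:.2f} +/- {post_std:.2f}")
```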
Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential
NASA Astrophysics Data System (ADS)
Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste
We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and a replica exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering, free energy landscapes for polymethionine and polyproline peptides, and a short β-turn peptide. ABMD has been implemented into the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.
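A minimal grid-based sketch of an evolving umbrella bias in the ABMD spirit, for a 1-D double well under overdamped Langevin dynamics: at each step the bias grows by a mollified delta kernel centered at the current coordinate, at a rate set by kT/tau, and at convergence the accumulated bias mirrors the free energy surface. All parameters and the Gaussian kernel are assumptions; this is not AMBER's implementation.

```python
import numpy as np

kT, dt, tau, gamma = 1.0, 1e-3, 2.0, 1.0
grid = np.linspace(-2.5, 2.5, 201)
bias = np.zeros_like(grid)            # evolving U_bias(xi) on a grid
width = 0.15                          # width of the mollified delta kernel

def force(x):
    """Force from the double well F(x) = (x^2 - 1)^2 plus the grid bias."""
    f_pot = -4.0 * x * (x * x - 1.0)
    f_bias = -np.interp(x, grid, np.gradient(bias, grid))
    return f_pot + f_bias

rng = np.random.default_rng(0)
x = -1.0
for step in range(100000):
    # overdamped Langevin step on the biased potential
    x += force(x) * dt / gamma + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    # flooding: dU_bias/dt = (kT/tau) * G(xi - xi(t))
    bias += (kT / tau) * dt * np.exp(-0.5 * ((grid - x) / width) ** 2)

# at convergence the free energy is mirrored by the bias: F(xi) ~ -U_bias + C
free_energy = -(bias - bias.max())
```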
Electron-impact vibrational relaxation in high-temperature nitrogen
NASA Technical Reports Server (NTRS)
Lee, Jong-Hun
1992-01-01
The vibrational relaxation process of N2 molecules under electron impact is examined for future planetary entry environments. Multiple-quantum transitions from excited states to higher and lower states are considered for the electronic ground state of the nitrogen molecule N2 (X 1Sigma-g(+)). Vibrational excitation and deexcitation rate coefficients obtained from computational quantum chemistry are incorporated into the 'diffusion model' to evaluate the time variations of the vibrational number densities of each energy state and the total vibrational energy. Results show a non-Boltzmann distribution of number densities at the earlier stage of relaxation, which in turn slows the approach to equilibrium but has little effect on the time variation of the total vibrational energy. An approximate rate equation and a corresponding relaxation time from the excited states, compatible with the system of flow conservation equations, are derived. The relaxation time from the excited states shows only a weak dependence on the initial vibrational temperature. An empirical curve-fit formula for the improved e-V relaxation time is obtained.
Disconnections kinks and competing modes in shear-coupled grain boundary migration
NASA Astrophysics Data System (ADS)
Combe, N.; Mompiou, F.; Legros, M.
2016-01-01
The response of small-grained metals to mechanical stress is investigated by a theoretical study of the elementary mechanisms occurring during the shear-coupled migration of grain boundaries (GB). Investigating a model Σ17 (410) GB in a copper bicrystal, both <110> and <100> GB migration modes are studied, focusing on their structural and energetic characteristics. The minimum energy paths of these shear-coupled GB migrations are computed using the nudged elastic band method. For both modes, the GB migration occurs through the nucleation and motion of disconnections. However, the atomic mechanisms of the two modes differ qualitatively: while the <110> mode presents no metastable state, the <100> mode shows multiple metastable states, some of them evidencing kinks along the disconnection lines. Disconnection kink nucleation and motion activation energies are evaluated. Moreover, the activation energies of the <100> mode are smaller than those of the <110> one except at very high stresses. These results significantly improve our knowledge of the GB migration mechanisms and the conditions under which they occur.
Meng, Yilin; Roux, Benoît
2015-08-11
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
2015-01-01
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437
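The regression idea above can be sketched directly: for window i and bin b, ln p_ib = ln p0_b - U_i(x_b)/kT + c_i, which is linear in the unknowns ln p0_b and the window offsets c_i, so a single least-squares solve replaces the WHAM self-consistency iteration. The synthetic histograms and harmonic biases below are illustrative; the published estimator may weight windows differently.

```python
import numpy as np

def regression_wham(hists, bin_centers, bias_funcs, kT=1.0):
    """Sketch of a least-squares alternative to iterative WHAM.

    For window i and bin b with biased probability p_ib > 0,
        ln p_ib = ln p0_b - U_i(x_b)/kT + c_i.
    Unknowns: ln p0_b (one per bin) and c_i (one per window). Zero-count
    bins are skipped; bins never visited remain undetermined.
    """
    n_bin, n_win = len(bin_centers), len(hists)
    rows, rhs = [], []
    for i, h in enumerate(hists):
        p = h / h.sum()
        for b in np.nonzero(p)[0]:
            row = np.zeros(n_bin + n_win)
            row[b] = 1.0              # coefficient of ln p0_b
            row[n_bin + i] = 1.0      # coefficient of c_i
            rows.append(row)
            rhs.append(np.log(p[b]) + bias_funcs[i](bin_centers[b]) / kT)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    pmf = -kT * sol[:n_bin]
    return pmf - pmf.min()

# synthetic umbrella data on a double-well potential (illustrative only)
rng = np.random.default_rng(0)
bins = np.linspace(-1.5, 1.5, 61)
U0 = lambda x: 5.0 * (x**2 - 1.0) ** 2
bias_funcs = [lambda x, c=c: 20.0 * (x - c) ** 2
              for c in np.linspace(-1.2, 1.2, 9)]
hists = []
for bf in bias_funcs:
    p = np.exp(-(U0(bins) + bf(bins)))
    p /= p.sum()
    hists.append(np.bincount(rng.choice(len(bins), 20000, p=p),
                             minlength=len(bins)))
pmf = regression_wham(hists, bins, bias_funcs)
```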
Ultralow-power switching via defect engineering in germanium telluride phase-change memory devices.
Nukala, Pavan; Lin, Chia-Chun; Composto, Russell; Agarwal, Ritesh
2016-01-25
Crystal-amorphous transformation achieved via the melt-quench pathway in phase-change memory involves fundamentally inefficient energy conversion events; and this translates to large switching current densities, responsible for chemical segregation and device degradation. Alternatively, introducing defects in the crystalline phase can engineer carrier localization effects enhancing carrier-lattice coupling; and this can efficiently extract work required to introduce bond distortions necessary for amorphization from input electrical energy. Here, by pre-inducing extended defects and thus carrier localization effects in crystalline GeTe via high-energy ion irradiation, we show tremendous improvement in amorphization current densities (0.13-0.6 MA cm(-2)) compared with the melt-quench strategy (∼50 MA cm(-2)). We show scaling behaviour and good reversibility on these devices, and explore several intermediate resistance states that are accessible during both amorphization and recrystallization pathways. Existence of multiple resistance states, along with ultralow-power switching and scaling capabilities, makes this approach promising in context of low-power memory and neuromorphic computation.
Ultralow-power switching via defect engineering in germanium telluride phase-change memory devices
Nukala, Pavan; Lin, Chia-Chun; Composto, Russell; Agarwal, Ritesh
2016-01-01
Crystal–amorphous transformation achieved via the melt-quench pathway in phase-change memory involves fundamentally inefficient energy conversion events; and this translates to large switching current densities, responsible for chemical segregation and device degradation. Alternatively, introducing defects in the crystalline phase can engineer carrier localization effects enhancing carrier–lattice coupling; and this can efficiently extract work required to introduce bond distortions necessary for amorphization from input electrical energy. Here, by pre-inducing extended defects and thus carrier localization effects in crystalline GeTe via high-energy ion irradiation, we show tremendous improvement in amorphization current densities (0.13–0.6 MA cm−2) compared with the melt-quench strategy (∼50 MA cm−2). We show scaling behaviour and good reversibility on these devices, and explore several intermediate resistance states that are accessible during both amorphization and recrystallization pathways. Existence of multiple resistance states, along with ultralow-power switching and scaling capabilities, makes this approach promising in context of low-power memory and neuromorphic computation. PMID:26805748
Stratmann, Philipp; Lakatos, Dominic; Albu-Schäffer, Alin
2016-01-01
There are multiple indications that the nervous system of animals tunes muscle output to exploit natural dynamics of the elastic locomotor system and the environment. This is an advantageous strategy especially in fast periodic movements, since the elastic elements store energy and increase energy efficiency and movement speed. Experimental evidence suggests that coordination among joints involves proprioceptive input and neuromodulatory influence originating in the brain stem. However, the neural strategies underlying the coordination of fast periodic movements remain poorly understood. Based on robotics control theory, we suggest that the nervous system implements a mechanism to accomplish coordination between joints by a linear coordinate transformation from the multi-dimensional space representing proprioceptive input at the joint level into a one-dimensional controller space. In this one-dimensional subspace, the movements of a whole limb can be driven by a single oscillating unit as simple as a reflex interneuron. The output of the oscillating unit is transformed back to joint space via the same transformation. The transformation weights correspond to the dominant principal component of the movement. In this study, we propose a biologically plausible neural network to exemplify that the central nervous system (CNS) may encode our controller design. Using theoretical considerations and computer simulations, we demonstrate that spike-timing-dependent plasticity (STDP) for the input mapping and serotonergic neuromodulation for the output mapping can extract the dominant principal component of sensory signals. Our simulations show that our network can reliably control mechanical systems of different complexity and increase the energy efficiency of ongoing cyclic movements. The proposed network is simple and consistent with previous biologic experiments. Thus, our controller could serve as a candidate to describe the neural control of fast, energy-efficient, periodic movements involving multiple coupled joints.
Stratmann, Philipp; Lakatos, Dominic; Albu-Schäffer, Alin
2016-01-01
There are multiple indications that the nervous system of animals tunes muscle output to exploit natural dynamics of the elastic locomotor system and the environment. This is an advantageous strategy especially in fast periodic movements, since the elastic elements store energy and increase energy efficiency and movement speed. Experimental evidence suggests that coordination among joints involves proprioceptive input and neuromodulatory influence originating in the brain stem. However, the neural strategies underlying the coordination of fast periodic movements remain poorly understood. Based on robotics control theory, we suggest that the nervous system implements a mechanism to accomplish coordination between joints by a linear coordinate transformation from the multi-dimensional space representing proprioceptive input at the joint level into a one-dimensional controller space. In this one-dimensional subspace, the movements of a whole limb can be driven by a single oscillating unit as simple as a reflex interneuron. The output of the oscillating unit is transformed back to joint space via the same transformation. The transformation weights correspond to the dominant principal component of the movement. In this study, we propose a biologically plausible neural network to exemplify that the central nervous system (CNS) may encode our controller design. Using theoretical considerations and computer simulations, we demonstrate that spike-timing-dependent plasticity (STDP) for the input mapping and serotonergic neuromodulation for the output mapping can extract the dominant principal component of sensory signals. Our simulations show that our network can reliably control mechanical systems of different complexity and increase the energy efficiency of ongoing cyclic movements. The proposed network is simple and consistent with previous biologic experiments. Thus, our controller could serve as a candidate to describe the neural control of fast, energy-efficient, periodic movements involving multiple coupled joints. PMID:27014051
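The core computation suggested above, extracting the dominant principal component of multi-joint sensory signals and running the controller in the resulting one-dimensional subspace, can be sketched as follows. The toy joint trajectories and the simple proportional law standing in for the single oscillating unit are assumptions, not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy proprioceptive recordings: 3 joint angles oscillating coherently
t = np.linspace(0, 20, 2000)
q = np.stack([np.sin(2 * t),
              0.6 * np.sin(2 * t + 0.3),
              0.3 * np.sin(2 * t + 0.6)], axis=1)
q += 0.05 * rng.normal(size=q.shape)

# dominant principal component = the coordination pattern (weights w)
qc = q - q.mean(0)
_, _, vt = np.linalg.svd(qc, full_matrices=False)
w = vt[0]                          # unit vector: joint space -> 1-D map

# 1-D controller space: project sensory input, apply a scalar control
# law (stand-in for the oscillating unit), map back with the same w
z = qc @ w                         # scalar controller coordinate per sample
u_scalar = -1.5 * z                # placeholder single-unit control law
u_joint = np.outer(u_scalar, w)    # back-projection: per-joint commands

print("coordination weights:", np.round(w, 3))
```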
Measurement of X-ray intensity in mammography by a ferroelectric dosimeter
NASA Astrophysics Data System (ADS)
Alter, Albert J.
2005-07-01
Each year in the US over 20 million women undergo mammography, a relatively high dose x-ray examination of the breast, an organ relatively sensitive to the carcinogenic effect of ionizing radiation. The radiation risk from mammography is usually expressed in terms of mean glandular dose (MGD), which is calculated as the product of the measured entrance skin exposure (ESE) and a dose conversion factor that is a function of anode material, peak tube voltage (23 to 35 kVp), half-value layer, filtration, compressed breast thickness, and breast composition. Mammographic units may have anodes made of molybdenum, rhodium, or tungsten and filters of molybdenum, rhodium, or aluminum. In order to accommodate all these parameters, multiple extensive tables of conversion factors are required to cover the range of possibilities. Energy fluence and energy imparted are alternative measures of radiation hazard, which have been used in situations where geometry or filtration is unconventional, such as computed tomography or fluoroscopy. Unfortunately, at present there is no way to directly measure these quantities clinically. In radiation therapy applications, calorimetry has been used to measure energy absorbed. A ferroelectric-based detector has been described that measures energy fluence rate (x-ray intensity) for diagnostic x-ray, 50 to 140 kVp, aluminum-filtered tungsten spectra [Carvalho & Alter: IEEE Transactions 44(6) 1997]. This work explores the use of ferroelectric detectors to measure energy fluence, energy fluence rate, and energy imparted in mammography. A detector interfaced with a laptop computer was developed to allow measurements on clinical units of five different manufacturers having targets of molybdenum, rhodium, and tungsten and filters of molybdenum, rhodium, and aluminum of various thicknesses. The measurements provide the first values of energy fluence and energy imparted in mammography. These measurements are compared with conventional parameters such as entrance exposure and mean glandular dose, as well as published values of energy imparted for other types of x-ray examinations. Advantages of measuring dose in terms of energy imparted in mammography are the simplicity of comparison with other sources of radiation exposure and the potential (relative ease) of measurement across a variety of anode and filter combinations.
Srinivasan, E; Rajasekaran, R
2017-07-25
The genetic substitution mutation of Cys146Arg in the SOD1 protein is predominantly found in the Japanese population suffering from familial amyotrophic lateral sclerosis (FALS). A complete study of the biophysical aspects of this particular missense mutation through conformational analysis and producing free energy landscapes could provide an insight into the pathogenic mechanism of ALS disease. In this study, we utilized general molecular dynamics simulations along with computational predictions to assess the structural characterization of the protein as well as the conformational preferences of monomeric wild type and mutant SOD1. Our static analysis, accomplished through multiple programs, predicted the deleterious and destabilizing effect of mutant SOD1. Subsequently, comparative molecular dynamic studies performed on the wild type and mutant SOD1 indicated a loss in the protein conformational stability and flexibility. We observed the mutational consequences not only in local but also in long-range variations in the structural properties of the SOD1 protein. Long-range intramolecular protein interactions decrease upon mutation, resulting in less compact structures in the mutant protein rather than in the wild type, suggesting that the mutant structures are less stable than the wild type SOD1. We also presented the free energy landscape to study the collective motion of protein conformations through principal component analysis for the wild type and mutant SOD1. Overall, the study assisted in revealing the cause of the structural destabilization and protein misfolding via structural characterization, secondary structure composition and free energy landscapes. Hence, the computational framework in our study provides a valuable direction for the search for the cure against fatal FALS.
10 CFR 727.1 - What is the purpose and scope of this part?
Code of Federal Regulations, 2012 CFR
2012-01-01
... 727.1 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS... requirements applicable to each individual granted access to a DOE computer or to information on a DOE computer... computer used in the performance of the individual's duties during the term of that individual's employment...
10 CFR 727.1 - What is the purpose and scope of this part?
Code of Federal Regulations, 2014 CFR
2014-01-01
... 727.1 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS... requirements applicable to each individual granted access to a DOE computer or to information on a DOE computer... computer used in the performance of the individual's duties during the term of that individual's employment...
10 CFR 727.1 - What is the purpose and scope of this part?
Code of Federal Regulations, 2013 CFR
2013-01-01
... 727.1 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS... requirements applicable to each individual granted access to a DOE computer or to information on a DOE computer... computer used in the performance of the individual's duties during the term of that individual's employment...
10 CFR 727.1 - What is the purpose and scope of this part?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 727.1 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS... requirements applicable to each individual granted access to a DOE computer or to information on a DOE computer... computer used in the performance of the individual's duties during the term of that individual's employment...
10 CFR 727.1 - What is the purpose and scope of this part?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 727.1 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS... requirements applicable to each individual granted access to a DOE computer or to information on a DOE computer... computer used in the performance of the individual's duties during the term of that individual's employment...
Advancing Drug Discovery through Enhanced Free Energy Calculations.
Abel, Robert; Wang, Lingle; Harder, Edward D; Berne, B J; Friesner, Richard A
2017-07-18
A principal goal of a drug discovery project is to design molecules that can tightly and selectively bind to the target protein receptor. Accurate prediction of protein-ligand binding free energies is therefore of central importance in computational chemistry and computer-aided drug design. Multiple recent improvements in computing power, classical force field accuracy, enhanced sampling methods, and simulation setup have enabled accurate and reliable calculations of protein-ligand binding free energies, and have positioned free energy calculations to play a guiding role in small molecule drug discovery. In this Account, we outline the relevant methodological advances, including REST2 (Replica Exchange with Solute Tempering) enhanced sampling, the incorporation of REST2 sampling with conventional FEP (Free Energy Perturbation) through FEP/REST, the OPLS3 force field, and the advanced simulation setup that constitute our FEP+ approach, followed by the presentation of extensive comparisons with experiment, demonstrating sufficient accuracy in potency prediction (better than 1 kcal/mol) to substantially impact lead optimization campaigns. The limitations of the current FEP+ implementation and best practices in drug discovery applications are also discussed, followed by the future methodology development plans to address those limitations. We then report results from a recent drug discovery project, in which several thousand FEP+ calculations were successfully deployed to simultaneously optimize potency, selectivity, and solubility, illustrating the power of the approach to solve challenging drug design problems. The capabilities of free energy calculations to accurately predict potency and selectivity have advanced ongoing drug discovery projects in challenging situations where alternative approaches would have great difficulties. The ability to effectively carry out projects evaluating tens of thousands, or hundreds of thousands, of proposed drug candidates is potentially transformative in enabling hard-to-drug targets to be attacked, and in facilitating the development of superior compounds, in various dimensions, for a wide range of targets. More effective integration of FEP+ calculations into the drug discovery process will ensure that the results are deployed in an optimal fashion for yielding the best possible compounds entering the clinic; this is where the greatest payoff is in the exploitation of computer-driven design capabilities. A key conclusion from the work described is the surprisingly robust and accurate results that are attainable within the conventional classical simulation, fixed-charge paradigm. No doubt there are individual cases that would benefit from a more sophisticated energy model or dynamical treatment, and properties other than protein-ligand binding energies may be more sensitive to these approximations. We conclude that an inflection point in the ability of MD simulations to impact drug discovery has now been attained, due to the confluence of hardware and software development along with the formulation of "good enough" theoretical methods and models.
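Underlying all FEP-style estimates is exponential averaging over sampled energy differences; a minimal sketch of the classic Zwanzig estimator (not Schrödinger's FEP+ machinery) with a log-sum-exp guard follows. The toy Gaussian samples are an assumption for demonstration.

```python
import numpy as np

def fep_zwanzig(dU, kT=0.593):
    """Zwanzig free energy perturbation estimate (kT in kcal/mol, ~298 K):
        dF = -kT * ln < exp(-dU/kT) >_0,
    where dU = U_1 - U_0 is evaluated on samples from state 0.
    A log-sum-exp shift keeps the exponential average stable."""
    x = -np.asarray(dU) / kT
    xmax = x.max()
    return -kT * (xmax + np.log(np.mean(np.exp(x - xmax))))

rng = np.random.default_rng(3)
dU = rng.normal(1.0, 0.5, 10000)   # toy well-overlapping Gaussian samples
print(f"dF = {fep_zwanzig(dU):.3f} kcal/mol")
# for Gaussian dU the exact answer is mean - var/(2 kT), a handy check
```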
ERIC Educational Resources Information Center
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen
2018-01-01
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Energy-aware scheduling of surveillance in wireless multimedia sensor networks.
Wang, Xue; Wang, Sheng; Ma, Junjie; Sun, Xinyao
2010-01-01
Wireless sensor networks involve a large number of sensor nodes with limited energy supply, which constrains the behavior of their applications. In wireless multimedia sensor networks, sensor nodes are equipped with audio and visual information collection modules, and multimedia content is ubiquitously retrieved in surveillance applications. To address the energy problems during target surveillance with wireless multimedia sensor networks, an energy-aware sensor scheduling method is proposed in this paper. Sensor nodes that acquire acoustic signals are deployed randomly in the sensing field. Target localization is based on the signal energy feature provided by multiple sensor nodes, employing particle swarm optimization (PSO). During the target surveillance procedure, sensor nodes are adaptively grouped in a fully distributed manner. Specifically, the target motion information is extracted by a forecasting algorithm based on the hidden Markov model (HMM), and the forecasting results are utilized to awaken sensor nodes in the vicinity of the future target position. Based on two properties, the signal energy feature and the residual energy, each sensor node independently decides whether to participate in target detection using a fuzzy control approach. Meanwhile, a local routing scheme for data transmission towards the observer is discussed. Experimental results demonstrate the efficiency of energy-aware scheduling of surveillance in wireless multimedia sensor networks, where significant energy saving is achieved by the sensor awakening approach and data transmission paths are calculated with low computational complexity.
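A minimal sketch of the localization step: with an acoustic energy-decay model E_i ≈ S/d_i², a global-best PSO searches the field for the target position that best explains the energies measured at the nodes. The decay law, noise level, and PSO constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
nodes = rng.uniform(0, 100, (12, 2))             # sensor positions (m)
target = np.array([40.0, 65.0])
src = 5e4                                         # source signal energy
d2 = ((nodes - target) ** 2).sum(1)
energy = src / d2 * (1 + 0.05 * rng.normal(size=len(nodes)))  # measurements

def residual(p):
    """Misfit between measured and modeled energies E_i ~ S/d_i^2,
    with the source strength S fit analytically per candidate p."""
    m = 1.0 / (((nodes - p) ** 2).sum(1) + 1e-9)
    s = (energy * m).sum() / (m * m).sum()
    return ((energy - s * m) ** 2).sum()

# bare-bones global-best PSO over the 2-D sensing field
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 100, (n, 2))
vel = rng.normal(0, 1, (n, 2))
pbest, pval = pos.copy(), np.array([residual(p) for p in pos])
gbest = pbest[pval.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    val = np.array([residual(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()].copy()
print("estimated target:", np.round(gbest, 1), "true:", target)
```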
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
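The scale of the burden follows from simple arithmetic on the number of sequential power flows; a back-of-envelope with assumed per-solve times (the millisecond figures are illustrative, not from the report):

```python
# back-of-envelope for the QSTS burden: one power flow per second of
# simulated time for a full year
steps_per_year = 365 * 24 * 3600          # 31,536,000 solves
for ms_per_solve in (1, 5, 15):           # hypothetical per-solve cost
    hours = steps_per_year * ms_per_solve / 1000 / 3600
    print(f"{ms_per_solve} ms/solve -> {hours:6.1f} h per simulated year")
# ~1 ms/solve gives ~8.8 h and ~15 ms/solve gives ~131 h, bracketing the
# 10 to 120 hour range cited for actual unbalanced feeders
```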
Statistical linearization for multi-input/multi-output nonlinearities
NASA Technical Reports Server (NTRS)
Lin, Ching-An; Cheng, Victor H. L.
1991-01-01
Formulas are derived for the computation of the random-input describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean-square linear approximation. The computations involve the evaluation of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
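For a single-input static nonlinearity driven by zero-mean Gaussian input, the random-input describing function reduces to one Gaussian integral, N = E[x f(x)]/E[x²], the mean-square-optimal linear gain; a quadrature sketch follows (the MIMO case requires the multiple-integral generalization the paper addresses). The example nonlinearities are assumptions for demonstration.

```python
import numpy as np

def random_input_gain(f, sigma, n=64):
    """Random-input describing function (equivalent gain) of a static,
    odd nonlinearity with zero-mean Gaussian input:
        N = E[x f(x)] / E[x^2].
    The expectation is one Gaussian integral, evaluated here with
    Gauss-Hermite quadrature (weight exp(-x^2), change of variables
    x -> sqrt(2) * sigma * x)."""
    x, w = np.polynomial.hermite.hermgauss(n)
    xs = np.sqrt(2.0) * sigma * x
    Exf = (w * xs * f(xs)).sum() / np.sqrt(np.pi)
    return Exf / sigma**2

sig = 0.8
print(random_input_gain(np.tanh, sig))                      # soft saturation
print(random_input_gain(lambda x: x**3, sig), 3 * sig**2)   # exact: 3 sigma^2
```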
Reaction Rate Theory in Coordination Number Space: An Application to Ion Solvation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Santanu; Baer, Marcel D.; Mundy, Christopher J.
2016-04-14
Understanding reaction mechanisms in many chemical and biological processes requires the application of rare event theories. In these theories, an effective choice of a reaction coordinate to describe a reaction pathway is essential. To this end, we study ion solvation in water using molecular dynamics simulations and explore the utility of the coordination number (n = number of water molecules in the first solvation shell) as the reaction coordinate. Here we compute the potential of mean force W(n) using umbrella sampling, predicting multiple metastable n-states for both cations and anions. We find that, with increasing ionic size, these states become more stable and more structured for cations than for anions. We have extended transition state theory (TST) to calculate transition rates between n-states. TST overestimates the rate constant due to solvent-induced barrier recrossings that are not accounted for. We correct the TST rates by calculating transmission coefficients using the reactive flux method. This approach enables a new way of understanding rare events involving coordination complexes. We gratefully acknowledge Liem Dang and Panos Stinis for useful discussion. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. SR, CJM, and GKS were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. MDB was supported by the MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.
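A sketch of a one-dimensional TST rate read off a potential of mean force W(n), with a transmission coefficient κ applied as the reactive-flux correction; the toy W(n), the mean coordinate velocity, and κ are assumed placeholders for quantities the simulations would supply.

```python
import numpy as np

kT = 0.596  # kcal/mol at ~300 K

# toy potential of mean force W(n) along the coordination number, with
# metastable states near n ~ 5 and n ~ 6 (shape is illustrative only)
n = np.linspace(4.0, 7.0, 601)
W = 1.5 * np.sin(np.pi * (n - 4.0)) ** 2 + 0.3 * (n - 5.0)

# barrier top between the n ~ 5 and n ~ 6 states
sel = (n >= 5.0) & (n <= 6.0)
i_dag = np.argmax(W[sel]) + np.flatnonzero(sel)[0]

# 1-D TST: k_TST = <|ndot|>/2 * exp(-W(n_dag)/kT) / Z_reactant,
# with Z_reactant integrated over the n ~ 5 well
dn = n[1] - n[0]
reactant = (n >= 4.5) & (n < n[i_dag])
Z_r = np.sum(np.exp(-W[reactant] / kT)) * dn
ndot_mean = 1.0   # <|ndot|> would come from MD velocities along n (assumed)
k_tst = 0.5 * ndot_mean * np.exp(-W[i_dag] / kT) / Z_r

kappa = 0.4       # transmission coefficient from reactive flux MD (assumed)
print(f"barrier at n = {n[i_dag]:.2f}, k = kappa*k_TST = {kappa * k_tst:.3e}")
```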
Improved atmospheric 3D BSDF model in earthlike exoplanet using ray-tracing based method
NASA Astrophysics Data System (ADS)
Ryu, Dongok; Kim, Sug-Whan; Seong, Sehyun
2012-10-01
Studies of planetary radiative transfer computation have become important elements of the disk-averaged spectral characterization of potential exoplanets. In this paper, we report an improved ray-tracing based atmospheric simulation model as part of a 3-D Earth-like planet model with three principal sub-components, i.e., land, sea, and atmosphere. Any changes in ray paths and their characteristics, such as radiative power and direction, are computed as the rays experience reflection, refraction, transmission, absorption, and scattering. The improved atmospheric BSDF algorithm uses Q. Liu's combined Rayleigh and aerosol Henyey-Greenstein scattering phase function. The input cloud-free atmosphere model consists of 48 layers with vertical absorption profiles and a scattering layer whose input characteristics are taken from the GIOVANNI database. Total Solar Irradiance data are obtained from the Solar Radiation and Climate Experiment (SORCE) mission. Using the aerosol scattering computation, we first tested the atmospheric scattering effects with an imaging simulation of HRIV, EPOXI. We then examined the computational validity of the atmospheric model against measurements of global, direct, and diffuse radiation taken from NREL (National Renewable Energy Laboratory) pyranometers and pyrheliometers at a ground station, for cases of a single incident angle and of simultaneous multiple incident angles of the solar beam.
Neuromorphic computing with nanoscale spintronic oscillators
Torrejon, Jacob; Riou, Mathieu; Araujo, Flavio Abreu; Tsunegi, Sumito; Khalsa, Guru; Querlioz, Damien; Bortolotti, Paolo; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Stiles, M. D.; Grollier, Julie
2017-01-01
Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information1. Taking inspiration from this behavior to realize high density, low power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators. Indeed, a simple estimation indicates that, in order to fit a hundred million oscillators organized in a two-dimensional array inside a chip the size of a thumb, their lateral dimensions must be smaller than one micrometer. However, despite multiple theoretical proposals2–5, and several candidates such as memristive6 or superconducting7 oscillators, there is no proof of concept today of neuromorphic computing with nano-oscillators. Indeed, nanoscale devices tend to be noisy and to lack the stability required to process data in a reliable way. Here, we show experimentally that a nanoscale spintronic oscillator8,9 can achieve spoken digit recognition with accuracies similar to state of the art neural networks. We pinpoint the regime of magnetization dynamics leading to highest performance. These results, combined with the exceptional ability of these spintronic oscillators to interact together, their long lifetime, and low energy consumption, open the path to fast, parallel, on-chip computation based on networks of oscillators. PMID:28748930
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Fuchs, Marcus; Nouidui, Thierry
This paper discusses design decisions for exporting Modelica thermofluid flow components as Functional Mockup Units. The purpose is to provide guidelines that will allow building energy simulation programs and HVAC equipment manufacturers to effectively use FMUs for modeling of HVAC components and systems. We provide an analysis of direct input-output dependencies of such components and discuss how these dependencies can lead to algebraic loops that are formed when connecting thermofluid flow components. Based on this analysis, we provide recommendations that increase the computing efficiency of such components and of systems that are formed by connecting multiple components. We explain what code optimizations are lost when providing thermofluid flow components as FMUs rather than Modelica code. We present an implementation of a package for FMU export of such components, explain the rationale for selecting the connector variables of the FMUs, and finally provide computing benchmarks for different design choices. It turns out that selecting temperature rather than specific enthalpy as input and output signals does not lead to a measurable increase in computing time, but selecting nine small FMUs rather than one large FMU increases computing time by 70%.
Magneto Caloric Effect in Ni-Mn-Ga alloys: First Principles and Experimental studies
NASA Astrophysics Data System (ADS)
Odbadrakh, Khorgolkhuu; Nicholson, Don; Brown, Gregory; Rusanu, Aurelian; Rios, Orlando; Hodges, Jason; Safa-Sefat, Athena; Ludtka, Gerard; Eisenbach, Markus; Evans, Boyd
2012-02-01
Understanding the Magneto-Caloric Effect (MCE) in alloys with real technological potential is important to the development of viable MCE-based products. We report results of a computational and experimental investigation of candidate MCE materials, Ni-Mn-Ga alloys. The Wang-Landau statistical method is used in tandem with the Locally Self-consistent Multiple Scattering (LSMS) method to explore the magnetic states of the system. A classical Heisenberg Hamiltonian is parametrized based on these states and used to obtain the density of magnetic states. The Curie temperature, isothermal entropy change, and adiabatic temperature change are then calculated from the density of states. Experiments to observe the structural and magnetic phase transformations were performed at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) on alloys of Ni-Mn-Ga and Fe-Ni-Mn-Ga-Cu. Data from the observations are discussed in comparison with the computational studies. This work was sponsored by the Laboratory Directed Research and Development Program (ORNL), by the Mathematical, Information, and Computational Sciences Division, Office of Advanced Scientific Computing Research (US DOE), and by the Materials Sciences and Engineering Division, Office of Basic Energy Sciences (US DOE).
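Once a density of states g(E) is in hand from Wang-Landau sampling, the canonical thermodynamics quoted above follow from straightforward reweighting; a sketch with an illustrative ln g(E) (units k_B = 1) is below, noting that the isothermal MCE entropy change would be the difference of S(T) computed from one DOS per applied field.

```python
import numpy as np

def thermodynamics(E, ln_g, T):
    """Canonical averages from a Wang-Landau density of states ln g(E):
    internal energy, specific heat, and entropy, with a log-sum-exp
    shift for numerical stability (k_B = 1)."""
    beta = 1.0 / T
    a = ln_g - beta * E
    amax = a.max()
    Z = np.exp(a - amax).sum()
    p = np.exp(a - amax) / Z
    U = (p * E).sum()
    C = beta**2 * ((p * E**2).sum() - U**2)
    F = -T * (amax + np.log(Z))
    S = (U - F) / T
    return U, C, S

# toy density of states; a WL-LSMS run would supply ln g(E) for the alloy
E = np.linspace(-1.0, 1.0, 400)
ln_g = 200 * (1 - E**2)                  # illustrative only
for T in (0.5, 1.0, 2.0):
    U, C, S = thermodynamics(E, ln_g, T)
    print(f"T={T}: U={U:.3f} C={C:.3f} S={S:.3f}")
# isothermal entropy change between fields H1 and H2:
# dS(T) = S(T; ln_g_H2) - S(T; ln_g_H1), one DOS per field
```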
NASA Astrophysics Data System (ADS)
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow, or for particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need for filtering and/or windowing. Particle displacement between two frames is computed for multiple time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second-order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
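To make the noise-cancellation idea concrete, here is a minimal Python sketch (ours, not the authors' code; the function name and synthetic data are illustrative). For uncorrelated position noise, the apparent velocity variance grows as 1/Δt², so fitting the measured variance against 1/Δt² across several inter-frame time steps and extrapolating to 1/Δt² → 0 yields a noise-free estimate.

```python
import numpy as np

def noise_free_variance(displacements, dts):
    """Estimate the noise-free velocity variance from particle
    displacements measured over several inter-frame time steps.

    For uncorrelated position noise sigma, the apparent velocity
    variance at time step dt is
        var_meas(dt) = var_true + 2*sigma**2 / dt**2,
    so a linear fit of var_meas against 1/dt**2, extrapolated to
    1/dt**2 -> 0, cancels the noise term.
    """
    var_meas = np.array([np.var(dx / dt) for dx, dt in zip(displacements, dts)])
    x = 1.0 / np.asarray(dts) ** 2
    slope, intercept = np.polyfit(x, var_meas, 1)
    return intercept  # noise-free variance; slope/2 estimates sigma**2

# Synthetic check: true rms velocity 1.0, position noise 0.05 per frame
rng = np.random.default_rng(0)
dts = [1.0, 2.0, 4.0]
disp = [rng.normal(0, 1.0, 10000) * dt + rng.normal(0, 0.05, 10000) * np.sqrt(2)
        for dt in dts]
print(noise_free_variance(disp, dts))  # ~1.0
```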
NASA Astrophysics Data System (ADS)
Karman, Tijs; van der Avoird, Ad; Groenenboom, Gerrit C.
2017-08-01
We compute four-dimensional diabatic potential energy surfaces and transition dipole moment surfaces of O2-O2, relevant for the theoretical description of collision-induced absorption in the forbidden X3Σg- → a1Δg and X3Σg- → b1Σg+ bands at 7883 cm-1 and 13 122 cm-1, respectively. We compute potentials at the multi-reference configuration interaction (MRCI) level and dipole surfaces at the MRCI and complete active space self-consistent field (CASSCF) levels of theory. Potentials and dipole surfaces are transformed to a diabatic basis using a recent multiple-property-based diabatization algorithm. We discuss the angular expansion of these surfaces, derive the symmetry constraints on the expansion coefficients, and present working equations for determining the expansion coefficients by numerical integration over the angles. We also present an interpolation scheme with exponential extrapolation to both short and large separations, which is used for representing the O2-O2 distance dependence of the angular expansion coefficients. For the triplet ground state of the complex, the potential energy surface is in reasonable agreement with previous calculations, whereas global excited state potentials are reported here for the first time. The transition dipole moment surfaces are strongly dependent on the level of theory at which they are calculated, as is also shown here by benchmark calculations at high symmetry geometries. Therefore, ab initio calculations of the collision-induced absorption spectra cannot become quantitatively predictive unless more accurate transition dipole surfaces can be computed. This is left as an open question for method development in electronic structure theory. The calculated potential energy and transition dipole moment surfaces are employed in quantum dynamical calculations of collision-induced absorption spectra reported in Paper II [T. Karman et al., J. Chem. Phys. 147, 084307 (2017)].
Karman, Tijs; van der Avoird, Ad; Groenenboom, Gerrit C
2017-08-28
We compute four-dimensional diabatic potential energy surfaces and transition dipole moment surfaces of O2-O2, relevant for the theoretical description of collision-induced absorption in the forbidden X3Σg- → a1Δg and X3Σg- → b1Σg+ bands at 7883 cm-1 and 13 122 cm-1, respectively. We compute potentials at the multi-reference configuration interaction (MRCI) level and dipole surfaces at the MRCI and complete active space self-consistent field (CASSCF) levels of theory. Potentials and dipole surfaces are transformed to a diabatic basis using a recent multiple-property-based diabatization algorithm. We discuss the angular expansion of these surfaces, derive the symmetry constraints on the expansion coefficients, and present working equations for determining the expansion coefficients by numerical integration over the angles. We also present an interpolation scheme with exponential extrapolation to both short and large separations, which is used for representing the O2-O2 distance dependence of the angular expansion coefficients. For the triplet ground state of the complex, the potential energy surface is in reasonable agreement with previous calculations, whereas global excited state potentials are reported here for the first time. The transition dipole moment surfaces are strongly dependent on the level of theory at which they are calculated, as is also shown here by benchmark calculations at high symmetry geometries. Therefore, ab initio calculations of the collision-induced absorption spectra cannot become quantitatively predictive unless more accurate transition dipole surfaces can be computed. This is left as an open question for method development in electronic structure theory. The calculated potential energy and transition dipole moment surfaces are employed in quantum dynamical calculations of collision-induced absorption spectra reported in Paper II [T. Karman et al., J. Chem. Phys. 147, 084307 (2017)].
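As a simplified one-dimensional analogue of the angular-expansion working equations described above (our illustration, not the paper's four-dimensional machinery), the sketch below projects a potential onto Legendre polynomials by Gauss-Legendre quadrature; the function name is hypothetical.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def legendre_coefficient(V, L, npts=32):
    """Project a potential V(cos_theta) onto the Legendre polynomial P_L:

        c_L = (2L+1)/2 * integral_{-1}^{1} V(x) P_L(x) dx,

    evaluated by Gauss-Legendre quadrature, which is exact for
    polynomial integrands of sufficiently low degree.
    """
    x, w = leggauss(npts)  # quadrature nodes and weights on [-1, 1]
    return 0.5 * (2 * L + 1) * np.sum(w * V(x) * eval_legendre(L, x))

# Example: V(x) = 1 + 0.3*P_2(x) should recover c_0 = 1.0 and c_2 = 0.3
V = lambda x: 1.0 + 0.3 * eval_legendre(2, x)
print(legendre_coefficient(V, 0), legendre_coefficient(V, 2))
```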
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
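A loose modern analogue of the multiple-grid multitasking strategy (our sketch; the original used Cray multitasking through the C-FORTRAN-Unix interface, not Python) relaxes independent grid blocks on separate worker processes:

```python
from multiprocessing import Pool
import numpy as np

def relax_block(block):
    """One Jacobi-style relaxation sweep on the interior of a grid block."""
    b = block.copy()
    b[1:-1, 1:-1] = 0.25 * (block[:-2, 1:-1] + block[2:, 1:-1] +
                            block[1:-1, :-2] + block[1:-1, 2:])
    return b

if __name__ == "__main__":
    # Nine blocks, one per worker, mirroring the nine-processor MPMG layout.
    blocks = [np.random.rand(64, 64) for _ in range(9)]
    with Pool(processes=9) as pool:
        blocks = pool.map(relax_block, blocks)  # blocks advance concurrently
    # A full solver would now exchange/interpolate the overlap regions
    # between neighbouring blocks before the next sweep.
```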
Energy challenges in optical access and aggregation networks.
Kilper, Daniel C; Rastegarfar, Houman
2016-03-06
Scalability is a critical issue for access and aggregation networks as they must support the growth in both the size of data capacity demands and the multiplicity of access points. The number of connected devices, the Internet of Things, is growing to the tens of billions. Prevailing communication paradigms are reaching physical limitations that make continued growth problematic. Challenges are emerging in electronic and optical systems, and energy increasingly plays a central role. With the spectral efficiency of optical systems approaching the Shannon limit, increasing parallelism is required to support higher capacities. For electronic systems, as the density and speed increase, the total system energy, thermal density, and energy per bit are moving into regimes that become impractical to support, for example requiring single-chip processor powers above the 100 W limit common today. We examine communication network scaling and energy use from the Internet core down to the computer processor core and consider implications for optical networks. Optical switching in data centres is identified as a potential model from which scalable access and aggregation networks for the future Internet, with the application of integrated photonic devices and intelligent hybrid networking, will emerge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based FutureGen power and hydrogen production plant.
Long, Hai; Chang, Christopher H.; King, Paul W.; Ghirardi, Maria L.; Kim, Kwiseon
2008-01-01
The [FeFe] hydrogenase from the green alga Chlamydomonas reinhardtii can catalyze the reduction of protons to hydrogen gas using electrons supplied from photosystem I and transferred via ferredoxin. To better understand the association of the hydrogenase and the ferredoxin, we have simulated the process over multiple timescales. A Brownian dynamics simulation method gave an initial thorough sampling of the rigid-body translational and rotational phase spaces, and the resulting trajectories were used to compute the occupancy and free-energy landscapes. Several important hydrogenase-ferredoxin encounter complexes were identified from this analysis, which were then individually simulated using atomistic molecular dynamics to provide more details of the hydrogenase and ferredoxin interaction. The ferredoxin appeared to form reasonable complexes with the hydrogenase in multiple orientations, some of which were good candidates for inclusion in a transition state ensemble of configurations for electron transfer. PMID:18621810
Meirer, Florian; Morris, Darius T.; Kalirai, Sam; ...
2015-01-02
Full-field transmission X-ray microscopy has been used to determine the 3D structure of a whole individual fluid catalytic cracking (FCC) particle at high spatial resolution and in a fast, noninvasive manner, maintaining the full integrity of the particle. Using X-ray absorption mosaic imaging to combine multiple fields of view, computed tomography was performed to visualize the macropore structure of the catalyst and its availability for mass transport. We mapped the relative spatial distributions of Ni and Fe using multiple-energy tomography at the respective X-ray absorption K-edges and correlated these distributions with porosity and permeability of an equilibrated catalyst (E-cat) particle. Both metals were found to accumulate in outer layers of the particle, effectively decreasing porosity by clogging of pores and eventually restricting access into the FCC particle.
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo
2018-06-01
We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integration kernels, whose precise form is determined by the branch points of the integral in question; all iterated integrals on an elliptic curve can then be expressed in terms of these kernels. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.
Li, Xiang; Eustis, Soren N; Bowen, Kit H; Kandalam, Anil
2008-09-28
The gas-phase, iron and cobalt cyclooctatetraene cluster anions, [Fe(1,2)(COT)](-) and [Co(COT)](-), were generated using a laser vaporization source and studied using mass spectrometry and anion photoelectron spectroscopy. Density functional theory was employed to compute the structures and spin multiplicities of these cluster anions as well as those of their corresponding neutrals. Both experimental and theoretically predicted electron affinities and photodetachment transition energies are in good agreement, authenticating the structures and spin multiplicities predicted by theory. The implied spin magnetic moments of these systems suggest that [Fe(COT)], [Fe(2)(COT)], and [Co(COT)] retain the magnetic moments of the Fe atom, the Fe(2) dimer, and the Co atom, respectively. Thus, the interaction of these transition metal, atomic and dimeric moieties with a COT molecule does not quench their magnetic moments, leading to the possibility that these combinations may be useful in forming novel magnetic materials.
Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren
2018-04-16
Silicon photonics-based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Latencies measured for each step of the switching procedure demonstrate a total control plane latency of 344 µs for data-center and high performance computing platforms.
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Burt, Jonathan M.
2016-01-01
There are many flow fields that span a wide range of length scales, where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate effects of physical diffusion into collision limiter calculations to make the low Knudsen number regime, normally limited to CFD, more tractable for an all-particle technique. The original work was derived for a single-species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients, but a more rigorous treatment of determining the mixture viscosity is applied. In the original work, consideration was given to internal energy non-equilibrium, and this is also extended in the current work to chemical non-equilibrium.
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
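A minimal sketch of the underlying idea, assuming a simple Jacobi iteration (the paper's actual solvers and hardware differ): all right-hand-side vectors are updated together through one matrix-matrix product, and columns whose residual has converged are frozen, mirroring the control method mentioned above.

```python
import numpy as np

def jacobi_multi_rhs(A, B, tol=1e-8, max_iter=1000):
    """Jacobi iteration for AX = B with many right-hand sides at once.

    All unconverged columns are updated in a single matrix-matrix
    product (good cache/SIMD behaviour); columns whose residual has
    dropped below tol are excluded from further work.
    """
    D = np.diag(A)                       # diagonal entries
    R = A - np.diagflat(D)               # off-diagonal part
    X = np.zeros_like(B)
    active = np.ones(B.shape[1], dtype=bool)
    for _ in range(max_iter):
        if not active.any():
            break
        X[:, active] = (B[:, active] - R @ X[:, active]) / D[:, None]
        res = np.linalg.norm(B[:, active] - A @ X[:, active], axis=0)
        still = res >= tol
        active[np.where(active)[0][~still]] = False  # freeze converged columns
    return X

# Diagonally dominant test system with 4 right-hand sides
A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
B = np.random.rand(3, 4)
X = jacobi_multi_rhs(A, B)
print(np.allclose(A @ X, B, atol=1e-6))
```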
Analysis and Application of Microgrids
NASA Astrophysics Data System (ADS)
Yue, Lu
New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources it utilizes are termed Distributed Energy Sources (DERs). With DGs embedded in them, distribution networks evolve from passive networks to active networks enabling bidirectional power flows. By further incorporating flexible, intelligent controllers and employing future technologies, active distribution networks become Microgrids. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To implement a Microgrid, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid integrates multiple DERs and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, first, problems such as power system modelling, power flow solution, and power system optimization are studied. Then, Distributed Generation and Microgrids are studied and reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method that minimizes the total transmission loss and generation cost of a Microgrid is proposed, together with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.
2015-01-01
We present a new computational approach for constant pH simulations in explicit solvent based on the combination of the enveloping distribution sampling (EDS) and Hamiltonian replica exchange (HREX) methods. Unlike constant pH methods based on variable and continuous charge models, our method is based on discrete protonation states. EDS generates a hybrid Hamiltonian of different protonation states. A smoothness parameter s is used to control the heights of energy barriers of the hybrid-state energy landscape. A small s value facilitates state transitions by lowering energy barriers. Replica exchange between EDS potentials with different s values allows us to readily obtain a thermodynamically accurate ensemble of multiple protonation states with frequent state transitions. The analysis is performed with an ensemble obtained from an EDS Hamiltonian without smoothing, s = ∞, which strictly follows the minimum energy surface of the end states. The accuracy and efficiency of this method is tested on aspartic acid, lysine, and glutamic acid, which have two protonation states, a histidine with three states, a four-residue peptide with four states, and snake cardiotoxin with eight states. The pKa values estimated with the EDS-HREX method agree well with the experimental pKa values. The mean absolute errors of small benchmark systems range from 0.03 to 0.17 pKa units, and those of three titratable groups of snake cardiotoxin range from 0.2 to 1.6 pKa units. This study demonstrates that EDS-HREX is a potent theoretical framework, which gives the correct description of multiple protonation states and good calculated pKa values. PMID:25061443
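The EDS hybrid Hamiltonian itself is compact enough to show directly. Below is a small Python sketch (ours, not the authors' code; the units and the 300 K temperature are assumptions) of the standard EDS reference energy with smoothness parameter s, using log-sum-exp for numerical stability:

```python
import numpy as np

BETA = 1.0 / (0.0019872 * 300.0)  # 1/(k_B T) in mol/kcal at 300 K (assumed)

def eds_energy(energies, offsets, s):
    """Enveloping-distribution-sampling hybrid energy of N end states:

        E_EDS = -1/(beta*s) * ln( sum_i exp(-beta*s*(E_i - dE_i)) )

    A small smoothness parameter s lowers the barriers between states;
    s -> infinity recovers the minimum-energy end-state surface.
    """
    e = -BETA * s * (np.asarray(energies) - np.asarray(offsets))
    m = e.max()                           # log-sum-exp for stability
    return -(m + np.log(np.sum(np.exp(e - m)))) / (BETA * s)

# Two protonation states, 3 kcal/mol apart, with zero free-energy offsets:
# small s blends the states, large s follows the lower surface (~0 here).
for s in (0.05, 1.0, 100.0):
    print(s, eds_energy([0.0, 3.0], [0.0, 0.0], s))
```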
A 60 GOPS/W, -1.8 V to 0.9 V body bias ULP cluster in 28 nm UTBB FD-SOI technology
NASA Astrophysics Data System (ADS)
Rossi, Davide; Pullini, Antonio; Loi, Igor; Gautschi, Michael; Gürkaynak, Frank K.; Bartolini, Andrea; Flatresse, Philippe; Benini, Luca
2016-03-01
Ultra-low power operation and extreme energy efficiency are strong requirements for a number of high-growth application areas, such as E-health, Internet of Things, and wearable Human-Computer Interfaces. A promising approach to achieve up to one order of magnitude of improvement in energy efficiency over current generation of integrated circuits is near-threshold computing. However, frequency degradation due to aggressive voltage scaling may not be acceptable across all performance-constrained applications. Thread-level parallelism over multiple cores can be used to overcome the performance degradation at low voltage. Moreover, enabling the processors to operate on-demand and over a wide supply voltage and body bias ranges allows to achieve the best possible energy efficiency while satisfying a large spectrum of computational demands. In this work we present the first ever implementation of a 4-core cluster fabricated using conventional-well 28 nm UTBB FD-SOI technology. The multi-core architecture we present in this work is able to operate on a wide range of supply voltages starting from 0.44 V to 1.2 V. In addition, the architecture allows a wide range of body bias to be applied from -1.8 V to 0.9 V. The peak energy efficiency 60 GOPS/W is achieved at 0.5 V supply voltage and 0.5 V forward body bias. Thanks to the extended body bias range of conventional-well FD-SOI technology, high energy efficiency can be guaranteed for a wide range of process and environmental conditions. We demonstrate the ability to compensate for up to 99.7% of chips for process variation with only ±0.2 V of body biasing, and compensate temperature variation in the range -40 °C to 120 °C exploiting -1.1 V to 0.8 V body biasing. When compared to leading-edge near-threshold RISC processors optimized for extremely low power applications, the multi-core architecture we propose has 144× more performance at comparable energy efficiency levels. Even when compared to other low-power processors with comparable performance, including those implemented in 28 nm technology, our platform provides 1.4× to 3.7× better energy efficiency.
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
2015-01-01
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force. PMID:25136268
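The free-energy perturbation step at the heart of this protocol can be sketched with the Zwanzig formula. The snippet below is our illustration, using synthetic Gaussian energy gaps rather than QM/MM data, and checks the estimator against the analytic result for a Gaussian gap distribution.

```python
import numpy as np

def fep_shift(dE, beta):
    """Zwanzig free-energy perturbation from a coarse- to a fine-physics
    potential:

        dA = -kT * ln < exp(-beta * (E_fine - E_coarse)) >_coarse,

    averaged over coarse-physics samples (log-sum-exp for stability).
    """
    x = -beta * np.asarray(dE)
    m = x.max()
    return -(m + np.log(np.mean(np.exp(x - m)))) / beta

# Energy gaps (kcal/mol) sampled on the coarse (e.g., PM6/MM) surface
beta = 1.0 / (0.0019872 * 300.0)
gaps = np.random.normal(2.0, 1.0, 5000)
# For a Gaussian gap distribution, dA = mu - beta*sigma**2/2 analytically
print(fep_shift(gaps, beta), 2.0 - beta * 1.0 / 2)
```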
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that can deliver high speed with zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three basic gate types. Series of reciprocal transmission lines are placed between gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of Reciprocal Quantum Logic is area: distributing the power supply properly requires splitters, which occupy a large area. Distributed arithmetic implements a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the bits are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
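A minimal software model of distributed arithmetic (ours, illustrating the bit-level idea rather than any RQL circuit) precomputes a lookup table of partial sums of the constant vector and accumulates the dot product with shifts and adds only:

```python
def da_dot(constants, x_words, bits=8):
    """Distributed-arithmetic dot product of a constant vector with a
    signed variable vector of two's-complement words.

    All 2**N partial sums of the constants are precomputed in a lookup
    table; the result is accumulated bit-plane by bit-plane with shifts
    and adds only, so no general multiplier is needed.
    """
    n = len(constants)
    table = [sum(c for c, bit in zip(constants, range(n)) if (addr >> bit) & 1)
             for addr in range(1 << n)]
    acc = 0
    for b in range(bits):
        # Address formed from bit b of every input word
        addr = sum(((x >> b) & 1) << i for i, x in enumerate(x_words))
        weight = -(1 << b) if b == bits - 1 else (1 << b)  # sign-bit weight
        acc += weight * table[addr]
    return acc

# Check against a direct multiply-accumulate (x in [-128, 127], bits=8)
consts, xs = [3, -5, 7], [12, -9, 30]
xw = [x & 0xFF for x in xs]                 # two's-complement encoding
print(da_dot(consts, xw), sum(c * x for c, x in zip(consts, xs)))
```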
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential to separate correlative sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for solving online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlative face images show the validity of the proposed method.
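For reference, the classic multiplicative NMF updates that this line of work builds on can be sketched as follows (a plain Lee-Seung batch version; the paper's incremental and volume-constrained updates add further factors to these rules):

```python
import numpy as np

def nmf_multiplicative(X, r, iters=500, eps=1e-9):
    """Basic NMF X ~ W @ H via Lee-Seung multiplicative updates.

    The elementwise multiply/divide updates keep W and H nonnegative
    by construction, which is what makes NMF suitable for separating
    nonnegative (and possibly correlated) sources.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Recover a rank-2 nonnegative mixture
W0 = np.abs(np.random.randn(20, 2)); H0 = np.abs(np.random.randn(2, 50))
W, H = nmf_multiplicative(W0 @ H0, 2)
print(np.linalg.norm(W0 @ H0 - W @ H))  # small reconstruction residual
```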
Meral, Derya; Provasi, Davide; Prada-Gracia, Diego; Möller, Jan; Marino, Kristen; Lohse, Martin J; Filizola, Marta
2018-05-16
Various experimental and computational techniques have been employed over the past decade to provide structural and thermodynamic insights into G Protein-Coupled Receptor (GPCR) dimerization. Here, we use multiple microsecond-long, coarse-grained, biased and unbiased molecular dynamics simulations (a total of ~4 milliseconds) combined with multi-ensemble Markov state models to elucidate the kinetics of homodimerization of a prototypic GPCR, the µ-opioid receptor (MOR), embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC)/cholesterol lipid bilayer. Analysis of these computations identifies kinetically distinct macrostates comprising several different short-lived dimeric configurations of either inactive or activated MOR. Calculated kinetic rates and fractions of dimers at different MOR concentrations suggest a negligible population of MOR homodimers at physiological concentrations, which is supported by acceptor photobleaching fluorescence resonance energy transfer (FRET) experiments. This study provides a rigorous, quantitative explanation for some conflicting experimental data on GPCR oligomerization.
Laserthermia: a new computer-controlled contact Nd:YAG system for interstitial local hyperthermia.
Daikuzono, N; Suzuki, S; Tajiri, H; Tsunekawa, H; Ohyama, M; Joffe, S N
1988-01-01
Contact Nd:YAG laser surgery is assuming a greater importance in endoscopic and open surgery, allowing coagulation, cutting, and vaporization with greater precision and safety. A new contact probe allows a wider angle of irradiation and diffusion of low-power laser energy (less than 5 watts), using the interstitial technique for producing local hyperthermia. Temperature sensors that monitor continuously can be placed directly into the surrounding tissue or tumor. Using a computer program interfaced with the laser and sensors, a controlled and stable temperature (e.g., 42 degrees C) can be produced in a known volume of tissue over a prolonged period of time (e.g., 20-40 min). This new laserthermia system, using a single low-power Nd:YAG laser for interstitial local hyperthermia, may offer many new advantages in the experimental treatment and clinical management of carcinoma. A multiple system is now being developed.
Beletskiy, Evgeny V; Wang, Xue-Bin; Kass, Steven Robert
2016-10-05
Benzene rings substituted with one to three thiourea-containing arms (1-3) were examined by photoelectron spectroscopy and density functional theory computations. Their conjugate bases and chloride, acetate, and dihydrogen phosphate anion clusters are reported. The resulting vertical and adiabatic detachment energies span 3.93-5.82 eV (VDE) and 3.65-5.10 eV (ADE) for the deprotonated species, and 4.88-5.97 eV (VDE) and 4.45-5.60 eV (ADE) for the anion complexes. These results reveal the stabilizing effects of multiple hydrogen bonds and anionic host-guest interactions in the gas phase. Previously measured equilibrium binding constants in aqueous dimethyl sulfoxide for all three thioureas are compared to the present results; cooperative binding is uniformly observed in the gas phase but only in one case (i.e., 3 • H2PO4-) in solution.
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
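A small worked example of how mode powers combine into the unit energy consumption (UEC) referenced above; the duty-cycle fractions and power levels here are illustrative assumptions, not the study's measurements.

```python
# Hypothetical duty cycle and measured power levels for one monitor;
# the mode powers and hours below are illustrative, not the study's data.
HOURS_PER_YEAR = 8760
duty = {"on": 0.25, "sleep": 0.35, "off": 0.40}   # fraction of the year
power_w = {"on": 60.0, "sleep": 2.0, "off": 1.0}  # watts per mode

uec_kwh = sum(duty[m] * HOURS_PER_YEAR * power_w[m] for m in duty) / 1000.0
print(f"unit energy consumption: {uec_kwh:.0f} kWh/yr")
# With sleep power this low, the on and off modes dominate the UEC,
# which is the trend the measurements above identify.
```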
DHS Summary Report -- Robert Weldon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weldon, Robert A.
This summer I worked on benchmarking the Lawrence Livermore National Laboratory fission multiplicity capability used in the Monte Carlo particle transport code MCNPX. This work involved running simulations and then comparing the simulation results with experimental measurements. Outlined in this paper is a brief description of the work completed this summer, skills and knowledge gained, and how the internship has impacted my planning for the future. Neutron multiplicity counting is a neutron detection technique that leverages the multiplicity emissions of neutrons from fission to identify various actinides in a lump of material. The identification of individual actinides in lumps of material crossing our borders, especially U-235 and Pu-239, is a key component for maintaining the safety of the country from nuclear threats. Several multiplicity emission options from spontaneous and induced fission already existed in MCNPX 2.4.0. These options can be accessed through use of the 6th entry on the PHYS:N card. Lawrence Livermore National Laboratory (LLNL) developed a physics model for the simulation of neutron and gamma ray emission from fission and photofission that was included in MCNPX 2.7.B as an undocumented feature and then was documented in MCNPX 2.7.C. The LLNL multiplicity capability provided a different means for MCNPX to simulate neutron and gamma-ray distributions for neutron-induced, spontaneous, and photonuclear fission reactions. The original testing on the model for implementation into MCNPX was conducted by Gregg McKinney and John Hendricks. The model is an encapsulation of measured data of neutron multiplicity distributions from Gwin, Spencer, and Ingle, along with the data from Zucker and Holden. One of the founding principles of MCNPX was that it would have several redundant capabilities, providing the means of testing and including various physics packages. Though several multiplicity sampling methodologies already existed within MCNPX, the LLNL fission multiplicity was included to provide a separate capability for computing multiplicity as well as several new features not already included in MCNPX. These new features include: (1) prompt gamma emission/multiplicity from neutron-induced fission; (2) neutron multiplicity and gamma emission/multiplicity from photofission; and (3) an option to enforce energy correlation for gamma neutron multiplicity emission. These new capabilities allow correlated signal detection for identifying the presence of special nuclear material (SNM). Therefore, these new capabilities help meet the missions of the Domestic Nuclear Detection Office (DNDO), which is tasked with developing nuclear detection strategies for identifying potential radiological and nuclear threats, by providing new simulation capability for detection strategies that leverage the new available physics in the LLNL multiplicity capability. Two types of tests were accomplished this summer to test the default LLNL neutron multiplicity capability: neutron-induced fission tests and spontaneous fission tests. Both cases set the 6th entry on the PHYS:N card to 5 (i.e. use LLNL multiplicity). The neutron-induced fission tests utilized a simple 0.001 cm radius sphere where 0.0253 eV neutrons were released at the sphere center. Neutrons were forced to immediately collide in the sphere and release all progeny from the sphere, without further collision, using the LCA card, LCA 7j -2 (therefore density and size of the sphere were irrelevant).
Enough particles were run to ensure that the average error of any specific multiplicity did not exceed 0.36%. Neutron-induced fission multiplicities were computed for U-233, U-235, Pu-239, and Pu-241. The spontaneous fission tests also used the same spherical geometry, except: (1) the LCA card was removed; (2) the density of the sphere was set to 0.001 g/cm3; and (3) instead of emitting a thermal neutron, the PAR keyword was set to PAR=SF. The purpose of the small density was to ensure that the spontaneous fission neutrons would not further interact and induce fissions (i.e. the mean free path greatly exceeded the size of the sphere). Enough particles were run to ensure that the average error of any specific spontaneous multiplicity did not exceed 0.23%. Spontaneous fission multiplicities were computed for U-238, Pu-238, Pu-240, Pu-242, Cm-242, and Cm-244. All of the computed results were compared against experimental results compiled by Holden at Brookhaven National Laboratory.
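A sketch of the kind of comparison involved (ours; the multiplicity values below are made up for illustration and are not the Holden or Zucker-Holden data) reduces each simulated histogram to its mean (nu-bar) and a per-bin deviation from the experimental distribution:

```python
import numpy as np

def compare_multiplicity(sim_counts, exp_pdf):
    """Compare a simulated neutron-multiplicity histogram with an
    experimental distribution via the mean (nu-bar) and the maximum
    absolute deviation per multiplicity bin."""
    sim_pdf = np.asarray(sim_counts, float) / np.sum(sim_counts)
    nu = np.arange(len(sim_pdf))
    nu_bar_sim = np.sum(nu * sim_pdf)
    nu_bar_exp = np.sum(nu * np.asarray(exp_pdf))
    return nu_bar_sim, nu_bar_exp, np.max(np.abs(sim_pdf - exp_pdf))

# Illustrative numbers only (not the benchmark data discussed above)
sim = [2100, 25300, 32500, 26800, 10400, 2900]    # tallied events per nu
exp = [0.032, 0.244, 0.334, 0.268, 0.097, 0.025]  # experimental P(nu)
print(compare_multiplicity(sim, exp))
```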
Web-based reactive transport modeling using PFLOTRAN
NASA Astrophysics Data System (ADS)
Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.
2017-12-01
Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure. The web application consists of (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration, data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the rationale for different interfaces, implementation choices, as well as the planned path forward.
Williams, Eric
2004-11-15
The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.
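The production-dominated split can be checked with back-of-envelope arithmetic; the 3-year service life below is our assumption, chosen only to make the stated numbers consistent.

```python
# Back-of-envelope check of the production-dominated energy split,
# assuming (our assumption, not stated in the abstract) a 3-year life.
production_mj = 6400.0          # total manufacturing energy, MJ
annual_total_mj = 2600.0        # annual life cycle energy burden, MJ/yr
lifespan_years = 3.0

annual_production = production_mj / lifespan_years      # ~2133 MJ/yr
annual_operation = annual_total_mj - annual_production  # ~467 MJ/yr
print(annual_production / annual_total_mj)  # ~0.82, near the 81% figure
```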
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
NASA Astrophysics Data System (ADS)
DeBeer, Chris M.; Pomeroy, John W.
2017-10-01
The spatial heterogeneity of mountain snow cover and ablation is important in controlling patterns of snow cover depletion (SCD), meltwater production, and runoff, yet is not well-represented in most large-scale hydrological models and land surface schemes. Analyses were conducted in this study to examine the influence of various representations of snow cover and melt energy heterogeneity on both simulated SCD and stream discharge from a small alpine basin in the Canadian Rocky Mountains. Simulations were performed using the Cold Regions Hydrological Model (CRHM), where point-scale snowmelt computations were made using a snowpack energy balance formulation and applied to spatial frequency distributions of snow water equivalent (SWE) on individual slope-, aspect-, and landcover-based hydrological response units (HRUs) in the basin. Hydrological routines were added to represent the vertical and lateral transfers of water through the basin and channel system. From previous studies it is understood that the heterogeneity of late winter SWE is a primary control on patterns of SCD. The analyses here showed that spatial variation in applied melt energy, mainly due to differences in net radiation, has an important influence on SCD at multiple scales and basin discharge, and cannot be neglected without serious error in the prediction of these variables. A single basin SWE distribution using the basin-wide mean SWE and coefficient of variation (CV; standard deviation/mean) was found to represent the fine-scale spatial heterogeneity of SWE sufficiently well. Simulations that accounted for differences in mean SWE among HRUs but neglected the sub-HRU heterogeneity of SWE were found to yield similar discharge results as simulations that included this heterogeneity, while SCD was poorly represented, even at the basin level. Finally, applying point-scale snowmelt computations based on a single SWE depth for each HRU (thereby neglecting spatial differences in internal snowpack energetics over the distributions) was found to yield similar SCD and discharge results as simulations that resolved internal energy differences. Spatial/internal snowpack melt energy effects are more pronounced at times earlier in spring before the main period of snowmelt and SCD, as shown in previously published work. The paper discusses the importance of these findings as they apply to the warranted complexity of snowmelt process simulation in cold mountain environments, and shows how the end-of-winter SWE distribution represents an effective means of resolving snow cover heterogeneity at multiple scales for modelling, even in steep and complex terrain.
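To illustrate how a mean SWE and CV translate into a snow cover depletion curve (our sketch; the lognormal form is a common assumption in the SCD literature and not necessarily CRHM's internal representation), the snow-covered fraction after cumulative melt M is P(SWE > M):

```python
import numpy as np
from scipy import stats

def scd_curve(mean_swe, cv, melt_depths):
    """Snow-covered area fraction versus cumulative melt, assuming an
    illustrative lognormal SWE distribution with the given basin-wide
    mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv**2)                 # lognormal parameters from
    mu = np.log(mean_swe) - 0.5 * sigma2         # the mean and CV
    dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    return dist.sf(melt_depths)                  # P(SWE > M)

melt = np.linspace(0, 600, 7)                    # mm of cumulative melt
print(scd_curve(200.0, 0.5, melt))               # depletes from 1 toward 0
```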
Fast discrete cosine transform structure suitable for implementation with integer computation
NASA Astrophysics Data System (ADS)
Jeong, Yeonsik; Lee, Imgeun
2000-10-01
The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme that reduces the number of multiplication stages and requires fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that can occur in integer computation.
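For context, a direct O(N²) DCT-II (the transform such fast algorithms accelerate) can be written in a few lines; fast schemes factor this sum into sparse stages so that most multiplications collapse into a final scaling stage, as described above.

```python
import numpy as np
from scipy.fft import dct

def dct_ii(x):
    """Direct N^2 evaluation of the (unnormalized) DCT-II:

        y_k = sum_n x_n * cos(pi * (2n + 1) * k / (2N))
    """
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                     for k in range(N)])

x = np.random.rand(8)
# Cross-check against scipy's unnormalized DCT-II (which carries a factor 2)
print(np.allclose(dct_ii(x), dct(x, type=2) / 2))
```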
Multiple resonant railgun power supply
Honig, E.M.; Nunnally, W.C.
1985-06-19
A multiple repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. A plurality of serially connected paired parallel rails are powered by similar power supplies. Each supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles. The multiple serial operation permits relatively small energy components to deliver overall relatively large amounts of energy to the projectiles being propelled.
Multiple resonant railgun power supply
Honig, Emanuel M.; Nunnally, William C.
1988-01-01
A multiple repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. A plurality of serially connected paired parallel rails are powered by similar power supplies. Each supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles. The multiple serial operation permits relatively small energy components to deliver overall relatively large amounts of energy to the projectiles being propelled.
Projecting Wind Energy Potential Under Climate Change with Ensemble of Climate Model Simulations
NASA Astrophysics Data System (ADS)
Jain, A.; Shashikanth, K.; Ghosh, S.; Mukherjee, P. P.
2013-12-01
Recent years have witnessed an increasing global concern over energy sustainability and security, triggered by a number of issues, such as (though not limited to) fossil fuel depletion, energy resource geopolitics, the economic efficiency versus population growth debate, environmental concerns, and climate change. Wind energy is a renewable and sustainable form of energy in which wind turbines convert the kinetic energy of wind into electrical energy. Global warming and differential surface heating may significantly impact the wind velocity and hence the wind energy potential. Sustainable design of windmills requires understanding the impacts of climate change on wind energy potential, which we evaluate here with multiple General Circulation Models (GCMs). GCMs simulate the climate variables globally considering the greenhouse emission scenarios provided as Representative Concentration Pathways (RCPs). Here we use new-generation climate model outputs obtained from the Coupled Model Intercomparison Project 5 (CMIP5). We first compute the wind energy potential with reanalysis data (NCEP/NCAR) at a spatial resolution of 2.5°, where the gridded data are fitted to a Weibull distribution and, with the Weibull parameters, the wind energy densities are computed at different grids. The same methodology is then applied to the CMIP5 outputs (the resultant of U-wind and V-wind) of MRI, CMCC, BCC, CanESM, and INMCM4 for historical runs. This is performed separately for four seasons globally: MAM, JJA, SON, and DJF. We observe that the multi-model average of wind energy density for the historic period has significant bias with respect to that of the reanalysis product. Here we develop a quantile-based superensemble approach where GCM quantiles corresponding to selected CDF values are regressed to reanalysis data. It is observed that this regression approach takes care of both bias in GCMs and the combination of GCMs. With the superensemble, we observe that the historical wind energy density agrees quite well with the reanalysis/observed output. We apply the same for the future under RCP scenarios. We observe spatially and temporally varying global change of wind energy density. The underlying assumption is that the regression relationship will also hold good for the future. The results highlight the need to change the design standards of windmills at different locations, considering climate change, and at the same time the requirement of height modifications for existing mills to produce the same energy in the future.
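The Weibull-based energy density computation described above reduces to a short calculation: fit the shape k and scale c, then take the expectation of ½ρv³ under the fitted distribution. A minimal sketch (ours; sea-level air density assumed):

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

RHO = 1.225  # air density, kg/m^3 (sea-level assumption)

def wind_power_density(speeds):
    """Fit a Weibull distribution to wind speeds and return the mean
    wind power density in W/m^2:

        P = 0.5 * rho * c**3 * Gamma(1 + 3/k),

    i.e. the expectation of 0.5*rho*v**3 under Weibull(k, c)."""
    k, _, c = stats.weibull_min.fit(speeds, floc=0)  # shape k, scale c
    return 0.5 * RHO * c**3 * gamma(1.0 + 3.0 / k)

# Synthetic check: k=2, c=8 m/s gives 0.5*1.225*512*Gamma(2.5) ~ 417 W/m^2
v = stats.weibull_min.rvs(2.0, scale=8.0, size=20000, random_state=1)
print(wind_power_density(v))
```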
Biomolecular computers with multiple restriction enzymes.
Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz
2017-01-01
The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann "bottleneck". Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro's group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases.
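In software terms, the automaton such a DNA computer realizes can be simulated in a few lines (our sketch; the states, symbols, and example transition table are hypothetical, not a specific enzyme encoding):

```python
def run_automaton(transitions, start, word):
    """Simulate a deterministic two-symbol finite automaton of the
    Shapiro type in software: each restriction-enzyme-mediated cleavage
    step corresponds to one application of the transition function."""
    state = start
    for symbol in word:
        key = (state, symbol)
        if key not in transitions:
            return None  # computation aborts, as when no ligation occurs
        state = transitions[key]
    return state

# Two-state automaton accepting words with an even number of 'b' symbols
T = {("S0", "a"): "S0", ("S0", "b"): "S1",
     ("S1", "a"): "S1", ("S1", "b"): "S0"}
final = run_automaton(T, "S0", "abba")
print(final == "S0")  # True: 'abba' contains an even number of b's
```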
Ubiquitous Complete in a Web 2.0 World
ERIC Educational Resources Information Center
Bull, Glen; Ferster, Bill
2006-01-01
In the third wave of computing, people will interact with multiple computers in multiple ways in every setting. The value of ubiquitous computing is enhanced and reinforced by another trend: the transition to a Web 2.0 world. In a Web 2.0 world, applications and data reside on the Web itself. Schools are not yet approaching a ratio of one…
ERIC Educational Resources Information Center
Teo, Timothy
2010-01-01
Purpose: The purpose of this paper is to examine the effect of gender on pre-service teachers' computer attitudes. Design/methodology/approach: A total of 157 pre-service teachers completed a survey questionnaire measuring their responses to four constructs which explain computer attitude. These were administered during the teaching term where…
Code of Federal Regulations, 2013 CFR
2013-01-01
... information on DOE computers? 727.5 Section 727.5 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.5 What acknowledgment and consent is required for access to information on DOE computers? An individual may not be granted access to information on a DOE...
Code of Federal Regulations, 2012 CFR
2012-01-01
... information on DOE computers? 727.5 Section 727.5 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.5 What acknowledgment and consent is required for access to information on DOE computers? An individual may not be granted access to information on a DOE...
Code of Federal Regulations, 2014 CFR
2014-01-01
... information on DOE computers? 727.5 Section 727.5 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.5 What acknowledgment and consent is required for access to information on DOE computers? An individual may not be granted access to information on a DOE...
Code of Federal Regulations, 2011 CFR
2011-01-01
... information on DOE computers? 727.5 Section 727.5 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.5 What acknowledgment and consent is required for access to information on DOE computers? An individual may not be granted access to information on a DOE...
Code of Federal Regulations, 2010 CFR
2010-01-01
... information on DOE computers? 727.5 Section 727.5 Energy DEPARTMENT OF ENERGY CONSENT FOR ACCESS TO INFORMATION ON DEPARTMENT OF ENERGY COMPUTERS § 727.5 What acknowledgment and consent is required for access to information on DOE computers? An individual may not be granted access to information on a DOE...
Measurement of inelastic cross sections for low-energy electron scattering from DNA bases.
Michaud, Marc; Bazin, Marc; Sanche, Léon
2012-01-01
To determine experimentally the absolute cross sections (CS) to deposit various amounts of energy into DNA bases by low-energy electron (LEE) impact. Electron energy loss (EEL) spectra of DNA bases were recorded for different LEE impact energies on the molecules deposited at very low coverage on an inert argon (Ar) substrate. Following their normalisation to the effective incident electron current and molecular surface number density, the EEL spectra were then fitted with multiple Gaussian functions in order to delimit the various excitation energy regions. The CS to excite a molecule into its various excitation modes were finally obtained from computing the area under the corresponding Gaussians. The EEL spectra and absolute CS for the electronic excitations of pyrimidine and the DNA bases thymine, adenine, and cytosine by electron impacts below 18 eV were reported for the molecules deposited at about monolayer coverage on a solid Ar substrate. The CS for electronic excitations of DNA bases by LEE impact were found to lie within the 10−16 to 10−18 cm2 range. The large value of the total ionisation CS indicated that ionisation of DNA bases by LEE is an important dissipative process via which ionising radiation degrades and is absorbed in DNA.
Measurement of inelastic cross sections for low-energy electron scattering from DNA bases
Michaud, Marc; Bazin, Marc.; Sanche, Léon
2013-01-01
Purpose Determine experimentally the absolute cross sections (CS) to deposit various amount of energies into DNA bases by low-energy electron (LEE) impact. Materials and methods Electron energy loss (EEL) spectra of DNA bases are recorded for different LEE impact energies on the molecules deposited at very low coverage on an inert argon (Ar) substrate. Following their normalisation to the effective incident electron current and molecular surface number density, the EEL spectra are then fitted with multiple Gaussian functions in order to delimit the various excitation energy regions. The CS to excite a molecule into its various excitation modes are finally obtained from computing the area under the corresponding Gaussians. Results The EEL spectra and absolute CS for the electronic excitations of pyrimidine and the DNA bases thymine, adenine, and cytosine by electron impacts below 18 eV are reported for the molecules deposited at about monolayer coverage on a solid Ar substrate. Conclusions The CS for electronic excitations of DNA bases by LEE impact are found to lie within the 10−16 – 10−18 cm2 range. The large value of the total ionisation CS indicates that ionisation of DNA bases by LEE is an important dissipative process via which ionising radiation degrades and is absorbed in DNA. PMID:21615242
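The Gaussian-fitting step described in the methods can be sketched as follows (our illustration with synthetic data; real spectra would first be normalised to the incident current and surface number density before the fitted areas are converted to cross sections):

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(E, *p):
    """Sum of Gaussians; p = (A1, mu1, w1, A2, mu2, w2, ...)."""
    y = np.zeros_like(E)
    for A, mu, w in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((E - mu) / w) ** 2)
    return y

# Synthetic EEL-like spectrum with two overlapping loss features
E = np.linspace(2.0, 8.0, 400)
y = multi_gauss(E, 1.0, 3.8, 0.4, 0.6, 5.2, 0.7) \
    + np.random.normal(0, 0.01, E.size)

p0 = [1.0, 3.5, 0.5, 0.5, 5.0, 0.5]  # initial guesses per band
popt, _ = curve_fit(multi_gauss, E, y, p0=p0)
# Area under each Gaussian band: A * w * sqrt(2*pi)
areas = [A * w * np.sqrt(2 * np.pi) for A, w in zip(popt[0::3], popt[2::3])]
print(areas)  # band areas; normalisation then yields the cross sections
```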