Sample records for computing molecular multi-center

  1. The development of a revised version of multi-center molecular Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi

    2012-04-01

    Ornstein-Zernike (OZ)-type theory is a powerful tool for obtaining the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computing of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of function were examined as the reference part to investigate the numerical robustness of the method. As benchmarks, the method is applied to water, benzene in aqueous solution, and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.
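
    A minimal sketch may make the OZ machinery concrete. The following is the generic single-component Ornstein-Zernike/HNC iteration that 3D molecular OZ theories build on, not the multi-center MOZ method itself; the state point, grid, and mixing parameter are illustrative assumptions.

    ```python
    # Generic OZ/HNC cycle for a one-component Lennard-Jones fluid (sketch only).
    import numpy as np
    from scipy.fft import dst

    N, dr = 4096, 0.01                      # radial grid (reduced units)
    r = dr * np.arange(1, N + 1)
    dk = np.pi / ((N + 1) * dr)             # conjugate grid matching DST-I
    k = dk * np.arange(1, N + 1)
    rho, beta = 0.5, 1.0                    # assumed density and 1/kT
    u = 4.0 * (r**-12 - r**-6)              # LJ potential, sigma = epsilon = 1

    def r2k(f):                             # radial Fourier transform f(r) -> f(k)
        return 2.0 * np.pi * dr / k * dst(r * f, type=1)

    def k2r(fh):                            # inverse transform f(k) -> f(r)
        return dk / (4.0 * np.pi**2 * r) * dst(k * fh, type=1)

    gamma = np.zeros(N)                     # indirect correlation, gamma = h - c
    for it in range(5000):
        c = np.exp(-beta * u + gamma) - gamma - 1.0      # HNC closure
        ch = r2k(c)
        gamma_new = k2r(rho * ch**2 / (1.0 - rho * ch))  # OZ relation in k-space
        if np.max(np.abs(gamma_new - gamma)) < 1e-8:
            break
        gamma = 0.9 * gamma + 0.1 * gamma_new            # damped Picard mixing

    g = 1.0 + c + gamma                     # radial distribution function g(r)
    ```

    The paper's reference/residue splitting goes further: with a well-chosen reference function only the small residue component has to be solved per center, which is what makes the scheme robust and amenable to parallelization.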

  2. A hybrid framework of first principles molecular orbital calculations and a three-dimensional integral equation theory for molecular liquids: Multi-center molecular Ornstein-Zernike self-consistent field approach

    NASA Astrophysics Data System (ADS)

    Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi

    2015-07-01

    In this study, we report the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (the multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized, the present approach has the potential to be one of the most efficient procedures for treating chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl- + CH3Cl → ClCH3 + Cl-) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
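
    The self-consistency at the heart of RISM-SCF-type frameworks reduces to a fixed-point loop between two solvers. The sketch below is schematic: both callables are hypothetical stand-ins, not the MC-MOZ-SCF working equations.

    ```python
    # Schematic QM <-> solvent self-consistency loop (stand-in solvers, sketch only).
    def scf_coupling(solve_qm, solve_solvent, q0, tol=1e-6, max_iter=50):
        """solve_solvent(q) -> solvent field; solve_qm(field) -> solute charges."""
        q = q0
        for _ in range(max_iter):
            field = solve_solvent(q)     # 3D solvent distribution around the solute
            q_new = solve_qm(field)      # solute electronic structure in that field
            if max(abs(a - b) for a, b in zip(q_new, q)) < tol:
                return q_new, field      # mutually consistent solution reached
            q = q_new
        raise RuntimeError("QM/solvent coupling did not converge")
    ```

    Since the costly solvent solve here is the MC-MOZ step, its parallel efficiency sets the efficiency of the whole loop, which is the argument the abstract makes.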

  3. A hybrid framework of first principles molecular orbital calculations and a three-dimensional integral equation theory for molecular liquids: multi-center molecular Ornstein-Zernike self-consistent field approach.

    PubMed

    Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi

    2015-07-07

    In this study, we report the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (the multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized, the present approach has the potential to be one of the most efficient procedures for treating chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl(-) + CH3Cl → ClCH3 + Cl(-)) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.

  4. Solution of multi-center molecular integrals of Slater-type orbitals

    NASA Technical Reports Server (NTRS)

    Tai, H.

    1989-01-01

    The troublesome multi-center molecular integrals of Slater-type orbitals (STO) in molecular physics calculations can be evaluated by using the Fourier transform and proper coupling of the two-center exchange integrals. A numerical integration procedure is then readily applied to the final expression, in which the integrand consists of well-known special functions whose arguments contain the geometrical arrangement of the nuclear centers and the exponents of the atomic orbitals. A practical procedure was devised for the calculation of general multi-center molecular integrals coupling arbitrary Slater-type orbitals. Symmetry relations and asymptotic conditions are discussed. Explicit expressions of three-center one-electron nuclear-attraction integrals and four-center two-electron repulsion integrals for STOs of principal quantum number n=2 are listed. A few numerical results are given for the purpose of comparison.
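
    To see why closed-form or Fourier-space routes matter, compare the analytic two-center overlap of two 1s STOs against brute-force quadrature, which must be repeated for every geometry. The exponent and separation below are arbitrary illustrative values.

    ```python
    # Two-center overlap of 1s Slater orbitals: quadrature vs. closed form (sketch).
    import numpy as np
    from scipy.integrate import dblquad

    zeta, R = 1.2, 1.5                               # orbital exponent, separation

    def overlap_numeric(zeta, R):
        # chi(r) = sqrt(zeta^3/pi) exp(-zeta r); integrate in cylindrical (s, z)
        f = lambda s, z: s * np.exp(-zeta * (np.hypot(s, z) + np.hypot(s, z - R)))
        val, _ = dblquad(f, -20.0, 20.0, 0.0, 20.0)  # z outer, s inner
        return (zeta**3 / np.pi) * 2.0 * np.pi * val

    def overlap_closed_form(zeta, R):
        t = zeta * R                                 # standard equal-exponent result
        return np.exp(-t) * (1.0 + t + t * t / 3.0)

    print(overlap_numeric(zeta, R), overlap_closed_form(zeta, R))  # should agree
    ```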

  5. Point charge representation of multicenter multipole moments in calculation of electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1993-01-01

    Distributed Point Charge Models (PCM) for CO, (H2O)2, and HS-SH molecules have been computed from analytical expressions using multi-center multipole moments. The point charges (set of charges including both atomic and non-atomic positions) exactly reproduce both molecular and segmental multipole moments, thus constituting an accurate representation of the local anisotropy of electrostatic properties. In contrast to other known point charge models, PCM can be used to calculate not only intermolecular, but also intramolecular interactions. Comparison of these results with more accurate calculations demonstrated that PCM can correctly represent both weak and strong (intramolecular) interactions, thus indicating the merit of extending PCM to obtain improved potentials for molecular mechanics and molecular dynamics computational methods.
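
    The fitting idea behind such point-charge models fits in a few lines: choose charges at fixed atomic and off-atom sites so that the net charge and dipole are reproduced exactly. Site positions and target moments below are invented for illustration; they are not the paper's CO or (H2O)2 data.

    ```python
    # Fit point charges at given sites to reproduce net charge and dipole (sketch).
    import numpy as np

    sites = np.array([[0.0, 0.0, 0.0],       # atom 1
                      [0.0, 0.0, 1.128],     # atom 2
                      [0.0, 0.0, 0.564]])    # off-atom (bond-center) site

    Q_total = 0.0                            # target net charge (a.u.)
    mu_target = np.array([0.0, 0.0, 0.048])  # target dipole vector (a.u.)

    # Linear constraints: sum(q) = Q_total and sum(q_i * r_i) = mu_target
    A = np.vstack([np.ones(len(sites)), sites.T])   # 4 x n_sites constraint matrix
    b = np.concatenate([[Q_total], mu_target])
    q, *_ = np.linalg.lstsq(A, b, rcond=None)       # minimum-norm exact solution

    print(q, A @ q - b)                      # residual ~ 0: moments reproduced
    ```

    Adding segmental (quadrupole and higher) moment constraints simply appends rows to A, which is how the local anisotropy is captured.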

  6. Argonne Research Library | Argonne National Laboratory

    Science.gov Websites

    Website navigation content for the Argonne Research Library: links to Publications, Researchers, Postdocs, Exascale Computing, the Institute for Molecular Engineering (IME), the Joint Center for Energy Storage Research (JCESR), and the Midwest Center for…

  7. Nanotube Heterojunctions and Endo-Fullerenes for Nanoelectronics

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Menon, M.; Andriotis, Antonis; Cho, K.; Park, Jun; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Topics discussed include: (1) Light-Weight Multi-Functional Materials: Nanomechanics; Nanotubes and Composites; Thermal/Chemical/Electrical Characterization; (2) Biomimetic/Revolutionary Concepts: Evolutionary Computing and Sensing; Self-Heating Materials; (3) Central Computing System: Molecular Electronics; Materials for Quantum Bits; and (4) Molecular Machines.

  8. A multi-emitter fitting algorithm for potential live cell super-resolution imaging over a wide range of molecular densities.

    PubMed

    Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S

    2018-05-25

    Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density, from sparse to ultrahigh, with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less, through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is thus a computationally fast, multi-emitter fitting algorithm that can analyse a wide range of molecular densities. The algorithm is available at https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
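
    The template-matching core of such methods is compact: correlate each frame with a PSF template and take thresholded local maxima as candidate emitters. This generic sketch (Gaussian PSF, assumed sizes and threshold) illustrates the idea only; WTM's wedged templates and high-density handling go well beyond it.

    ```python
    # Generic template-matching emitter localization (sketch, not WTM itself).
    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.ndimage import maximum_filter

    def gaussian_psf(size=7, sigma=1.3):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return g / g.sum()

    def localize(frame, threshold):
        tpl = gaussian_psf()
        # flipping the template turns convolution into cross-correlation
        score = fftconvolve(frame, tpl[::-1, ::-1], mode="same")
        peaks = (score == maximum_filter(score, size=5)) & (score > threshold)
        return np.argwhere(peaks)       # (row, col) candidate emitter positions

    # usage: coords = localize(camera_frame.astype(float), threshold=...)
    ```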

  9. Exascale computing and what it means for shock physics

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10¹⁸ operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.

  10. Investigating the Mechanism of Action and the Identification of Breast Carcinogens by Computational Analysis of Female Rodent Carcinogens

    DTIC Science & Technology

    2006-08-01

    preparing a COBRE Molecular Targets Project with a goal to extend the computational work of Specific Aims of this project to the discovery of novel...million Center of Biomedical Research Excellence (COBRE) grant from the National Center for Research Resources at the National Institutes of Health...three-year COBRE-funded project in Molecular Targets. My recruitment to the University of Louisville’s Brown Cancer Center and my proposed COBRE

  11. Transportation Research and Analysis Computing Center (TRACC) Year 6 Quarter 4 Progress Report

    DOT National Transportation Integrated Search

    2013-03-01

    Argonne National Laboratory initiated a FY2006-FY2009 multi-year program with the US Department of Transportation (USDOT) on October 1, 2006, to establish the Transportation Research and Analysis Computing Center (TRACC). As part of the TRACC project...

  12. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities, especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions, including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST. Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1 Licensing provisions: BSD 3-clause. Programming language: C, C++. External routines/libraries: for compiling: SCons, MPI (optional). Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl. Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016. Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version. Nature of problem: coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver, whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton. Solution method: we couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available). Reasons for the new version: we have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows molecular-continuum Couette flow simulations to be run out of the box. No external tools or simulation codes are required anymore; however, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests (remark on MaMiCo V1.1). Summary of revisions: open boundary forcing; multi-instance MD sampling; support for transient molecular-continuum systems. Restrictions: currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de. Additional comments: please see file license_mamico.txt for further details regarding distribution and advertising of this software.
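
    The multi-instance sampling idea can be illustrated in a few lines: average the same observable over an ensemble of independently seeded particle simulations instead of time-averaging one long run. The Langevin toy dynamics below is a stand-in for an MD instance, not MaMiCo's solver interface.

    ```python
    # Ensemble (multi-instance) sampling of a transient signal (sketch only).
    import numpy as np

    def md_instance(rng, steps=200, dt=0.01, gamma=1.0, kT=1.0):
        v = 0.0                                 # one velocity degree of freedom
        out = np.empty(steps)
        for i in range(steps):                  # Euler-Maruyama Langevin dynamics
            v += -gamma * v * dt + np.sqrt(2 * gamma * kT * dt) * rng.standard_normal()
            out[i] = v
        return out

    M = 64                                      # number of independent MD instances
    rngs = [np.random.default_rng(seed) for seed in range(M)]
    samples = np.array([md_instance(r) for r in rngs])   # embarrassingly parallel
    smooth = samples.mean(axis=0)               # ensemble-averaged transient signal
    print(samples[0].std(), smooth.std())       # noise shrinks roughly sqrt(M)-fold
    ```

    Because each instance is independent, the ensemble maps directly onto separate compute nodes, which is why the approach scales to large core counts.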

  13. Computational Nanotechnology Molecular Electronics, Materials and Machines

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics cover include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  14. Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot

    DTIC Science & Technology

    2015-04-01

    A novel, multi-segmented...high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the...to be demonstrated. The Unmanned Systems Group at SPAWAR Systems Center Pacific has developed a multi-segment magnetic crawler robot (MSMR

  15. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) simulate a hypothetical programmable molecular machine replicating itself and building other products; (2) develop molecular manufacturing CAD (computer-aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components; (3) characterize nanotechnologically accessible materials of aerospace interest, since such materials may have excellent strength and thermal properties; and (4) collaborate with experimentalists. Current in-house activities include: (1) development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes, with early work focusing on gears; (2) a design for high-density atomically precise memory; (3) design of nanotechnology systems based on biology; (4) characterization of diamondoid mechanosynthetic pathways; (5) studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity; (6) studies of entropic effects during self-assembly; and (7) characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  16. School Data Processing Services in Texas. A Cooperative Approach. [Revised.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin. Management Information Center.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within three to eight ESC regions; each of the five…

  17. School Data Processing Services in Texas: A Cooperative Approach.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within three to eight ESC regions; each of the five…

  18. School Data Processing Services in Texas: A Cooperative Approach.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within three to eight ESC regions; each of the five…

  19. Remote Science Operation Center research

    NASA Technical Reports Server (NTRS)

    Banks, P. M.

    1986-01-01

    Progress in the following areas is discussed: the design, planning and operation of a remote science payload operations control center; design and planning of a data link via satellite; and the design and prototyping of an advanced workstation environment for multi-media (3-D computer aided design/computer aided engineering, voice, video, text) communications and operations.

  20. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction.

    PubMed

    Granovsky, Alexander A

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.
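
    As a point of reference for what "semi-numerical" improves on, a fully numerical nuclear gradient by central differences looks as follows. This generic sketch is not Granovsky's algorithm (which avoids both this brute-force cost and the coupled-perturbed MCHF equations); `energy` is a hypothetical callable.

    ```python
    # Brute-force central-difference nuclear gradient (baseline sketch only).
    import numpy as np

    def numerical_gradient(energy, coords, h=1e-4):
        """energy: callable mapping an (n_atoms, 3) array to a float (hartree)."""
        grad = np.zeros_like(coords)
        for idx in np.ndindex(coords.shape):      # loop over every x, y, z coordinate
            shift = np.zeros_like(coords)
            shift[idx] = h
            grad[idx] = (energy(coords + shift) - energy(coords - shift)) / (2 * h)
        return grad   # costs 6 * n_atoms energy evaluations per geometry, which is
                      # why cheaper analytic/semi-numerical schemes matter at
                      # 1000+ basis functions
    ```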

  21. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granovsky, Alexander A., E-mail: alex.granovsky@gmail.com

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  22. Logic circuits based on molecular spider systems.

    PubMed

    Mo, Dandan; Lakin, Matthew R; Stefanovic, Darko

    2016-08-01

    Spatial locality brings the advantages of computation speed-up and sequence reuse to molecular computing. In particular, molecular walkers that undergo localized reactions are of interest for implementing logic computations at the nanoscale. We use molecular spider walkers to implement logic circuits. We develop an extended multi-spider model with a dynamic environment wherein signal transmission is triggered via localized reactions, and use this model to implement three basic gates (AND, OR, NOT) and a cascading mechanism. We develop an algorithm to automatically generate the layout of the circuit. We use a kinetic Monte Carlo algorithm to simulate circuit computations, and we analyze circuit complexity: our design scales linearly with formula size and has a logarithmic time complexity. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
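
    For readers unfamiliar with the simulation side, the kind of kinetic Monte Carlo loop used to exercise such circuits is short. Below is a generic Gillespie simulation of a chemical AND gate (output produced only when both inputs are present); the single reaction and rate constant are illustrative stand-ins, not the spider-circuit model.

    ```python
    # Gillespie (kinetic Monte Carlo) simulation of an AND gate: A + B -> C (sketch).
    import numpy as np

    def gillespie_and_gate(nA, nB, k=1.0, t_end=10.0, rng=None):
        rng = rng or np.random.default_rng(0)
        t, nC = 0.0, 0
        while t < t_end:
            a = k * nA * nB                 # propensity of the A + B -> C reaction
            if a == 0.0:
                break                       # one input absent: output never forms
            t += rng.exponential(1.0 / a)   # waiting time to the next event
            nA, nB, nC = nA - 1, nB - 1, nC + 1
        return nC

    for inputs in [(0, 0), (5, 0), (0, 5), (5, 5)]:
        print(inputs, "->", gillespie_and_gate(*inputs) > 0)   # AND truth table
    ```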

  23. Factors Influencing Medical Student Attrition and Their Implications in a Large Multi-Center Randomized Education Trial

    ERIC Educational Resources Information Center

    Kalet, A.; Ellaway, R. H.; Song, H. S.; Nick, M.; Sarpel, U.; Hopkins, M. A.; Hill, J.; Plass, J. L.; Pusic, M. V.

    2013-01-01

    Participant attrition may be a significant threat to the generalizability of the results of educational research studies if participants who do not persist in a study differ from those who do in ways that can affect the experimental outcomes. A multi-center trial of the efficacy of different computer-based instructional strategies gave us the…

  24. Avoiding Defect Nucleation during Equilibration in Molecular Dynamics Simulations with ReaxFF

    DTIC Science & Technology

    2015-04-01

    respectively. All simulations are performed using the LAMMPS computer code. [Fig. 1: a) initial and b) final configurations of the molecular centers...] Plimpton S. Fast parallel algorithms for short-range molecular dynamics. J Comput Phys. 1995;117:1–19. (Software available at http://lammps.sandia.gov)

  25. Heart CT scan

    MedlinePlus

    ... Computed tomography scan - heart; Calcium scoring; Multi-detector CT scan - heart; Electron beam computed tomography - heart; Agatston ... table that slides into the center of the CT scanner. You will lie on your back with ...

  26. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    ERIC Educational Resources Information Center

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  27. Parallel, stochastic measurement of molecular surface area.

    PubMed

    Juba, Derek; Varshney, Amitabh

    2008-08-01

    Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
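
    The core of such a stochastic estimator fits in a short sketch: sample points uniformly on each probe-inflated atomic sphere, discard the points buried inside neighboring spheres, and scale the exposed fraction by the sphere area. This is a generic Shrake-Rupley-style Monte Carlo sketch under assumed radii, not the authors' GPU implementation.

    ```python
    # Progressive Monte Carlo estimate of solvent-accessible surface area (sketch).
    import numpy as np

    def sasa_estimate(centers, radii, probe=1.4, n=1000, rng=None):
        rng = rng or np.random.default_rng(1)
        R = radii + probe                        # probe-expanded radii
        area = 0.0
        for i, (ci, Ri) in enumerate(zip(centers, R)):
            v = rng.standard_normal((n, 3))      # uniform directions on the sphere
            pts = ci + Ri * v / np.linalg.norm(v, axis=1, keepdims=True)
            exposed = np.ones(n, dtype=bool)
            for j, (cj, Rj) in enumerate(zip(centers, R)):
                if j != i:                       # buried if inside any other sphere
                    exposed &= np.linalg.norm(pts - cj, axis=1) >= Rj
            area += 4.0 * np.pi * Ri**2 * exposed.mean()
        return area   # each sphere is independent: the loop is trivially parallel

    # two overlapping "atoms" (positions/radii in angstroms, illustrative values)
    centers = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
    print(sasa_estimate(centers, np.array([1.7, 1.7])))
    ```

    Raising n tightens the estimate, which is exactly the progressive-refinement property the abstract describes.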

  28. Coordinating Center: Molecular and Cellular Findings of Screen-Detected Lesions | Division of Cancer Prevention

    Cancer.gov

    The Molecular and Cellular Characterization of Screen‐Detected Lesions ‐ Coordinating Center and Data Management Group will provide support for the participating studies responding to RFA CA14‐10. The coordinating center supports three main domains: network coordination, statistical support and computational analysis and protocol development and database support. Support for

  29. Assessment of Molecular Modeling & Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  30. Software Description for the O’Hare Runway Configuration Management System. Volume I. Technical Description,

    DTIC Science & Technology

    1982-10-01

    The O’Hare Runway Configuration Management System (CMS) is an interactive multi-user computer ...MITRE Washington’s Computer Center. Currently, CMS is housed in an IBM 4341 computer with the VM/SP operating system. CMS employs IBM’s Display...O’Hare, it will operate on a dedicated mini-computer which permits multi-tasking (that is, multiple users

  31. Molecular-Level Computational Investigation of Mechanical Transverse Behavior of p-Phenylene Terephthalamide (PPTA) Fibers

    DTIC Science & Technology

    2013-01-01

    fabricated today are based on polymer matrix composites containing Kevlar® KM2 reinforcements, the present work will deal with generic PPTA fibers. In...“Multi-length scale enriched continuum-level material model for Kevlar®-fiber reinforced polymer-matrix composites”, Journal of Materials...mechanical transverse behavior of p-phenylene terephthalamide (PPTA) fibers. Purpose – A series of all-atom molecular-level computational analyses is

  32. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like General Atomic and Molecular Electronic Structure (GAMESS), used for ab-initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark the GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  33. DNA-programmed dynamic assembly of quantum dots for molecular computation.

    PubMed

    He, Xuewen; Li, Zhi; Chen, Muzi; Ma, Nan

    2014-12-22

    Despite the widespread use of quantum dots (QDs) for biosensing and bioimaging, QD-based bio-interfaceable and reconfigurable molecular computing systems have not yet been realized. DNA-programmed dynamic assembly of multi-color QDs is presented for the construction of a new class of fluorescence resonance energy transfer (FRET)-based QD computing systems. A complete set of seven elementary logic gates (OR, AND, NOR, NAND, INH, XOR, XNOR) are realized using a series of binary and ternary QD complexes operated by strand displacement reactions. The integration of different logic gates into a half-adder circuit for molecular computation is also demonstrated. This strategy is quite versatile and straightforward for logical operations and would pave the way for QD-biocomputing-based intelligent molecular diagnostics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  34. The Analysis of Information Exchange Capability for Battlefield Networks Using M&S Techniques of the NetSPIN

    DTIC Science & Technology

    2013-06-01

    [Flattened table fragment listing NetSPIN model components of the ATCIS and their main functions: Terminal, which generates traffic; MFE (Multi-Function accessing Equipment), which transforms SST messages into TCP/IP packets; plus operation centers, battalion computer shelters (DMT, DLP), and corps/brigade command posts.]

  35. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology

    PubMed Central

    Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time predictions of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implemented a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804

  36. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology.

    PubMed

    Zao, John K; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time predictions of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implemented a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system.

  37. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  38. BetaCavityWeb: a webserver for molecular voids and channels

    PubMed Central

    Kim, Jae-Kwan; Cho, Youngsong; Lee, Mokwon; Laskowski, Roman A.; Ryu, Seong Eon; Sugihara, Kokichi; Kim, Deok-Soo

    2015-01-01

    Molecular cavities, which include voids and channels, are critical for molecular function. We present a webserver, BetaCavityWeb, which computes these cavities for a given molecular structure and a given spherical probe, and reports their geometrical properties: volume, boundary area, buried area, etc. The server's algorithms are based on the Voronoi diagram of atoms and its derivative construct: the beta-complex. The correctness of the computed result and computational efficiency are both mathematically guaranteed. BetaCavityWeb is freely accessible at the Voronoi Diagram Research Center (VDRC) (http://voronoi.hanyang.ac.kr/betacavityweb). PMID:25904629

  39. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems, for instance the scale of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  40. A DICOM-based 2nd generation Molecular Imaging Data Grid implementing the IHE XDS-i integration profile.

    PubMed

    Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K

    2012-07-01

    A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in the archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG, replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for the management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflows are likewise needed in preclinical imaging informatics systems, as in enterprise PACS applications. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves the accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. The productivity and efficiency of preclinical research for translational sciences investigators have been further streamlined for multi-center study data registration, management, and distribution.

  41. Core Clinical Data Elements for Cancer Genomic Repositories: A Multi-stakeholder Consensus.

    PubMed

    Conley, Robert B; Dickson, Dane; Zenklusen, Jean Claude; Al Naber, Jennifer; Messner, Donna A; Atasoy, Ajlan; Chaihorsky, Lena; Collyar, Deborah; Compton, Carolyn; Ferguson, Martin; Khozin, Sean; Klein, Roger D; Kotte, Sri; Kurzrock, Razelle; Lin, C Jimmy; Liu, Frank; Marino, Ingrid; McDonough, Robert; McNeal, Amy; Miller, Vincent; Schilsky, Richard L; Wang, Lisa I

    2017-11-16

    The Center for Medical Technology Policy and the Molecular Evidence Development Consortium gathered a diverse group of more than 50 stakeholders to develop consensus on a core set of data elements and values essential to understanding the clinical utility of molecularly targeted therapies in oncology. Copyright © 2017 Elsevier Inc. All rights reserved.

  42. NASA National Combustion Code Simulations

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony; Davoudzadeh, Farhad

    2001-01-01

    A systematic effort is in progress to further validate the National Combustion Code (NCC) that has been developed at NASA Glenn Research Center (GRC) for comprehensive modeling and simulation of aerospace combustion systems. The validation efforts include numerical simulation of the gas-phase combustor experiments conducted at the Center for Turbulence Research (CTR), Stanford University, followed by comparison and evaluation of the computed results against the experimental data. Presently, at GRC, a numerical model of the experimental gaseous combustor has been built to simulate the experiment. The constructed numerical geometry includes the flow development sections for the air annulus and fuel pipe, 24-channel air and fuel swirlers, hub, combustor, and tail pipe. Furthermore, a three-dimensional multi-block grid (1.6 million grid points, 3 levels of multi-grid) has been generated. Computational simulation of the gaseous combustor flow field operating on methane fuel has started. The computational domain includes the whole flow regime starting from the fuel pipe and the air annulus, through the 12 air and 12 fuel channels, into the combustion region and through the tail pipe.

  43. Exposure Science and the US EPA National Center for Computational Toxicology

    EPA Science Inventory

    The emerging field of computational toxicology applies mathematical and computer models and molecular biological and chemical approaches to explore both qualitative and quantitative relationships between sources of environmental pollutant exposure and adverse health outcomes. The...

  44. 77 FR 11139 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ...: Center for Scientific Review Special Emphasis Panel; ``Genetics and Epigenetics of Disease.'' Date: March... Scientific Review Special Emphasis Panel; Small Business: Cell, Computational, and Molecular Biology. Date...

  45. The BioIntelligence Framework: a new computational platform for biomedical knowledge computing.

    PubMed

    Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles; Mousses, Spyro

    2013-01-01

    Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information.
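
    A toy version of the hypergraph data model conveys why it suits multi-lateral relationships: one hyperedge can tie together arbitrarily many entities, where an ordinary binary graph edge relates only two. This minimal structure and all its names are invented for illustration; it is not the BioIntelligence Framework API.

    ```python
    # Toy hypergraph: hyperedges relate arbitrarily many nodes at once (sketch).
    from dataclasses import dataclass, field

    @dataclass
    class Hypergraph:
        nodes: set = field(default_factory=set)
        hyperedges: dict = field(default_factory=dict)  # name -> frozenset(nodes)

        def add_edge(self, name, members):
            self.nodes.update(members)
            self.hyperedges[name] = frozenset(members)

        def edges_containing(self, node):               # simplest possible "query"
            return [n for n, m in self.hyperedges.items() if node in m]

    g = Hypergraph()
    # one multi-lateral "fact" linking a patient, a variant, a drug, and a paper
    g.add_edge("observation1", {"patient42", "BRAF_V600E", "vemurafenib", "PMID123"})
    print(g.edges_containing("BRAF_V600E"))
    ```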

  46. Multi-party Semi-quantum Key Agreement with Delegating Quantum Computation

    NASA Astrophysics Data System (ADS)

    Liu, Wen-Jie; Chen, Zhen-Yu; Ji, Sai; Wang, Hai-Bin; Zhang, Jun

    2017-10-01

    A multi-party semi-quantum key agreement (SQKA) protocol based on the delegating quantum computation (DQC) model is proposed, taking Bell states as quantum resources. In the proposed protocol, the participants only need the ability to access the quantum channel and prepare single photons {|0〉, |1〉, |+〉, |-〉}, while the complicated quantum operations, such as unitary operations and Bell measurement, are delegated to the remote quantum center. Compared with previous quantum key agreement protocols, this client-server model is more feasible in the early era of quantum computers. In order to prevent attacks from outside eavesdroppers, inner participants and the quantum center, two single-photon sequences are randomly inserted into the Bell states: the first sequence is used to perform quantum channel detection, while the second is applied to disorder the positions of the message qubits, which guarantees the security of the protocol.

  47. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek-like holodeck, where holographic avatars could

  48. On-orbit free molecular flow aerodynamic characteristics of a proposed space operations center configuration

    NASA Technical Reports Server (NTRS)

    Romere, P. O.

    1982-01-01

    A proposed configuration for a Space Operations Center is presented in its eight stages of buildup. The on-orbit aerodynamic force and moment characteristics were calculated for each stage based upon free molecular flow theory. Calculation of the aerodynamic characteristics was accomplished through the use of an orbital aerodynamics computer program, and the computation method is described with respect to the free molecular theory used. The aerodynamic characteristics are presented in tabulated form for each buildup stage at angles of attack from 0 to 360 degrees and roll angles from -60 to +60 degrees. The reference altitude is 490 kilometers; however, the data should be applicable for altitudes from 490 kilometers down to approximately 185 kilometers.

  49. Accelerating MP2C dispersion corrections for dimers and molecular crystals

    NASA Astrophysics Data System (ADS)

    Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.

    2013-06-01

    The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010); doi:10.1021/ct9005882] substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.

  50. RNA Polymerase Structure, Function, Regulation, Dynamics, Fidelity, and Roles in GENE EXPRESSION | Center for Cancer Research

    Cancer.gov

    Multi-subunit RNA polymerases (RNAP) are ornate molecular machines that translocate on a DNA template as they generate a complementary RNA chain. RNAPs are highly conserved in evolution among eukarya, eubacteria, archaea, and some viruses. As such, multi-subunit RNAPs appear to be an irreplaceable advance in the evolution of complex life on earth. Because of their stepwise

  51. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The central idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that a center and surround whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve the enhancement of contour response, the connection of discontinuous contours, and the further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish a dynamic modulation process and extract contours of targets at multiple sizes. This work provides a series of biologically motivated computational visual models with high performance for contour detection from cluttered scenes in night vision images.
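
    The classical center-surround building block underneath such models can be sketched quickly with difference-of-Gaussians filtering plus surround inhibition. The scales and the normalization below are assumptions; the paper's multi-feature, fuzzy-facilitation, multi-scale machinery is substantially richer.

    ```python
    # Center-surround (DoG) contour saliency with non-CRF inhibition (sketch only).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround_response(img, sigma_c=1.0, k=1.6, eps=1e-6):
        center = gaussian_filter(img, sigma_c)
        surround = gaussian_filter(img, k * sigma_c)
        dog = center - surround                    # classical-RF edge response
        # non-classical surround: strong local texture inhibits the response
        inhibition = gaussian_filter(np.abs(dog), 4 * sigma_c)
        return np.abs(dog) / (inhibition + eps)    # texture-suppressed saliency

    # usage: saliency = center_surround_response(night_vision_frame.astype(float))
    ```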

  52. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostadin, Damevski

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  53. Structure, dynamics and stability of water/scCO2/mineral interfaces from ab initio molecular dynamics simulations

    DOE PAGES

    Lee, Mal-Soon; Peter McGrail, B.; Rousseau, Roger; ...

    2015-10-12

    Here, the interface between a solid and a complex multi-component liquid forms a unique reaction environment whose structure and composition can significantly deviate from either the bulk or the liquid phase and is poorly understood due to the innate difficulty of obtaining molecular-level information. Feldspar minerals, as typified by the Ca-end member anorthite, serve as prototypical model systems to assess the reactivity and ion mobility at solid/water-bearing supercritical fluid (WBSF) interfaces due to recent X-ray based measurements that provide information on water-film formation and cation vacancies at these surfaces. Using density functional theory based molecular dynamics, which allows the evaluation of reactivity and condensed phase dynamics on an equal footing, we report on the structure and dynamics of water nucleation and surface aggregation, carbonation and Ca mobilization under geologic carbon sequestration scenarios (T = 323 K and P = 90 bar). We find that water has a strong enthalpic preference for aggregation on a Ca-rich, O-terminated anorthite (001) surface, but entropy strongly hinders film formation at very low water concentrations. Carbonation reactions readily occur at electron-rich terminal oxygen sites adjacent to cation vacancies when in contact with supercritical CO2. Cation vacancies of this type can form readily in the presence of a water layer that allows for facile and enthalpically favorable Ca2+ extraction and solvation. Apart from providing unprecedented molecular-level detail of a complex three-component (mineral, water and scCO2) system, this work highlights the ability of modern AIMD methods to begin to qualitatively and quantitatively address structure and reactivity at solid-liquid interfaces of high chemical complexity. This work was supported by the US Department of Energy, Office of Fossil Energy (M.-S. L., B. P. M. and V.-A. G.) and the Office of Basic Energy Science, Division of Chemical Sciences, Geosciences and Biosciences (R.R.), and performed at the Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated for DOE by Battelle. Computational resources were provided by PNNL's Platform for Institutional Computing (PIC), the W. R. Wiley Environmental Molecular Science Laboratory (EMSL), a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at PNNL, and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory.

  54. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    PubMed

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

    Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.
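
    The payoff of interaction sorting comes from turning the force loop into contiguous vector arithmetic. The sketch below shows that shape in numpy terms (whose array operations dispatch to SIMD loops); it is a generic Lennard-Jones pair kernel on pre-gathered pairs, not the authors' IS implementation.

    ```python
    # Vectorized Lennard-Jones forces over pre-sorted interaction pairs (sketch).
    import numpy as np

    def lj_forces(dx, eps=1.0, sigma=1.0):
        """dx: (n_pairs, 3) array of r_i - r_j for gathered interaction pairs."""
        r2 = np.einsum("ij,ij->i", dx, dx)          # squared distances, no branches
        s6 = (sigma**2 / r2) ** 3
        coef = 24.0 * eps * (2.0 * s6 * s6 - s6) / r2
        return coef[:, None] * dx                   # per-pair force on atom i

    # usage: f_pairs = lj_forces(positions[i_idx] - positions[j_idx])
    ```

    Gathering interactions into contiguous arrays first is the sorting step; once the data are laid out this way, the arithmetic itself is branch-free and SIMD-friendly.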

  15. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  17. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials involving up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum is demonstrated to hold theoretically and is tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress is observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
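
    For reference, the virial stress that the Hardy estimate is reported to converge to has the standard textbook form (quoted here for orientation; it is not an expression taken from the paper itself):

    $$
    \boldsymbol{\sigma}^{\mathrm{virial}} = \frac{1}{V}\left(-\sum_i m_i\,\mathbf{v}_i \otimes \mathbf{v}_i \;+\; \frac{1}{2}\sum_i \sum_{j \neq i} \mathbf{r}_{ij} \otimes \mathbf{f}_{ij}\right)
    $$

    Under a central force decomposition, each pairwise contribution takes the form $\mathbf{f}_{ij} = \varphi_{ij}\,\hat{\mathbf{r}}_{ij}$ with scalar $\varphi_{ij}$, so every term $\mathbf{r}_{ij} \otimes \mathbf{f}_{ij}$ is symmetric; this is what makes a symmetric stress tensor recoverable even for multi-body potentials.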

  18. Multi-haem cytochromes in Shewanella oneidensis MR-1: structures, functions and opportunities

    PubMed Central

    Breuer, Marian; Rosso, Kevin M.; Blumberger, Jochen; Butt, Julea N.

    2015-01-01

    Multi-haem cytochromes are employed by a range of microorganisms to transport electrons over distances of up to tens of nanometres. Perhaps the most spectacular utilization of these proteins is in the reduction of extracellular solid substrates, including electrodes and insoluble mineral oxides of Fe(III) and Mn(III/IV), by species of Shewanella and Geobacter. However, multi-haem cytochromes are found in numerous and phylogenetically diverse prokaryotes where they participate in electron transfer and redox catalysis that contributes to biogeochemical cycling of N, S and Fe on the global scale. These properties of multi-haem cytochromes have attracted much interest and contributed to advances in bioenergy applications and bioremediation of contaminated soils. Looking forward, there are opportunities to engage multi-haem cytochromes for biological photovoltaic cells, microbial electrosynthesis and developing bespoke molecular devices. As a consequence, it is timely to review our present understanding of these proteins and we do this here with a focus on the multitude of functionally diverse multi-haem cytochromes in Shewanella oneidensis MR-1. We draw on findings from experimental and computational approaches which ideally complement each other in the study of these systems: computational methods can interpret experimentally determined properties in terms of molecular structure to cast light on the relation between structure and function. We show how this synergy has contributed to our understanding of multi-haem cytochromes and can be expected to continue to do so for greater insight into natural processes and their informed exploitation in biotechnologies. PMID:25411412

  19. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation when it is compared with equivalent full coarse-grained and full atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl's law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches in which a multi-scale reduction of degrees of freedom occurs.

  20. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE PAGES

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    2017-06-01

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation when it is compared with equivalent full coarse-grained and full atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl's law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches in which a multi-scale reduction of degrees of freedom occurs.
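
    As a reminder of the baseline being adapted, Amdahl's law in its standard form reads (the paper's adapted variant replaces these quantities with ones specific to adaptive resolution and is not reproduced here):

    $$
    S(n) = \frac{1}{(1-p) + p/n}
    $$

    where $p$ is the fraction of the work that benefits from the speedup and $n$ is the speedup factor of that fraction.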

  1. Exploring the role of pendant amines in transition metal complexes for the reduction of N2 to hydrazine and ammonia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.

    2017-03-01

    This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  2. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    PubMed

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-disciplinary field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained traction among scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular in computational toxicology for understanding interactions between biological systems and chemicals. In this paper, we review MD simulation methods, protocols for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations.

  3. Structure-biodegradability study and computer-automated prediction of aerobic biodegradation of chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klopman, G.; Tu, M.

    1997-09-01

    It is shown that a combination of two programs, MultiCASE and META, can help assess the biodegradability of industrial organic materials in the ecosystem. MultiCASE is an artificial intelligence computer program that has been trained to identify molecular substructures believed to cause or inhibit biodegradation, and META is an expert system trained to predict the aerobic biodegradation products of organic molecules. These two programs can be used to help evaluate the fate of disposed chemicals by estimating their biodegradability and the nature of their biodegradation products under conditions that may model the environment.

  4. The BioIntelligence Framework: a new computational platform for biomedical knowledge computing

    PubMed Central

    Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles

    2013-01-01

    Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information. PMID:22859646
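
    The advantage of hyperedges over ordinary graph edges is that a single edge can relate any number of entities at once. A minimal, generic illustration in Python (the class, its methods and the example identifiers are all invented for this sketch and are not the BioIntelligence Framework's actual API or query language):

    from collections import defaultdict

    class Hypergraph:
        """Minimal hypergraph: each hyperedge links any number of nodes."""
        def __init__(self):
            self.edges = {}                    # edge_id -> (label, set of nodes)
            self.incidence = defaultdict(set)  # node -> set of edge_ids

        def add_edge(self, edge_id, label, nodes):
            self.edges[edge_id] = (label, set(nodes))
            for n in nodes:
                self.incidence[n].add(edge_id)

        def edges_of(self, node):
            """All hyperedges touching a given node."""
            return [(e, *self.edges[e]) for e in self.incidence[node]]

    hg = Hypergraph()
    # One hyperedge relates a patient, a variant, a drug and a source at once,
    # a multi-lateral relationship that binary graph edges would need several
    # intermediate nodes or join tables to express.
    hg.add_edge("e1", "reported_response",
                {"patient42", "BRAF_V600E", "vemurafenib", "source_paper_1"})
    print(hg.edges_of("BRAF_V600E"))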

  5. 77 FR 57571 - Center For Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-18

    ...: Genes, Genomes, and Genetics Integrated Review Group; Genomics, Computational Biology and Technology... Reproductive Sciences Integrated Review Group; Cellular, Molecular and Integrative Reproduction Study Section...: Immunology Integrated Review Group; Cellular and Molecular Immunology--B Study Section.

  6. Multi-field C-13 NMR Relaxation Study of the Tripeptide Glycine-Proline-Glycine-NH2

    NASA Astrophysics Data System (ADS)

    Shibata, John; Forrester, Mary

    2010-03-01

    T1 and T2 C-13 NMR relaxation measurements were performed on the tripeptide Gly-Pro-Gly-NH2 on 300 MHz, 500 MHz, and 800 MHz NMR instruments (1). T1 and T2 data at different field strengths were analyzed to reveal the internal dynamics of this tripeptide. The results are compared to the classification scheme of rigidity by Anishetty, et al. (2). The dynamics of the tripeptide at different carbons in the molecule probe the site-specificity of the motions. We compare the dynamics revealed at the glycines with the dynamics in the proline ring. These motions are also being studied by molecular dynamics using the molecular modeling program Tinker (3). (1) Measurements at 500 MHz and 800 MHz were performed at the Alabama High Field NMR Center, University of Alabama at Huntsville, Huntsville, AL. (2) Anishetty, S., Pennathur, G., Anishetty, R. BMC Structural Biology 2:9 (2002). http://www.biomedcentral.com/1472-6807/2/9. (3) Dudek, M. J., Ramnarayan, K., Ponder, J. W. J. Comput. Chem. 19, 548 (1996). http://dasher.wustl.edu/tinker.

  7. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE PAGES

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...

    2017-08-29

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvements of orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.

  8. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvements of orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
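
    For context, the classical baseline that such topology-aware algorithms improve upon is generic progressive filling, which touches every link on every iteration and therefore scales poorly. A minimal Python sketch of the generic method (illustrative only; the paper's fat-tree-specific algorithms are not reproduced here):

    def max_min_rates(flows, capacity):
        """Progressive filling: raise all unfrozen flows' rates equally until
        some link saturates; freeze flows crossing the saturated link; repeat.
        flows: dict flow_id -> set of link_ids the flow traverses
        capacity: dict link_id -> link capacity
        """
        rate = {f: 0.0 for f in flows}
        residual = dict(capacity)
        active = set(flows)
        while active:
            # Bottleneck link: least residual capacity per active flow on it.
            link_load = {l: sum(1 for f in active if l in flows[f]) for l in residual}
            bottleneck, inc = min(
                ((l, residual[l] / n) for l, n in link_load.items() if n > 0),
                key=lambda t: t[1],
            )
            for f in active:
                rate[f] += inc
            for l in residual:
                residual[l] -= inc * link_load.get(l, 0)
            # Freeze every flow that crosses the now-saturated bottleneck.
            active = {f for f in active if bottleneck not in flows[f]}
        return rate

    # Two flows share link "a"; one also crosses the thinner link "b".
    # Max-min fair result: f2 is capped at 4 by "b", f1 gets the remaining 6 on "a".
    print(max_min_rates({"f1": {"a"}, "f2": {"a", "b"}}, {"a": 10.0, "b": 4.0}))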

  9. Performance of quantum Monte Carlo for calculating molecular bond lengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au

    2016-03-28

    This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.

  10. Computer Assisted Multi-Center Creation of Medical Knowledge Bases

    PubMed Central

    Giuse, Nunzia Bettinsoli; Giuse, Dario A.; Miller, Randolph A.

    1988-01-01

    Computer programs which support different aspects of medical care have been developed in recent years. Their capabilities range from diagnosis to medical imaging, and include hospital management systems and therapy prescription. In spite of their diversity these systems have one commonality: their reliance on a large body of medical knowledge in computer-readable form. This knowledge enables such programs to draw inferences, validate hypotheses, and in general to perform their intended task. As has been clear to developers of such systems, however, the creation and maintenance of medical knowledge bases are very expensive. Practical and economical difficulties encountered during this long-term process have discouraged most attempts. This paper discusses knowledge base creation and maintenance, with special emphasis on medical applications. We first describe the methods currently used and their limitations. We then present our recent work on developing tools and methodologies which will assist in the process of creating a medical knowledge base. We focus, in particular, on the possibility of multi-center creation of the knowledge base.

  11. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing technology based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency of different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.

  12. Simulating electron wave dynamics in graphene superlattices exploiting parallel processing advantages

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel

    2018-01-01

    This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms, and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework, which provides a portable, vendor-agnostic, high-throughput solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios of up to 75x when compared to multi-thread, multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.
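
    The governing equation being time-stepped here is the two-dimensional massless Dirac equation, whose standard form is (quoted for orientation; the superlattice-specific microscopic potential enters through $V$):

    $$
    i\hbar\,\frac{\partial \psi}{\partial t} = \left[v_F\,\boldsymbol{\sigma}\cdot\hat{\mathbf{p}} + V(x,y)\right]\psi, \qquad \hat{\mathbf{p}} = -i\hbar\nabla,
    $$

    where $\psi$ is a two-component spinor, $v_F$ the Fermi velocity, and $\boldsymbol{\sigma} = (\sigma_x, \sigma_y)$ the Pauli matrices.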

  13. Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology.

    PubMed

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role in accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, a detection depth limitation exists in optical molecular imaging methods, and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery.

  14. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role in accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, a detection depth limitation exists in optical molecular imaging methods, and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  15. SCELib3.0: The new revision of SCELib, the parallel computational library of molecular properties in the Single Center Approach

    NASA Astrophysics Data System (ADS)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalogue identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which adds features aimed at significantly enhancing its capability to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated, the code has been ported to platforms based on accelerating coprocessors such as NVIDIA GPGPUs, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system.
    Program summary
    Program title: SCELib3.0
    Catalogue identifier: ADMG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2 018 862
    No. of bytes in distributed program, including test data, etc.: 4 955 014
    Distribution format: tar.gz
    Programming language: C
    Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x
    Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors
    Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES)
    Has the code been vectorized or parallelized?: Yes, 1 to 32 (CPU or GPU) used
    RAM: Up to 32 GB depending on the molecular system and runtime parameters
    Classification: 16.5
    Catalogue identifier of previous version: ADMG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51
    External routines: CUDA libraries (SDK V2.x)
    Does the new version supersede the previous version?: Yes
    Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design.
    Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTOs), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ,φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities.
    Reasons for new version: The present release of SCELib allows the study of larger molecular systems than the previous versions by means of theoretical and technological advances, with the first implementation of the code on a many-core computing system.
    Summary of revisions: The major features added with respect to SCELib Version 2.0 are: molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (for Na-La and Hf-Bi elements) [1] can now be single-center-expanded; this addition required modifications of (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, and (v) the vst code. The classes of platforms supported now include two more architectures based on accelerated coprocessors: the NVIDIA G-Series GPGPU and the ClearSpeed e720 (ClearSpeed version experimental; initial preliminary porting of the sphint() function, not for production runs; see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs (sphint code) has been implemented in the new code. The Ih symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure. The orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accord with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian 03 [2]), which is used to generate the input multi-centre molecular wavefunctions (z-axis perpendicular to the symmetry plane); the Abelian subgroup for the Cs point group has been changed from C1 to Cs. Atomic basis functions including g-type GTOs can now be single-center-expanded.
    Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into available RAM. In this case a feature of the program is to memory-map a disk file in order to efficiently access the memory data through a disk device. The parallel GPGPU implementation limits the number of CPU threads to the number of GPU cores present.
    Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r,θ,φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must further be taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable.
    References: [1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299. [2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.

  16. Multi-heme Cytochromes in Shewanella oneidensis MR-1: Structures, functions and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuer, Marian; Rosso, Kevin M.; Blumberger, Jochen

    Multi-heme cytochromes are employed by a range of microorganisms to transport electrons over distances of up to tens of nanometers. Perhaps the most spectacular utilization of these proteins is in the reduction of extracellular solid substrates, including electrodes and insoluble mineral oxides of Fe(III) and Mn(III/IV), by species of Shewanella and Geobacter. However, multi-heme cytochromes are found in numerous and phylogenetically diverse prokaryotes where they participate in electron transfer and redox catalysis that contributes to biogeochemical cycling of N, S and Fe on the global scale. These properties of multi-heme cytochromes have attracted much interest and contributed to advances in bioenergy applications and bioremediation of contaminated soils. Looking forward, there are opportunities to engage multi-heme cytochromes for biological photovoltaic cells, microbial electrosynthesis and developing bespoke molecular devices. As a consequence, it is timely to review our present understanding of these proteins and we do this here with a focus on the multitude of functionally diverse multi-heme cytochromes in Shewanella oneidensis MR-1. We draw on findings from experimental and computational approaches which ideally complement each other in the study of these systems: computational methods can interpret experimentally determined properties in terms of molecular structure to cast light on the relation between structure and function. We show how this synergy has contributed to our understanding of multi-heme cytochromes and can be expected to continue to do so for greater insight into natural processes and their informed exploitation in biotechnologies.

  17. ExpoCast: Exposure Science for Prioritization and Toxicity Testing (T)

    EPA Science Inventory

    The US EPA National Center for Computational Toxicology (NCCT) has a mission to integrate modern computing and information technology with molecular biology to improve Agency prioritization of data requirements and risk assessment of chemicals. Recognizing the critical need for ...

  18. Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar (registered trademark)-Fiber-Reinforced Polymer-Matrix Composites

    DTIC Science & Technology

    2013-03-01

    of coarser-scale materials and structures containing Kevlar fibers (e.g., yarns, fabrics, plies, lamina, and laminates). Journal of Materials... Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar-Fiber-Reinforced Polymer-Matrix Composites M. Grujicic, B. Pandurangan, J.S... extensive set of molecular-level computational analyses regarding the role of various microstructural/morphological defects on the Kevlar fiber

  19. MOO: Using a Computer Gaming Environment to Teach about Community Arts

    ERIC Educational Resources Information Center

    Garber, Elizabeth

    2004-01-01

    In this paper, the author discusses the use of an interactive computer technology, "MOO" (Multi-user domain, Object-Oriented), in her art education classes for preservice teachers. A MOO is a text-based environment wherein interactivity is centered on text exchanges made between users based on problems or other materials created by teachers. The…

  20. Symbolic programming language in molecular multicenter integral problem

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Bouferguene, Ahmed

    It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, the computation of these integrals becomes one of the most laborious and time-consuming steps in molecular systems calculation. Improvement of the computational methods for molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations for improving the convergence of highly oscillatory integrals. These transformations form the basis of methods for solving various problems that were otherwise unsolvable, and they have many further applications. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in turn should satisfy certain limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
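
    The prerequisite described above, that the oscillatory integrand satisfies a linear differential equation, can be checked symbolically in any computer algebra system. A minimal sketch in Python/SymPy standing in for the MAPLE workflow (the spherical Bessel function serves as a representative oscillatory kernel for illustration; it is not the B function of the paper):

    from sympy import symbols, jn, diff, expand_func, simplify

    x = symbols('x', positive=True)
    n = 1
    # Spherical Bessel function j_1(x) = sin(x)/x**2 - cos(x)/x,
    # a typical oscillatory factor in multicenter integrands.
    y = expand_func(jn(n, x))

    # Verify the linear ODE  x**2*y'' + 2*x*y' + (x**2 - n*(n+1))*y = 0
    residual = x**2 * diff(y, x, 2) + 2 * x * diff(y, x) + (x**2 - n * (n + 1)) * y
    print(simplify(residual))  # prints 0, confirming the differential equation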

  1. [Computerized monitoring system in the operating center with UNIX and X-window].

    PubMed

    Tanaka, Y; Hashimoto, S; Chihara, E; Kinoshita, T; Hirose, M; Nakagawa, M; Murakami, T

    1992-01-01

    We previously reported a fully automated data-logging system for the operating center. We have now revised the system using a highly integrated operating system, UNIX, instead of OS/9. With this multi-task, multi-window (X-window) system, we can monitor all 12 rooms in the operating center at once. The system in the operating center consists of 2 computers, a SONY NEWS1450 (UNIX workstation) and a Sord M223 (CP/M, data logger). On the bitmapped display of the workstation, using X-window, the data of all the operating rooms can be visualized. Furthermore, 2 other minicomputers (a Fujitsu A50 in the conference room and an A60 in the ICU) and a workstation (a Sun3-80 in the ICU) were connected via Ethernet. With the remote login function (NFS), we can easily obtain data during an operation from outside the operating center. This system works automatically and needs no routine maintenance.

  2. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu

    2014-08-07

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials involving up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum is demonstrated to hold theoretically and is tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress is observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.

  3. From Computational Photobiology to the Design of Vibrationally Coherent Molecular Devices and Motors

    NASA Astrophysics Data System (ADS)

    Olivucci, Massimo

    2014-03-01

    In the past, multi-configurational quantum chemical computations coupled with molecular mechanics force fields have been employed to investigate spectroscopic, thermal and photochemical properties of visual pigments. Here we show how the same computational technology can nowadays be used to design, characterize and, ultimately, prepare light-driven molecular switches that mimic the photophysics of the visual pigment bovine rhodopsin (Rh). When embedded in the protein cavity, the chromophore of Rh undergoes an ultrafast and coherent photoisomerization. In order to design a synthetic chromophore displaying similar properties in common solvents, we recently focused on indanylidene-pyrroline (NAIP) systems. We found that these systems display light-induced ground-state coherent vibrational motion similar to the one detected in Rh. Semi-classical trajectories provide a mechanistic description of the structural changes associated with the observed coherent motion, which is shown to be ultimately due to periodic changes in the π-conjugation.

  4. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  5. Overview of NASA MSFC IEC Multi-CAD Collaboration Capability

    NASA Technical Reports Server (NTRS)

    Moushon, Brian; McDuffee, Patrick

    2005-01-01

    This viewgraph presentation provides an overview of a Design and Data Management System (DDMS) for Computer Aided Design (CAD) collaboration in order to support the Integrated Engineering Capability (IEC) at Marshall Space Flight Center (MSFC).

  6. Library Automation.

    ERIC Educational Resources Information Center

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  7. Design of Control Plane Architecture Based on Cloud Platform and Experimental Network Demonstration for Multi-domain SDON

    NASA Astrophysics Data System (ADS)

    Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan

    2016-02-01

    With the features of network virtualization and resource programming, Software Defined Optical Network (SDON) is considered the future development trend of optical networks, provisioning more flexible, efficient and open network functions and supporting intraconnection and interconnection of data centers. Meanwhile, cloud platforms can provide powerful computing, storage and management capabilities. In this paper, through the coordination of SDON and a cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, composed of data centers with a database (DB), a path computation element (PCE), an SDON controller and an orchestrator. In addition, the structures of the multi-domain SDON orchestrator and an OpenFlow-enabled optical node are proposed to realize a combined centralized and distributed management and control platform. Finally, functional verification and demonstration are performed on our optical experiment network.

  8. Atomic Gaussian type orbitals and their Fourier transforms via the Rayleigh expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yükçü, Niyazi

    Gaussian type orbitals (GTOs), which are one type of exponential type orbitals (ETOs), are commonly used as basis functions in multi-center atomic and molecular integrals to better understand the physical and chemical properties of matter. In the Fourier transform method (FTM), the basis functions themselves are not convenient for mathematical manipulation, but their Fourier transforms are easier to work with. In this work, with the help of the FTM, the Rayleigh expansion and some properties of unnormalized GTOs, we present new mathematical results for the Fourier transform of GTOs in terms of Laguerre polynomials, hypergeometric and Whittaker functions. Physical and analytical properties of GTOs are discussed and some numerical results are given in a table. Finally, we compare our mathematical results with other known literature results by using a computer program, and details of the evaluation are presented.
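
    The underlying convenience being exploited is the standard result that a Gaussian transforms into a Gaussian; for an s-type GTO (a textbook identity, independent of this paper's Laguerre/Whittaker results):

    $$
    \int e^{-\alpha r^2}\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d^3r \;=\; \left(\frac{\pi}{\alpha}\right)^{3/2} e^{-k^2/4\alpha}
    $$

    Higher angular momenta introduce polynomial prefactors, which is where the Rayleigh expansion of the plane wave and the special functions named above enter.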

  9. Challenges and Development of a Multi-Scale Computational Model for Photosystem I Decoupled Energy Conversion

    DTIC Science & Technology

    2013-06-01

    Applications of Molecular Modeling to Challenges in Clean Energy; Fitzgerald, G., et al.; ACS Symposium Series; American Chemical Society: Washington, DC... to 178 In Applications of Molecular Modeling to Challenges in Clean Energy; Fitzgerald, G., et al.; ACS Symposium Series; American Chemical Society: Washington, DC, 2013. develop models of spectral properties and energy transfer kinetics (20–22). Ivashin et al. optimized select ligands (α

  10. Does Patient Time Spent Viewing Computer-Tailored Colorectal Cancer Screening Materials Predict Patient-Reported Discussion of Screening with Providers?

    ERIC Educational Resources Information Center

    Sanders, Mechelle; Fiscella, Kevin; Veazie, Peter; Dolan, James G.; Jerant, Anthony

    2016-01-01

    The main aim is to examine whether patients' viewing time on information about colorectal cancer (CRC) screening before a primary care physician (PCP) visit is associated with discussion of screening options during the visit. We analyzed data from a multi-center randomized controlled trial of a tailored interactive multimedia computer program…

  11. Assisting People with Developmental Disabilities to Improve Computer Pointing Efficiency through Multiple Mice and Automatic Pointing Assistive Programs

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2011-01-01

    This study combines multi-mice technology (people with disabilities can use standard mice, instead of specialized alternative computer input devices, to achieve complete mouse operation) with an assistive pointing function (i.e. cursor-capturing, which enables the user to move the cursor to the target center automatically), to assess whether two…

  12. Cloudbursting - Solving the 3-body problem

    NASA Astrophysics Data System (ADS)

    Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.

    2014-12-01

    Many science projects in the future will be accomplished through collaboration among two or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be applied more efficiently and technical work could be conducted more effectively, with less time spent moving data or waiting for computing resources to free up. Based on the work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in assessing the feasibility of, and identifying the obstacles, both technical and managerial, to performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center, and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.

  13. Multiscale investigation of chemical interference in proteins

    NASA Astrophysics Data System (ADS)

    Samiotakis, Antonios; Homouz, Dirar; Cheung, Margaret S.

    2010-05-01

    We developed a multiscale approach (MultiSCAAL) that integrates the potential of mean force obtained from all-atomistic molecular dynamics simulations with a knowledge-based energy function for coarse-grained molecular simulations to better explore the energy landscape of a small protein under chemical interference such as chemical denaturation. An excessive amount of water molecules in all-atomistic molecular dynamics simulations often negatively impacts the sampling efficiency of some advanced sampling techniques, such as the replica exchange method, and makes the investigation of chemical interference on protein dynamics difficult. Thus, there is a need to develop an effective strategy that focuses on sampling structural changes in protein conformations rather than solvent molecule fluctuations. In this work, we address this issue by devising a multiscale simulation scheme (MultiSCAAL) that bridges the gap between all-atomistic molecular dynamics simulation and coarse-grained molecular simulation. The two key features of this scheme are the Boltzmann inversion and a protein atomistic reconstruction method we previously developed (SCAAL). Using MultiSCAAL, we were able to enhance the sampling efficiency of proteins solvated by explicit water molecules. Our method has been tested on the folding energy landscape of the small protein Trp-cage with explicit solvent under 8M urea using both all-atomistic replica exchange molecular dynamics and MultiSCAAL. We compared computational analyses of ensemble conformations of Trp-cage with its available experimental NOE distances. The analysis demonstrated that conformations explored by MultiSCAAL better agree with the ones probed in the experiments because it can effectively capture the changes in side-chain orientations that can flip out of the hydrophobic pocket in the presence of urea and water molecules. In this regard, MultiSCAAL is a promising and effective sampling scheme for investigating chemical interference, which presents a great challenge when modeling protein interactions in vivo.

  14. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  15. Geology, structure, and statistics of multi-ring basins on Mars

    NASA Technical Reports Server (NTRS)

    Schultz, Richard A.; Frey, Herbert V.

    1990-01-01

    Available data on Martian multi-ring basins were compiled and evaluated using the new 1:15 million scale geologic maps of Mars and revised global topography as base maps. Published center coordinates and ring diameters of Martian basins were plotted by computer and superimposed onto the base maps. In many cases basin centers or ring diameters or both had to be adjusted to achieve a better fit to the revised maps. It was also found that additional basins can explain subcircular topographic lows as well as map patterns of old Noachian materials, volcanic plains units, and channels in the Tharsis region.

  16. Autonomous aircraft initiative study

    NASA Technical Reports Server (NTRS)

    Hewett, Marle D.

    1991-01-01

    The results of a consulting effort to aid NASA Ames-Dryden in defining a new initiative in aircraft automation are described. The initiative described is a multi-year, multi-center technology development and flight demonstration program. The initiative features the further development of technologies in aircraft automation already being pursued at multiple NASA centers and Department of Defense (DoD) research and development (R&D) facilities. The proposed initiative involves the development of technologies in intelligent systems, guidance, control, software development, airborne computing, navigation, communications, sensors, unmanned vehicles, and air traffic control. It involves the integration and implementation of these technologies to the extent necessary to conduct selected and incremental flight demonstrations.

  17. Senior Computational Scientist | Center for Cancer Research

    Cancer.gov

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP),

  18. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    NASA Astrophysics Data System (ADS)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach in which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded force calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in part or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
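
    For orientation, the serial kernel being parallelised, building a Verlet neighbour list with a skin radius so the list can be reused for several time steps, looks roughly as follows. This is an illustrative Python sketch of the general algorithm, not the paper's Fortran 77 implementation; a master-slave scheme would split the outer particle loop across slave processes:

    import numpy as np

    def build_neighbour_list(pos, box, r_cut, skin=0.3):
        """O(N^2) Verlet list: all pairs within r_cut + skin under periodic
        boundaries. The skin lets the list be reused until particles have
        moved roughly skin/2, after which it must be rebuilt."""
        n = len(pos)
        r_list = r_cut + skin
        pairs = []
        for i in range(n - 1):
            d = pos[i + 1:] - pos[i]
            d -= box * np.round(d / box)  # minimum-image convention
            close = np.nonzero((d * d).sum(axis=1) < r_list**2)[0]
            pairs.extend((i, i + 1 + j) for j in close)
        return pairs

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, size=(100, 3))  # toy configuration
    print(len(build_neighbour_list(pos, box=10.0, r_cut=2.5)))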

  19. Acquisition of Ice-Tethered Profilers with Velocity (ITP-V) Instruments for Future Arctic Studies

    DTIC Science & Technology

    2016-11-15

    instrument that measures sea water temperature and salinity versus depth, the ITP-V adds a multi-axis acoustic-travel-time current meter and... housing capped by an ultra-high-molecular-weight polyethylene dome. The electronics case sits within a foam body designed to provide buoyancy for... then transmits them by satellite to a logger computer at WHOI. The ITP-V instruments add a multi-axis acoustic-travel-time current meter and

  20. Ab initio molecular simulations with numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias

    2009-11-01

    We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLapack-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.

  1. Addressing the challenges of standalone multi-core simulations in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-07-01

    Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single- or multi-processor, multi-core machines, with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little has been written on the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphical processor units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy-formation-energy calculations using our own parallelized standalone code, called Verlet-Stormer velocity (VSV), operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded-atom potential. A link to the code is also given.
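
    The three MPI calls named above have direct counterparts in, for example, mpi4py, which makes the basic pattern easy to sketch. The following is a minimal, illustrative rank-parallel pairwise sum; the cyclic loop split and the placeholder pair term are inventions for this sketch, not the VSV code's actual decomposition or potential:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()   # analogue of MPI_Comm_size
    rank = comm.Get_rank()   # analogue of MPI_Comm_rank

    n = 1000  # toy stand-in for the number of particles
    # Cyclic split of the outer loop of a pairwise sum across ranks.
    local = 0.0
    for i in range(rank, n, size):
        for j in range(i + 1, n):
            local += 1.0 / (1 + (i - j) ** 2)  # placeholder pair term, not a real potential

    total = comm.reduce(local, op=MPI.SUM, root=0)  # analogue of MPI_Reduce
    if rank == 0:
        print("pairwise sum:", total)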

  2. Path integral molecular dynamics for exact quantum statistics of multi-electronic-state systems.

    PubMed

    Liu, Xinzijian; Liu, Jian

    2018-03-14

    An exact approach to compute physical properties for general multi-electronic-state (MES) systems in thermal equilibrium is presented. The approach is extended from our recent progress on path integral molecular dynamics (PIMD) for quantum statistical mechanics when a single potential energy surface is involved [Liu et al., J. Chem. Phys. 145, 024103 (2016); Zhang et al., J. Chem. Phys. 147, 034109 (2017)]. We first define an effective potential function that is numerically favorable for MES-PIMD and then derive corresponding estimators in MES-PIMD for evaluating various physical properties. Its application to several representative one-dimensional and multi-dimensional models demonstrates that MES-PIMD in principle offers a practical tool, in either the diabatic or the adiabatic representation, for studying exact quantum statistics of complex/large MES systems when the Born-Oppenheimer, Condon, and harmonic bath approximations break down.

  3. Path integral molecular dynamics for exact quantum statistics of multi-electronic-state systems

    NASA Astrophysics Data System (ADS)

    Liu, Xinzijian; Liu, Jian

    2018-03-01

    An exact approach to compute physical properties for general multi-electronic-state (MES) systems in thermal equilibrium is presented. The approach is extended from our recent progress on path integral molecular dynamics (PIMD) for quantum statistical mechanics when a single potential energy surface is involved [Liu et al., J. Chem. Phys. 145, 024103 (2016); Zhang et al., J. Chem. Phys. 147, 034109 (2017)]. We first define an effective potential function that is numerically favorable for MES-PIMD and then derive corresponding estimators in MES-PIMD for evaluating various physical properties. Its application to several representative one-dimensional and multi-dimensional models demonstrates that MES-PIMD in principle offers a practical tool, in either the diabatic or the adiabatic representation, for studying exact quantum statistics of complex/large MES systems when the Born-Oppenheimer, Condon, and harmonic bath approximations break down.
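
    For context, the single-surface path integral baseline that the MES approach builds on maps each quantum particle onto a cyclic chain of P beads. The C sketch below is a generic textbook construction, not the authors' MES-PIMD code: it evaluates the standard ring-polymer potential for a one-dimensional particle in an assumed harmonic well (atomic units, hbar = 1).

        #include <math.h>
        #include <stdio.h>

        #define P 32   /* number of imaginary-time beads */

        /* Hypothetical external potential: a 1-D harmonic well. */
        static double V(double x) { return 0.5 * x * x; }

        /* Standard single-surface ring-polymer potential:
           U = sum_k [ 0.5*m*wP^2*(x_k - x_{k+1})^2 + V(x_k)/P ],
           with wP = P/beta and cyclic bead indexing. */
        static double ring_potential(const double x[P], double beta, double m) {
            double wP = P / beta;
            double u = 0.0;
            for (int k = 0; k < P; k++) {
                double dx = x[k] - x[(k + 1) % P];
                u += 0.5 * m * wP * wP * dx * dx + V(x[k]) / P;
            }
            return u;
        }

        int main(void) {
            double x[P];
            for (int k = 0; k < P; k++)   /* a sample bead configuration */
                x[k] = 0.1 * sin(2.0 * 3.14159265358979 * k / P);
            printf("U = %f\n", ring_potential(x, 8.0, 1.0));
            return 0;
        }

    Sampling this potential with thermostatted molecular dynamics yields single-surface quantum statistics; the MES-PIMD work replaces V with an effective multi-state potential and derives matching estimators.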

  4. Multiscale modeling and computation of optically manipulated nano devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Gang, E-mail: baog@zju.edu.cn; Liu, Di, E-mail: richardl@math.msu.edu; Luo, Songting, E-mail: luos@iastate.edu

    2016-07-01

    We present a multiscale modeling and computational scheme for optical-mechanical responses of nanostructures. The multi-physical nature of the problem is a result of the interaction between the electromagnetic (EM) field, the molecular motion, and the electronic excitation. To balance accuracy and complexity, we adopt the semi-classical approach in which the EM field is described classically by the Maxwell equations, while the charged particles follow the Schrödinger equations quantum mechanically. To overcome the numerical challenge of solving the high-dimensional multi-component many-body Schrödinger equations, we further simplify the model with Ehrenfest molecular dynamics to determine the motion of the nuclei, and use Time-Dependent Current Density Functional Theory (TD-CDFT) to calculate the excitation of the electrons. This leads to a system of coupled equations that computes the electromagnetic field, the nuclear positions, and the electronic current and charge densities simultaneously. In the regime of linear responses, the resonant frequencies initiating the out-of-equilibrium optical-mechanical responses can be formulated as an eigenvalue problem. A self-consistent multiscale method is designed to deal with the well separated space scales. The isomerization of azobenzene is presented as a numerical example.

  5. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to the application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as a pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help identify future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state of the art in applications of advanced computational technology to the analysis, design, prototyping and operation of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; and writing state-of-the-art monographs and NASA special publications on timely topics.

  6. A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers

    NASA Technical Reports Server (NTRS)

    Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)

    1997-01-01

    The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. The multi-level parallelism implementation itself introduces no changes to the numerical results, so the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent of perfect load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on these platforms with a variety of realistic problems will be included as this ongoing study progresses.

  7. Molecular characteristics of Multidrug Resistant Acinetobacter baumannii Isolates from US soldiers from Iraq at the National Naval Medical Center

    USDA-ARS?s Scientific Manuscript database

    Background: Infections with A. baumannii-calcoaceticus complex (ABC) have complicated the care of combat casualties. The majority of A. baumannii isolates cultured from injured personnel from OIF and OEF have been multidrug resistant (MDR). Therefore, the genes causing MDR and genotypes related to ...

  8. Symposium on the Interface: Computing Science and Statistics (20th). Theme: Computationally Intensive Methods in Statistics Held in Reston, Virginia on April 20-23, 1988

    DTIC Science & Technology

    1988-08-20

    William A. Link, Patuxent Wildlife Research Center. "Increasing Reliability of Multiversion Fault-Tolerant Software Design by Modularization," Junryo Miyashita, Department of Computer Science, California State University at San Bernardino. ... They shall be referred to as "multiversion fault-tolerant software designs". One problem of developing multiple versions of a program is the high cost

  9. An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Watson, Willie R. (Technical Monitor); Tam, Christopher

    2004-01-01

    This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.

  10. Data Serving Climate Simulation Science at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2011-01-01

    The NASA Center for Climate Simulation (NCCS) provides high performance computational resources, a multi-petabyte archive, and data services in support of climate simulation research and other NASA-sponsored science. This talk describes the NCCS's data-centric architecture and processing, which are evolving in anticipation of researchers' growing requirements for higher resolution simulations and increased data sharing among NCCS users and the external science community.

  11. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data. PMID:22163811

  12. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
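
    A monitor of this kind depends on instrumentation that collects timing data around calls of interest. The C sketch below is a minimal generic illustration of that idea, not the authors' RTM (which uses library instrumentation plus hardware performance counters): a wrapper records per-call wall-clock statistics with the POSIX clock_gettime() routine, and work() is a hypothetical stand-in for a monitored library function.

        #include <stdio.h>
        #include <time.h>

        /* Accumulated statistics that a monitor could inspect at run time. */
        static long long n_calls = 0;
        static double total_ns = 0.0;

        static double now_ns(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (double)ts.tv_sec * 1e9 + (double)ts.tv_nsec;
        }

        /* Hypothetical workload standing in for an instrumented library call. */
        static void work(void) {
            volatile double x = 0.0;
            for (int i = 0; i < 1000000; i++) x += i * 0.5;
        }

        /* Instrumented wrapper: time the real call and update the counters. */
        static void work_instrumented(void) {
            double t0 = now_ns();
            work();
            total_ns += now_ns() - t0;
            n_calls++;
        }

        int main(void) {
            for (int i = 0; i < 10; i++)
                work_instrumented();
            printf("calls=%lld, mean=%.0f ns\n", n_calls, total_ns / n_calls);
            return 0;
        }

    A real run-time monitor would feed such measurements, together with hardware counter readings, into the analysis stage that decides how to reconfigure the computing resources.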

  13. Comparison of the Prognostic Utility of the Diverse Molecular Data among lncRNA, DNA Methylation, microRNA, and mRNA across Five Human Cancers

    PubMed Central

    Xu, Li; Fengji, Liang; Changning, Liu; Liangcai, Zhang; Yinghui, Li; Yu, Li; Shanguang, Chen; Jianghui, Xiong

    2015-01-01

    Introduction: Advances in high-throughput technologies have generated diverse informative molecular markers for cancer outcome prediction. Long non-coding RNA (lncRNA) and DNA methylation, as new classes of promising markers, are emerging as key molecules in human cancers; however, the prognostic utility of such diverse molecular data remains to be explored. Materials and Methods: We proposed a computational pipeline (IDFO) to predict patient survival by identifying prognosis-related biomarkers using multi-type molecular data (mRNA, microRNA, DNA methylation, and lncRNA) from 3198 samples of five cancer types. We assessed the predictive performance of both single molecular data and integrated multi-type molecular data in patient survival stratification, and compared their relative importance in each type of cancer. Survival analysis using multivariate Cox regression was performed to investigate the impact of the IDFO-identified markers and traditional variables on clinical outcome. Results: Using the IDFO approach, we obtained good predictive performance of the molecular datasets (bootstrap accuracy: 0.71–0.97) in five cancer types. Impressively, lncRNA was identified as the best prognostic predictor in the validated cohorts of four cancer types, followed by DNA methylation, mRNA, and then microRNA. We found that incorporating multi-type molecular data showed predictive power similar to that of single-type molecular data, with the exception of the lncRNA + DNA methylation combinations in two cancers. Survival analysis with proportional hazards models confirmed a high robustness for lncRNA and DNA methylation as prognostic factors independent of traditional clinical variables. Conclusion: Our study provides insight into systematically understanding the prognostic performance of diverse molecular data in both single and aggregate patterns, which may serve as a reference for subsequent related studies. PMID:26606135

  14. SWAG: Survey of Water and Ammonia in the Galactic Center

    NASA Astrophysics Data System (ADS)

    Ott, Jürgen; Meier, David S.; Krieger, Nico; Rickert, Matthew

    2017-01-01

    SWAG ("Survey of Water and Ammonia in the Galactic Center") is a multi-line interferometric survey toward the Center of the Milky Way conducted with the Australia Telescope Compact Array. The survey region spans the entire ~400 pc Central Molecular Zone and comprises ~42 spectral lines at pc spatial and sub-km/s spectral resolution. In addition, we deeply map continuum intensity, spectral index, and polarization at the frequencies where synchrotron, free-free, and thermal dust sources emit. The observed spectral lines include many transitions of ammonia, which we use to construct maps of molecular gas temperature, opacity and gas formation temperature (see poster by Nico Krieger et al., this volume). Water masers pinpoint the sites of active star formation and other lines are good tracers for density, radiation field, shocks, and ionization. This extremely rich survey forms a perfect basis to construct maps of the physical parameters of the gas in this extreme environment.

  15. A method to obtain static potential for electron-molecule scattering

    NASA Astrophysics Data System (ADS)

    Srivastava, Rajesh; Das, Tapasi; Stauffer, Allan

    2014-05-01

    Electron scattering from molecules is complicated by the fact that a molecule is a multi-centered target, with the nucleus of each constituent atom being a center of charge. One of the most important parts of a scattering calculation is to obtain the static potential, which represents the interaction of the incident electron with the unperturbed charge distribution of the molecule. A common way to represent the charge distribution of molecules is with Gaussian orbitals centered on the various nuclei. We have derived a mostly analytic way to calculate spherically averaged molecular static potentials using this form of molecular wave function. This method has been applied to elastic electron scattering from water molecules, and we obtained differential cross sections which are compared with previous experimental and theoretical results. The method can be extended to more complex molecules. One of us (RS) is thankful to IAEA, Vienna, Austria and DAE-BRNS, Mumbai, India for financial support.

  16. NIMBUS: A Near-Infrared Multi-Band Ultraprecise Spectroimager for SOFIA

    NASA Technical Reports Server (NTRS)

    McElwain, Michael W.; Mandell, Avi; Woodgate, Bruce E.; Spiegel, David S.; Madhusudhan, Nikku; Amatucci, Edward; Blake, Cullen; Budinoff, Jason; Burgasser, Adam; Burrows, Adam

    2012-01-01

    We present a new and innovative near-infrared multi-band ultraprecise spectroimager (NIMBUS) for SOFIA. This instrument will enable many exciting observations in the new age of precision astronomy. The optical design splits the beam into 8 separate spectral bandpasses, centered around key molecular bands from 1 to 4 microns. Each spectral channel has a wide field of view for simultaneous observations of a reference star that can decorrelate time-variable atmospheric and optical assembly effects, allowing the instrument to achieve ultraprecise photometry for a wide variety of astrophysical sources.

  17. Advanced intellect-augmentation techniques

    NASA Technical Reports Server (NTRS)

    Engelbart, D. C.

    1972-01-01

    User experience in applying our augmentation tools and techniques to various normal working tasks within our center is described, so as to convey a subjective impression of what it is like to work in an augmented environment. It is concluded that working-support, computer-aided systems for augmenting individuals and teams are undoubtedly going to be widely developed and used. A very special role in this development is seen for multi-access computer networks.

  18. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The Intel Xeon Phi coprocessor supports 512-bit vector registers for high-performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multi-level parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementation on the Xeon Phi is clearly superior to that on the x86 CPU architecture.
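
    The thread-plus-SIMD pattern the abstract describes can be sketched generically in C with OpenMP; this is an illustration under assumed names, not the authors' code. The outer atom loop is spread across threads, while the inner loop carries an omp simd hint so the compiler can map it onto wide vector registers (16 single-precision lanes per 512-bit register on the Xeon Phi). The pair force expression is a hypothetical placeholder rather than the Tersoff potential.

        #include <omp.h>
        #include <stdio.h>

        #define N 4096
        static float x[N], fx[N];

        /* Thread-level parallelism over atoms, SIMD parallelism over partners. */
        static void compute_forces(void) {
            #pragma omp parallel for
            for (int i = 0; i < N; i++) {
                float f = 0.0f;
                #pragma omp simd reduction(+:f)   /* vectorized inner loop */
                for (int j = 0; j < N; j++) {
                    float d = x[i] - x[j];
                    f += d / (1.0f + d * d);      /* hypothetical pair force */
                }
                fx[i] = f;
            }
        }

        int main(void) {
            for (int i = 0; i < N; i++) x[i] = 0.01f * i;
            compute_forces();
            printf("fx[0] = %f\n", fx[0]);
            return 0;
        }

    Compiled with an OpenMP-capable compiler and an appropriate vector width flag, the same source exercises both levels of parallelism that the paper combines.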

  19. FERMI: A Flexible Expert Reasoner with Multi-Domain Inferencing.

    DTIC Science & Technology

    1985-07-29


  20. Bridging the PSI Knowledge Gap: A Multi-Scale Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wirth, Brian D.

    2015-01-08

    Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that 4 of the top 5 fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science; an approach that exploits access to state-of-the-art PSI experiments and modeling, as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined, with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de-coupled extrapolation to a multi-scale, coupled approach. The PSI Plasma Center consisted of three equal co-centers: one located at the MIT Plasma Science and Fusion Center, one at the UC San Diego Center for Energy Research, and one at the UC Berkeley Department of Nuclear Engineering, which moved to the University of Tennessee, Knoxville (UTK) with Professor Brian Wirth in July 2010. The Center had three co-directors: Prof. Dennis Whyte led the MIT co-center, the UCSD co-center was led by Dr. Russell Doerner, and Prof. Brian Wirth led the UCB/UTK center. The directors have extensive experience in PSI and materials research, and have been internationally recognized in the magnetic fusion, materials and plasma research fields. The co-centers feature keystone PSI experimental and modeling facilities dedicated to PSI science: the DIONISOS/CLASS facility at MIT, the PISCES facility at UCSD, and the state-of-the-art numerical modeling capabilities at UCB/UTK. A collaborative partner in the center is Sandia National Laboratory at Livermore (SNL/CA), which has extensive capabilities with low energy ion beams and surface diagnostics, as well as supporting plasma facilities, including the Tritium Plasma Experiment, all of which significantly augment the Center. Interpretive, continuum material models are available through SNL/CA, UCSD and MIT. The participating institutions of MIT, UCSD, UCB/UTK, SNL/CA and LLNL brought a formidable array of experimental tools and personnel abilities into the PSI Plasma Center.
Our work has focused on modeling activities associated with plasma-surface interactions involved in the effects of He and H plasma bombardment on tungsten surfaces. This involved computational material modeling of the surface evolution during plasma bombardment using molecular dynamics. The principal outcomes of the research efforts within the combined experimental–modeling PSI center are to provide a knowledgebase of the mechanisms of surface degradation, and of the influence of the surface on plasma conditions.

  1. Performance Assessment Institute-NV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment's intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada with membership that includes: national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by Institutions of Higher Learning, the U.S. Government, and Regulatory Agencies and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.

  2. Spotting and designing promiscuous ligands for drug discovery.

    PubMed

    Schneider, P; Röthlisberger, M; Reker, D; Schneider, G

    2016-01-21

    The promiscuous binding behavior of bioactive compounds forms a mechanistic basis for understanding polypharmacological drug action. We present the development and prospective application of a computational tool for identifying potential promiscuous drug-like ligands. In combination with computational target prediction methods, the approach provides a working concept for rationally designing such molecular structures. We could confirm the multi-target binding of a de novo generated compound in a proof-of-concept study relying on the new method.

  3. National Center for Biotechnology Information Celebrates 25th Anniversary | NIH MedlinePlus the Magazine

    MedlinePlus

    ... is a national and international resource for molecular biology information. It creates public databases, conducts research in computational biology, develops software tools for analyzing genome data, and ...

  4. Flight test validation of a design procedure for digital autopilots

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.

    1983-01-01

    Commercially available general aviation autopilots are currently in transition from analogue circuit systems to computer-implemented digital flight control systems. Well-known advantages of the digital autopilot include enhanced modes, self-test capability, fault detection, and greater computational capacity. A digital autopilot's computational capacity can be used to full advantage by increasing the sophistication of its chief function: stability and control. NASA's Langley Research Center has been pursuing the development of direct digital design tools for aircraft stabilization systems for several years. This effort has most recently been directed towards the development and realization of multi-mode digital autopilots for GA aircraft, conducted under a SPIFR-related program called the General Aviation Terminal Operations Research (GATOR) Program. This presentation focuses on the implementation and testing of a candidate multi-mode autopilot designed using these newly developed tools.

  5. A Multi-center Milestone Study of Clinical Vertebral CT Segmentation

    PubMed Central

    Yao, Jianhua; Burns, Joseph E.; Forsberg, Daniel; Seitel, Alexander; Rasoulian, Abtin; Abolmaesumi, Purang; Hammernik, Kerstin; Urschler, Martin; Ibragimov, Bulat; Korez, Robert; Vrtovec, Tomaž; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Summers, Ronald M.; Li, Shuo

    2017-01-01

    A multiple-center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted in the spine segmentation challenge at the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers in the study achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method, as well as suggestions for future improvement, are discussed. This is the first multi-center comparative study of vertebra segmentation methods, and it provides an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention. PMID:26878138

  6. Network-based drug discovery by integrating systems biology and computational technologies

    PubMed Central

    Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua

    2013-01-01

    Network-based intervention has been a trend in curing systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery, due to its potential treatment effects by synergy. Recently, multiple robust systems biology platforms have proven powerful for uncovering molecular mechanisms and connections between drugs and their targeted dynamic networks. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple '-omics' databases. Newly developed algorithm- or network-based computational models can tightly integrate '-omics' databases and optimize combinational regimens of drug development, which encourages the use of medicinal herbs to develop a new wave of network-based multi-target drugs. However, challenges remain for further integration of medicinal herb databases with multiple systems biology platforms for multi-target drug optimization, owing to the uncertain reliability of individual data sets and the limited breadth, depth and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple systems biology and herbal databases would facilitate this integration. Enhancing publicly accessible databases and increasing the amount of research using systems biology platforms on herbal medicine would also be helpful. Further integration across various '-omics' platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768

  7. UC Merced Center for Computational Biology Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Michael; Watanabe, Masakatsu

    Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of undergraduate and graduate program in the biological sciences, one that emphasized biological concepts and considered biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs made possible by the CCB from its inception until August 2010, the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that CCB will continue to provide support for the quantitative and computational biology program at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have continuously involved multi-institutional collaboration with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research including molecular modeling, cell biology, applied math, evolutionary biology, bioinformatics, etc. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role in spreading the word about the computational biology emphasis at this new campus. One of CCB's original goals was to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a summer undergraduate internship program had been established under the CCB to train biological science researchers in highly mathematical and computationally intensive methods.
By the end of summer 2010, 44 undergraduate students had gone through this program. Of those participants, 11 students have been admitted to graduate schools and 10 more students are interested in pursuing graduate studies in the sciences. The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.

  8. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    NASA Astrophysics Data System (ADS)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  9. Production of Hydrogen by Electrocatalysis: Making the H-H Bond by Combining Protons and Hydrides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bullock, R. Morris; Appel, Aaron M.; Helm, Monte L.

    2014-03-25

    Generation of hydrogen by reduction of two protons by two electrons can be catalysed by molecular electrocatalysts. Determination of the thermodynamic driving force for elimination of H2 from molecular complexes is important for the rational design of molecular electrocatalysts, and allows the design of metal complexes of abundant, inexpensive metals rather than precious metals (“Cheap Metals for Noble Tasks”). The rate of H2 evolution can be dramatically accelerated by incorporating pendant amines into diphosphine ligands. These pendant amines in the second coordination sphere function as proton relays, accelerating intramolecular and intermolecular proton transfer reactions. The thermodynamics of hydride transfer from metal hydrides and the acidity of protonated pendant amines (pKa of N-H) contribute to the thermodynamics of elimination of H2; both the hydricity and the acidity can be systematically varied by changing the substituents on the ligands. A series of Ni(II) electrocatalysts with pendant amines has been developed. In addition to the thermochemical considerations, the catalytic rate is strongly influenced by the ability to deliver protons to the correct location of the pendant amine. Protonation of the amine endo to the metal leads to the N-H being positioned appropriately to favor rapid heterocoupling with the M-H. Designing ligands that include proton relays that are properly positioned and thermodynamically tuned is a key principle for molecular electrocatalysts for H2 production, as well as for other multi-proton, multi-electron reactions important for energy conversion. The research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences. Pacific Northwest National Laboratory is operated by Battelle for DOE.

  10. Embedded 100 Gbps Photonic Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznia, Charlie

    This innovation in fiber optic component technology increases the performance and reduces the size and power consumption of optical communications within dense network systems, such as advanced distributed computing systems and data centers. VCSEL technology is enabling short-reach (< 100 m), >100 Gbps optical interconnections over multi-mode fiber in commercial applications.

  11. System for assessing Aviation's Global Emissions (SAGE), part 1 : model description and inventory results

    DOT National Transportation Integrated Search

    2007-07-01

    In early 2001, the US Federal Aviation Administration embarked on a multi-year effort to develop a new computer model, the System for assessing Aviation's Global Emissions (SAGE). Currently at Version 1.5, the basic use of the model has centered on t...

  12. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  13. Modeling synthetic lethality

    PubMed Central

    Le Meur, Nolwenn; Gentleman, Robert

    2008-01-01

    Background: Synthetic lethality defines a genetic interaction where the combination of mutations in two or more genes leads to cell death. The implications of synthetic lethal screens have been discussed in the context of drug development, as synthetic lethal pairs could be used to selectively kill cancer cells but leave normal cells relatively unharmed. A challenge is to assess genome-wide experimental data and integrate the results to better understand the underlying biological processes. We propose statistical and computational tools that can be used to find relationships between synthetic lethality and cellular organizational units. Results: In Saccharomyces cerevisiae, we identified multi-protein complexes and pairs of multi-protein complexes that share an unusually high number of synthetic genetic interactions. As previously predicted, we found that synthetic lethality can arise from subunits of an essential multi-protein complex or between pairs of multi-protein complexes. Finally, using multi-protein complexes allowed us to take into account the pleiotropic nature of the gene products. Conclusions: Modeling synthetic lethality using current estimates of the yeast interactome is an efficient approach to disentangle some of the complex molecular interactions that drive a cell. Our model, in conjunction with applied statistical and computational methods, provides new tools to better characterize synthetic genetic interactions. PMID:18789146

  14. Future applications of artificial intelligence to Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Friedland, Peter

    1991-01-01

    Future applications of artificial intelligence to Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: basic objectives of the NASA-wide AI program; inhouse research program; constraint-based scheduling; learning and performance improvement for scheduling; GEMPLAN multi-agent planner; planning, scheduling, and control; Bayesian learning; efficient learning algorithms; ICARUS (an integrated architecture for learning); design knowledge acquisition and retention; computer-integrated documentation; and some speculation on future applications.

  15. Molecular insight into the inclusion of the dietary plant flavonol fisetin and its chromophore within a chemically modified γ-cyclodextrin: Multi-spectroscopic, molecular docking and solubility studies.

    PubMed

    Pahari, Biswapathik; Chakraborty, Sandipan; Sengupta, Pradeep K

    2018-09-15

    We explored the encapsulation of dietary plant flavonols fisetin and its chromophore 3-hydroxyflavone, within 2-hydroxypropyl-γ-cyclodextrin (HPγ-CDx) nano-cavity in aqueous solution using multi-spectroscopic approaches and molecular docking. Upon addition of HPγ-CDx, dramatic changes occur in the intrinsic 'two color' fluorescence behavior of the fluorophores. This is manifested by significant increase in the steady state fluorescence intensities, anisotropies, average fluorescence lifetimes and rotational correlation times. Furthermore, in the CDx environment, intrinsically achiral flavonols exhibit prominent induced circular dichroism bands. These findings indicate that the flavonol molecules spontaneously enter the relatively hydrophobic, chiral environment of the HPγ-CDx nano-cavities. Molecular docking computations corroborate the spectroscopic findings, and predict selectivity in orientation of the encapsulated flavonols. HPγ-CDx inclusion increases the aqueous solubility of individual flavonols ∼100-1000 times. The present study demonstrates that the hydroxypropyl substituent in γ-CDx controls the inclusion mode of the flavonols, leading to their enhanced solubilization and altered spectral signatures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. In Pursuit of Improving Airburst and Ground Damage Predictions: Recent Advances in Multi-Body Aerodynamic Testing and Computational Tools Validation

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Gulhan, Ali; Aftosmis, Michael; Brock, Joseph; Mathias, Donovan; Need, Dominic; Rodriguez, David; Seltner, Patrick; Stern, Eric; Wiles, Sebastian

    2017-01-01

    An airburst from a large asteroid during entry can cause significant ground damage. The damage depends on the energy and the altitude of airburst. Breakup of asteroids into fragments and their lateral spread have been observed. Modeling the underlying physics of fragmented bodies interacting at hypersonic speeds and the spread of fragments is needed for a true predictive capability. Current models use heuristic arguments and assumptions such as pancaking or point source explosive energy release at pre-determined altitude or an assumed fragmentation spread rate to predict airburst damage. A multi-year collaboration between German Aerospace Center (DLR) and NASA has been established to develop validated computational tools to address the above challenge.

  17. First-Principles Molecular Dynamics Studies of Organometallic Complexes and Homogeneous Catalytic Processes.

    PubMed

    Vidossich, Pietro; Lledós, Agustí; Ujaque, Gregori

    2016-06-21

    Computational chemistry is a valuable aid to complement experimental studies of organometallic systems and their reactivity. It allows probing mechanistic hypotheses and investigating molecular structures, shedding light on the behavior and properties of molecular assemblies at the atomic scale. When approaching a chemical problem, the computational chemist has to decide on the theoretical approach needed to describe electron/nuclear interactions and the composition of the model used to approximate the actual system. Both factors determine the reliability of the modeling study. The community dedicated much effort to developing and improving the performance and accuracy of theoretical approaches for electronic structure calculations, on which the description of (inter)atomic interactions relies. Here, the importance of the model system used in computational studies is highlighted through examples from our recent research focused on organometallic systems and homogeneous catalytic processes. We show how the inclusion of explicit solvent allows the characterization of molecular events that would otherwise not be accessible in reduced model systems (clusters). These include the stabilization of nascent charged fragments via microscopic solvation (notably, hydrogen bonding), transfer of charge (protons) between distant fragments mediated by solvent molecules, and solvent coordination to unsaturated metal centers. Furthermore, when weak interactions are involved, we show how conformational and solvation properties of organometallic complexes are also affected by the explicit inclusion of solvent molecules. Such extended model systems may be treated under periodic boundary conditions, thus removing the cluster/continuum (or vacuum) boundary, and require a statistical mechanics simulation technique to sample the accessible configurational space. First-principles molecular dynamics, in which atomic forces are computed from electronic structure calculations (namely, density functional theory), is certainly the technique of choice to investigate chemical events in solution. This methodology is well established, and thanks to advances in both algorithms and computational resources, the simulation times required for the modeling of chemical events are nowadays accessible, though the computational requirements tend to be high. Specific applications reviewed here include mechanistic studies of the Shilov and Wacker processes, speciation in Pd chemistry, hydrogen bonding to metal centers, and the dynamics of agostic interactions.

  18. Modeling Molecules

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The molecule modeling method known as Multibody Order (N) Dynamics, or MBO(N)D, was developed by Moldyn, Inc. at Goddard Space Flight Center through funding provided by the SBIR program. The software can model the dynamics of molecules through technology which simulates low-frequency molecular motions and properties, such as movements among a molecule's constituent parts. With MBO(N)D, a molecule is substructured into a set of interconnected rigid and flexible bodies. These bodies remove the computational burden of tracking individual atoms. Moldyn's technology cuts computation time while increasing accuracy. The MBO(N)D technology is available as Insight II 97.0 from Molecular Simulations, Inc. Currently the technology is used to account for forces on spacecraft parts and to perform molecular analyses for pharmaceutical purposes. It permits the solution of molecular dynamics problems on a moderate workstation, as opposed to on a supercomputer.

  19. Tools and procedures for visualization of proteins and other biomolecules.

    PubMed

    Pan, Lurong; Aller, Stephen G

    2015-04-01

    Protein, peptides, and nucleic acids are biomolecules that drive biological processes in living organisms. An enormous amount of structural data for a large number of these biomolecules has been described with atomic precision in the form of structural "snapshots" that are freely available in public repositories. These snapshots can help explain how the biomolecules function, the nature of interactions between multi-molecular complexes, and even how small-molecule drugs can modulate the biomolecules for clinical benefits. Furthermore, these structural snapshots serve as inputs for sophisticated computer simulations to turn the biomolecules into moving, "breathing" molecular machines for understanding their dynamic properties in real-time computer simulations. In order for the researcher to take advantage of such a wealth of structural data, it is necessary to gain competency in the use of computer molecular visualization tools for exploring the structures and visualizing three-dimensional spatial representations. Here, we present protocols for using two common visualization tools--the Web-based Jmol and the stand-alone PyMOL package--as well as a few examples of other popular tools. Copyright © 2015 John Wiley & Sons, Inc.

  20. Multi-Scale Computational Enzymology: Enhancing Our Understanding of Enzymatic Catalysis

    PubMed Central

    Gherib, Rami; Dokainish, Hisham M.; Gauld, James W.

    2014-01-01

    Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis, as well as of the kinetic and thermodynamic properties of enzymatic mechanisms. There exists a diversity of computational methods allowing the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a plethora of mechanistic reactive species and divergent catalytic pathways. Molecular docking can model different substrate conformations embedded within enzyme active sites and determine those with optimal binding affinities. Molecular dynamics simulations provide insights into the dynamics and roles of active site components as well as the interactions between substrates and enzymes. Hybrid quantum mechanical/molecular mechanical (QM/MM) methods can model reactions in active sites while considering steric and electrostatic contributions provided by the surrounding environment. Using previous studies done within our group on the OvoA, EgtB, ThrRS, LuxS and MsrA enzymatic systems, we review how these methods can be used either independently or cooperatively to gain insights into enzymatic catalysis. PMID:24384841

  1. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advancement in chemoinformatics research, in parallel with the availability of high-performance computing platforms, has made the handling of large-scale, multi-dimensional scientific data for high-throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source-based, integrated in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature information and transforming it into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, and applying the conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.
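
    As one concrete instance of the "conventional filters used in lead discovery" mentioned above, the C sketch below applies the classic Lipinski rule-of-five cutoffs (molecular weight <= 500 Da, logP <= 5, at most 5 H-bond donors, at most 10 H-bond acceptors). The Molecule struct and the descriptor values are illustrative assumptions, not data from the reviewed tools.

        #include <stdio.h>
        #include <stdbool.h>

        /* Illustrative container for precomputed molecular descriptors. */
        typedef struct {
            const char *name;
            double mw;      /* molecular weight, Da */
            double clogp;   /* computed logP */
            int hbd;        /* hydrogen-bond donors */
            int hba;        /* hydrogen-bond acceptors */
        } Molecule;

        /* Classic Lipinski rule-of-five drug-likeness filter. */
        static bool passes_rule_of_five(const Molecule *m) {
            return m->mw <= 500.0 && m->clogp <= 5.0 &&
                   m->hbd <= 5 && m->hba <= 10;
        }

        int main(void) {
            Molecule library[] = {
                {"candidate-1", 342.4, 2.1, 2, 6},    /* hypothetical values */
                {"candidate-2", 612.7, 6.3, 4, 12},
            };
            for (int i = 0; i < 2; i++)
                printf("%s: %s\n", library[i].name,
                       passes_rule_of_five(&library[i]) ? "pass" : "fail");
            return 0;
        }

    In a real pipeline, such descriptor-based filters are applied before more expensive docking or machine learning stages to prune the virtual library.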

  2. Enabling campus grids with open science grid technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weitzel, Derek; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to campus grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  3. RNA nanotechnology for computer design and in vivo computation

    PubMed Central

    Qiu, Meikang; Khisamutdinov, Emil; Zhao, Zhengyi; Pan, Cheryl; Choi, Jeong-Woo; Leontis, Neocles B.; Guo, Peixuan

    2013-01-01

    Molecular-scale computing has been explored since 1989 owing to the foreseeable limitation of Moore's law for silicon-based computation devices. With the potential of massive parallelism, low energy consumption and capability of working in vivo, molecular-scale computing promises a new computational paradigm. Inspired by the concepts from the electronic computer, DNA computing has realized basic Boolean functions and has progressed into multi-layered circuits. Recently, RNA nanotechnology has emerged as an alternative approach. Owing to the newly discovered thermodynamic stability of a special RNA motif (Shu et al. 2011 Nat. Nanotechnol. 6, 658–667 (doi:10.1038/nnano.2011.105)), RNA nanoparticles are emerging as another promising medium for nanodevice and nanomedicine as well as molecular-scale computing. Like DNA, RNA sequences can be designed to form desired secondary structures in a straightforward manner, but RNA is structurally more versatile and more thermodynamically stable owing to its non-canonical base-pairing, tertiary interactions and base-stacking property. A 90-nucleotide RNA can exhibit 4⁹⁰ nanostructures, and its loops and tertiary architecture can serve as a mounting dovetail that eliminates the need for external linking dowels. Its enzymatic and fluorogenic activity creates diversity in computational design. Varieties of small RNA can work cooperatively, synergistically or antagonistically to carry out computational logic circuits. The riboswitch and enzymatic ribozyme activities and its special in vivo attributes offer a great potential for in vivo computation. Unique features in transcription, termination, self-assembly, self-processing and acid resistance enable in vivo production of RNA nanoparticles that harbour various regulators for intracellular manipulation. With all these advantages, RNA computation is promising, but it is still in its infancy. Many challenges still exist. Collaborations between RNA nanotechnologists and computer scientists are necessary to advance this nascent technology. PMID:24000362

  4. RNA nanotechnology for computer design and in vivo computation.

    PubMed

    Qiu, Meikang; Khisamutdinov, Emil; Zhao, Zhengyi; Pan, Cheryl; Choi, Jeong-Woo; Leontis, Neocles B; Guo, Peixuan

    2013-10-13

    Molecular-scale computing has been explored since 1989 owing to the foreseeable limitation of Moore's law for silicon-based computation devices. With the potential of massive parallelism, low energy consumption and capability of working in vivo, molecular-scale computing promises a new computational paradigm. Inspired by the concepts from the electronic computer, DNA computing has realized basic Boolean functions and has progressed into multi-layered circuits. Recently, RNA nanotechnology has emerged as an alternative approach. Owing to the newly discovered thermodynamic stability of a special RNA motif (Shu et al. 2011 Nat. Nanotechnol. 6, 658-667 (doi:10.1038/nnano.2011.105)), RNA nanoparticles are emerging as another promising medium for nanodevice and nanomedicine as well as molecular-scale computing. Like DNA, RNA sequences can be designed to form desired secondary structures in a straightforward manner, but RNA is structurally more versatile and more thermodynamically stable owing to its non-canonical base-pairing, tertiary interactions and base-stacking property. A 90-nucleotide RNA can exhibit 4⁹⁰ nanostructures, and its loops and tertiary architecture can serve as a mounting dovetail that eliminates the need for external linking dowels. Its enzymatic and fluorogenic activity creates diversity in computational design. Varieties of small RNA can work cooperatively, synergistically or antagonistically to carry out computational logic circuits. The riboswitch and enzymatic ribozyme activities and its special in vivo attributes offer a great potential for in vivo computation. Unique features in transcription, termination, self-assembly, self-processing and acid resistance enable in vivo production of RNA nanoparticles that harbour various regulators for intracellular manipulation. With all these advantages, RNA computation is promising, but it is still in its infancy. Many challenges still exist. Collaborations between RNA nanotechnologists and computer scientists are necessary to advance this nascent technology.

  5. Star-formation in the central kpc of the starburst/LINER galaxy NGC 1614

    NASA Astrophysics Data System (ADS)

    Olsson, E.; Aalto, S.; Thomasson, M.; Beswick, R.

    2010-04-01

    Aims: The aim is to investigate the star-formation and LINER (low ionization nuclear emission line region) activity within the central kiloparsec of the galaxy NGC 1614. In this paper the radio continuum morphology, which provides a tracer of both nuclear and star-formation activity, and the distribution and dynamics of the cold molecular and atomic gas feeding this activity, are studied. In particular, the nature of an R ≈ 300 pc nuclear ring of star-formation and its relationship to the LINER activity in NGC 1614 is addressed. Methods: A high angular resolution, multi-wavelength study of the LINER galaxy NGC 1614 has been performed. Deep observations of the CO 1-0 spectral line were performed using the Owens Valley Radio Observatory (OVRO). These data have been complemented by extensive multi-frequency radio continuum and H I absorption observations using the Very Large Array (VLA) and Multi-Element Radio Linked Interferometer Network (MERLIN). Results: Toward the center of NGC 1614, we have detected a ring of radio continuum emission with a radius of 300 pc. This ring is coincident with previous radio and Paα observations. The dynamical mass of the ring based on H I absorption is 3.1 × 10⁹ M⊙. The peak of the integrated CO 1-0 emission is shifted by 1” to the north-west of the ring center. An upper limit to the molecular gas mass in the ring region is ~1.7 × 10⁹ M⊙. Inside the ring, there is a north to south elongated 1.4 GHz radio continuum feature, with a nuclear peak. This peak is also seen in the 5 GHz radio continuum and in the CO. Conclusions: We suggest that the R = 300 pc star forming ring represents the radius of a dynamical resonance - as an alternative to the scenario that the starburst is propagating outwards from the center into a molecular ring. The ring-like appearance is probably part of a spiral structure. Substantial amounts of molecular gas have passed the radius of the ring and reached the nuclear region. The nuclear peak seen in the 5 GHz radio continuum and CO is likely related to previous star formation, where not all of the molecular gas was consumed. The LINER-like optical spectrum observed in NGC 1614 may be due to nuclear starburst activity rather than an active galactic nucleus (AGN), although the presence of an AGN cannot be excluded.

  6. Photogrammetric Technique for Center of Gravity Determination

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Johnson, Thomas H.; Shemwell, Dave; Shreves, Christopher M.

    2012-01-01

    A new measurement technique for determination of the center of gravity (CG) for large scale objects has been demonstrated. The experimental method was conducted as part of an LS-DYNA model validation program for the Max Launch Abort System (MLAS) crew module. The test was conducted on the full scale crew module concept at NASA Langley Research Center. Multi-camera photogrammetry was used to measure the test article in several asymmetric configurations. The objective of these measurements was to provide validation of the CG as computed from the original mechanical design. The methodology, measurement technique, and measurement results are presented.

  7. Single photon emission computed tomography and positron emission tomography imaging of multi-drug resistant P-glycoprotein--monitoring a transport activity important in cancer, blood-brain barrier function and Alzheimer's disease.

    PubMed

    Piwnica-Worms, David; Kesarwala, Aparna H; Pichler, Andrea; Prior, Julie L; Sharma, Vijay

    2006-11-01

    Overexpression of multi-drug resistant P-glycoprotein (Pgp) remains an important barrier to successful chemotherapy in cancer patients and impacts the pharmacokinetics of many important drugs. Pgp is also expressed on the luminal surface of brain capillary endothelial cells wherein Pgp functionally comprises a major component of the blood-brain barrier by limiting central nervous system penetration of various therapeutic agents. In addition, Pgp in brain capillary endothelial cells removes amyloid-beta from the brain. Several single photon emission computed tomography and positron emission tomography radiopharmaceuticals have been shown to be transported by Pgp, thereby enabling the noninvasive interrogation of Pgp-mediated transport activity in vivo. Therefore, molecular imaging of Pgp activity may enable noninvasive dynamic monitoring of multi-drug resistance in cancer, guide therapeutic choices in cancer chemotherapy, and identify transporter deficiencies of the blood-brain barrier in Alzheimer's disease.

  8. Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures

    PubMed Central

    Zavodszky, Maria I.

    2017-01-01

    Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. The MOHA algorithm is implemented as a freely available R script (see supplementary information). PMID:29190747
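
    The record above does not reproduce MOHA's formulas, but the core idea — assign each cell a discrete molecular state, then score heterogeneity from the distribution of those states — is easy to sketch. Below is a minimal, hypothetical Python illustration; the thresholded state definition and the Shannon-diversity metric are assumptions for illustration, not the MOHA implementation:

        # Hypothetical sketch of a MOHA-style heterogeneity metric (not the
        # authors' code): each cell gets a discrete "pathway state" label,
        # and tissue heterogeneity is scored as the Shannon diversity of
        # the state distribution.
        import numpy as np
        from collections import Counter

        def shannon_diversity(states):
            """Shannon entropy (nats) of the per-cell state distribution."""
            counts = np.array(list(Counter(states).values()), dtype=float)
            p = counts / counts.sum()
            return float(-(p * np.log(p)).sum())

        # Example: binarize pathway marker intensities into per-cell states.
        rng = np.random.default_rng(0)
        markers = rng.random((500, 3))   # 500 cells x 3 pathway proteins
        states = ["".join(str(b) for b in row)
                  for row in (markers > 0.5).astype(int)]
        print(f"pathway-state diversity: {shannon_diversity(states):.3f}")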

  9. What Websites Are Patients Using: Results of a Tracking Study Exploring Patients Use of Websites at a Multi-Media Patient Education Center

    PubMed Central

    Ravitch, Stephanie; Fleisher, Linda; Torres, Stephen

    2005-01-01

    An exploratory study utilizing computer tracking software at a hospital-based patient education center was conducted to assess use of the Web. Over six months, 625 hits were tracked; one-third were to www.cancer.gov, one of the recommended websites, while over half of the sites visited were not on the recommended list. Here we report the challenges and results of this tracking study. PMID:16779379

  10. 78 FR 11659 - Center For Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-19

    ..., Computational, and Molecular Biology. Date: March 12, 2013. Time: 8:00 a.m. to 6:00 p.m. Agenda: To review and... Scientific Review Special Emphasis Panel; Member Conflict: Genetics, Informatics and Vision Studies. Date...

  11. Interactive Design Strategy for a Multi-Functional PAMAM Dendrimer-Based Nano-Therapeutic Using Computational Models and Experimental Analysis

    PubMed Central

    Lee, Inhan; Williams, Christopher R.; Athey, Brian D.; Baker, James R.

    2010-01-01

    Molecular dynamics simulations of nano-therapeutics as a final product and of all intermediates in the process of generating a multi-functional nano-therapeutic based on a poly(amidoamine) (PAMAM) dendrimer were performed along with chemical analyses of each of them. The actual structures of the dendrimers were predicted, based on potentiometric titration, gel permeation chromatography, and NMR. The chemical analyses determined the numbers of functional molecules, based on the actual structure of the dendrimer. Molecular dynamics simulations calculated the configurations of the intermediates and the radial distributions of functional molecules, based on their numbers. This interactive process between the simulation results and the chemical analyses provided a further strategy to design the next reaction steps and to gain insight into the products at each chemical reaction step. PMID:20700476

  12. Investigation of the graphene-electrolyte interface in Li-air batteries: A molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Pavlov, S. V.; Kislenko, S. A.

    2018-01-01

    In this work the behavior of the main reactants (Li+, O2) of the oxygen reduction reaction (ORR) in acetonitrile solvent near the multi-layer graphene edge has been studied. Molecular dynamics simulations show that the concentration distributions of Li+ and O2 form a “chessboard” structure. The concentrations of lithium ions and oxygen molecules reach their maximum values near graphene edges pushed out from the surface, which may act as nucleation centers for the formation of crystalline ORR products. Free-energy maps were estimated for Li+ and O2, and energetically optimal trajectories for the adsorption of oxygen molecules and lithium ions were found. Moreover, the distributions of the electric potential were obtained near the following carbon surfaces: the single- and multi-layer graphene edge and the graphene plane, revealing qualitative differences in the double-layer structure.

  13. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the view of one camera from all those available in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually chosen by a human operator.

  14. Computer image generation: Reconfigurability as a strategy in high fidelity space applications

    NASA Technical Reports Server (NTRS)

    Bartholomew, Michael J.

    1989-01-01

    The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.

  15. Electronic structure of multi-walled carbon fullerenes

    NASA Astrophysics Data System (ADS)

    Doore, Keith; Cook, Matthew; Clausen, Eric; Lukashev, Pavel V.; Kidd, Tim E.; Stollenwerk, Andrew J.

    2017-02-01

    Despite an enormous amount of research on carbon based nanostructures, relatively little is known about the electronic structure of multi-walled carbon fullerenes, also known as carbon onions. In part, this is due to the very high computational expense involved in estimating electronic structure of large molecules. At the same time, experimentally, the exact crystal structure of the carbon onion is usually unknown, and therefore one relies on qualitative arguments only. In this work we present the results of a computational study on a series of multi-walled fullerenes and compare their electronic structures to experimental data. Experimentally, the carbon onions were fabricated using ultrasonic agitation of isopropanol alcohol and deposited onto the surface of highly ordered pyrolytic graphite using a drop cast method. Scanning tunneling microscopy images indicate that the carbon onions produced using this technique are ellipsoidal with dimensions on the order of 10 nm. The majority of differential tunneling spectra acquired on individual carbon onions are similar to that of graphite with the addition of molecular-like peaks, indicating that these particles span the transition between molecules and bulk crystals. A smaller, yet sizable number exhibited a semiconducting gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) levels. These results are compared with the electronic structure of different carbon onion configurations calculated using first-principles. Similar to the experimental results, the majority of these configurations are metallic with a minority behaving as semiconductors. Analysis of the configurations investigated here reveals that each carbon onion exhibiting an energy band gap consisted only of non-metallic fullerene layers, indicating that the interlayer interaction is not significant enough to affect the total density of states in these structures.

  16. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
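
    As a rough illustration of the parabolic-diabat idea discussed above: under a linear-response (Marcus-like) approximation with equal-curvature diabats, the polymorph free-energy difference reduces to the average of the mean energy gaps sampled in the two unbiased runs. The sketch below is a toy under that assumption, not the authors' implementation:

        # Hedged sketch (not the authors' code): with parabolic diabats of
        # equal curvature, the free-energy difference A_B - A_A follows from
        # the mean energy gap dE = E_B - E_A sampled in each polymorph:
        #     dA ≈ ( <dE>_A + <dE>_B ) / 2
        import numpy as np

        def diabat_interp_free_energy(gap_in_A, gap_in_B):
            """Linear-response estimate of A_B - A_A from gap samples."""
            return 0.5 * (np.mean(gap_in_A) + np.mean(gap_in_B))

        # Synthetic illustration with Gaussian gap fluctuations:
        rng = np.random.default_rng(1)
        gap_fcc = rng.normal(loc=2.0, scale=0.5, size=10000)  # sampled in fcc
        gap_bcc = rng.normal(loc=1.0, scale=0.5, size=10000)  # sampled in bcc
        dA = diabat_interp_free_energy(gap_fcc, gap_bcc)
        print(f"dA ≈ {dA:.3f} (energy units)")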

  17. Accelerating and focusing protein-protein docking correlations using multi-dimensional rotational FFT generating functions.

    PubMed

    Ritchie, David W; Kozakov, Dima; Vajda, Sandor

    2008-09-01

    Predicting how proteins interact at the molecular level is a computationally intensive task. Many protein docking algorithms begin by using fast Fourier transform (FFT) correlation techniques to find putative rigid body docking orientations. Most such approaches use 3D Cartesian grids and are therefore limited to computing three dimensional (3D) translational correlations. However, translational FFTs can speed up the calculation in only three of the six rigid body degrees of freedom, and they cannot easily incorporate prior knowledge about a complex to focus and hence further accelerate the calculation. Furthermore, several groups have developed multi-term interaction potentials and others use multi-copy approaches to simulate protein flexibility, which both add to the computational cost of FFT-based docking algorithms. Hence there is a need to develop more powerful and more versatile FFT docking techniques. This article presents a closed-form 6D spherical polar Fourier correlation expression from which arbitrary multi-dimensional multi-property multi-resolution FFT correlations may be generated. The approach is demonstrated by calculating 1D, 3D and 5D rotational correlations of 3D shape and electrostatic expansions up to polynomial order L=30 on a 2 GB personal computer. As expected, 3D correlations are found to be considerably faster than 1D correlations but, surprisingly, 5D correlations are often slower than 3D correlations. Nonetheless, we show that 5D correlations will be advantageous when calculating multi-term knowledge-based interaction potentials. When docking the 84 complexes of the Protein Docking Benchmark, blind 3D shape plus electrostatic correlations take around 30 minutes on a contemporary personal computer and find acceptable solutions within the top 20 in 16 cases. Applying a simple angular constraint to focus the calculation around the receptor binding site produces acceptable solutions within the top 20 in 28 cases. Further constraining the search to the ligand binding site gives up to 48 solutions within the top 20, with calculation times of just a few minutes per complex. Hence the approach described provides a practical and fast tool for rigid body protein-protein docking, especially when prior knowledge about one or both binding sites is available.
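
    For contrast with the 6D spherical polar formulation, the basic 3D translational FFT correlation step that most Cartesian-grid docking codes share can be sketched in a few lines. The toy grids below are assumptions; a real scorer would encode surface overlap and interior penalties in the grid values:

        # Minimal sketch of the standard 3D translational FFT docking step
        # (not the authors' 6D method): correlate a receptor grid R and a
        # ligand grid L over all relative translations at once using the
        # convolution theorem.
        import numpy as np

        def translational_scores(receptor_grid, ligand_grid):
            """Cross-correlation of two 3D grids for every translation."""
            R = np.fft.fftn(receptor_grid)
            L = np.fft.fftn(ligand_grid)
            return np.real(np.fft.ifftn(np.conj(R) * L))

        rng = np.random.default_rng(2)
        rec = rng.random((32, 32, 32))   # toy receptor grid
        lig = rng.random((32, 32, 32))   # toy ligand grid
        scores = translational_scores(rec, lig)
        best = np.unravel_index(np.argmax(scores), scores.shape)
        print("best translation (grid units):", best)

    Each candidate rotation of the ligand requires one such 3D correlation, which is exactly why correlating over rotational degrees of freedom in the FFT itself, as the article proposes, is attractive.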

  18. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 4

    DTIC Science & Technology

    2011-01-01

    Computational and Mathematical Engineering, Stanford University esgs@stanford.edu (650) 723-3764 Molecular Dynamics Models of Antimicrobial ...simulations using low-fidelity Reynolds-averaged models illustrate the limited predictive capabilities of these schemes. The predictions for scalar and...driving force. The AHPCRC group has used their models to predict nonuniform concentration profiles across small channels as a result of variations

  19. Design of a Multi-Week Sound and Motion Recording and Telemetry (SMRT) Tag for Behavioral Studies on Whales

    DTIC Science & Technology

    2015-09-30

    phone: +44 1334 462624 fax: +44 1334 463443 e-mail: markjohnson@st-andrews.ac.uk Todd Lindstrom Wildlife Computers 8345 154th Avenue NE...in situ processing algorithms for sound and motion data. In a parallel project Dr. Andrews at the Alaska SeaLife Center teamed with Wildlife ...from Wildlife Computers to produce a highly integrated Sound and Motion Recording and Telemetry (SMRT) tag. The complete tag development is expected

  20. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  1. Transport coefficients of dense fluids composed of globular molecules. Equilibrium molecular dynamics investigations using more-center Lennard-Jones potentials

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.

    1988-09-01

    Equilibrium molecular dynamics calculations with constraints have been performed for model liquids SF6 and CF4. The computations were carried out with four- and six-center Lennard-Jones potentials and up to 2×10⁵ integration steps. The shear viscosity, bulk viscosity and thermal conductivity have been calculated with use of Green-Kubo relations in the formulation of "molecule variables." Various thermodynamic states were investigated. For SF6, a detailed comparison with experimental data was possible. For CF4, the MD results could only be compared with experiment for one liquid state. For the latter liquid, a complementary comparison was performed using MD results obtained with a one-center Lennard-Jones potential. A limited test of the particle number dependence of the results is presented. Partial and total correlation functions are shown and discussed with respect to findings obtained for the one-center Lennard-Jones liquid.
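
    The Green-Kubo route referred to above expresses a transport coefficient as the time integral of an equilibrium autocorrelation function; for the shear viscosity, eta = V/(kB*T) * ∫ <Pxy(0) Pxy(t)> dt, where Pxy is an off-diagonal pressure-tensor component. A generic sketch follows (not Hoheisel's code; the Pxy time series is assumed to come from an MD run):

        # Hedged sketch of a Green-Kubo shear-viscosity estimate from a
        # pressure-tensor time series sampled at interval dt.
        import numpy as np

        def green_kubo_viscosity(pxy, dt, volume, kB_T):
            """eta = V/(kB*T) * integral of the Pxy autocorrelation."""
            n = len(pxy)
            # unbiased autocorrelation via FFT (zero-padded, no wraparound)
            f = np.fft.rfft(pxy - np.mean(pxy), 2 * n)
            acf = np.fft.irfft(f * np.conj(f))[:n] / np.arange(n, 0, -1)
            return volume / kB_T * np.trapz(acf, dx=dt)

        # Usage with a synthetic, exponentially correlated stress signal:
        rng = np.random.default_rng(4)
        x = np.zeros(20000)
        for i in range(1, len(x)):
            x[i] = 0.99 * x[i - 1] + rng.normal(scale=0.1)
        print(green_kubo_viscosity(x, dt=0.002, volume=1.0e3, kB_T=1.0))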

  2. 76 FR 64359 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... Brain, Neurotransmission and Aging Special Emphasis Panel. Date: November 1, 2011. Time: 8 a.m. to 5 p.m...: Cell, Computational and Molecular Biology. Date: November 9, 2011. Time: 10 a.m. to 6 p.m. Agenda: To...

  3. Time-efficient simulations of tight-binding electronic structures with Intel Xeon Phi™ many-core processors

    NASA Astrophysics Data System (ADS)

    Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam

    2016-12-01

    Modelling of multi-million atomic semiconductor structures is important as it not only predicts properties of physically realizable novel materials, but can accelerate advanced device designs. This work elaborates a new Technology-Computer-Aided-Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s* tight-binding approach to describe multi-million atomic structures, and simulates electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool scales well on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on recent clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.

  4. QM/MM Geometry Optimization on Extensive Free-Energy Surfaces for Examination of Enzymatic Reactions and Design of Novel Functional Properties of Proteins.

    PubMed

    Hayashi, Shigehiko; Uchida, Yoshihiro; Hasegawa, Taisuke; Higashi, Masahiro; Kosugi, Takahiro; Kamiya, Motoshi

    2017-05-05

    Many remarkable molecular functions of proteins use their characteristic global and slow conformational dynamics through coupling of local chemical states in reaction centers with global conformational changes of proteins. To theoretically examine the functional processes of proteins in atomic detail, a methodology of quantum mechanical/molecular mechanical (QM/MM) free-energy geometry optimization is introduced. In the methodology, a geometry optimization of a local reaction center is performed with a quantum mechanical calculation on a free-energy surface constructed with conformational samples of the surrounding protein environment obtained by a molecular dynamics simulation with a molecular mechanics force field. Geometry optimizations on extensive free-energy surfaces by a QM/MM reweighting free-energy self-consistent field method designed to be variationally consistent and computationally efficient have enabled examinations of the multiscale molecular coupling of local chemical states with global protein conformational changes in functional processes and analysis and design of protein mutants with novel functional properties.

  5. QM/MM Geometry Optimization on Extensive Free-Energy Surfaces for Examination of Enzymatic Reactions and Design of Novel Functional Properties of Proteins

    NASA Astrophysics Data System (ADS)

    Hayashi, Shigehiko; Uchida, Yoshihiro; Hasegawa, Taisuke; Higashi, Masahiro; Kosugi, Takahiro; Kamiya, Motoshi

    2017-05-01

    Many remarkable molecular functions of proteins use their characteristic global and slow conformational dynamics through coupling of local chemical states in reaction centers with global conformational changes of proteins. To theoretically examine the functional processes of proteins in atomic detail, a methodology of quantum mechanical/molecular mechanical (QM/MM) free-energy geometry optimization is introduced. In the methodology, a geometry optimization of a local reaction center is performed with a quantum mechanical calculation on a free-energy surface constructed with conformational samples of the surrounding protein environment obtained by a molecular dynamics simulation with a molecular mechanics force field. Geometry optimizations on extensive free-energy surfaces by a QM/MM reweighting free-energy self-consistent field method designed to be variationally consistent and computationally efficient have enabled examinations of the multiscale molecular coupling of local chemical states with global protein conformational changes in functional processes and analysis and design of protein mutants with novel functional properties.

  6. The calculated rovibronic spectrum of scandium hydride, ScH

    NASA Astrophysics Data System (ADS)

    Lodi, Lorenzo; Yurchenko, Sergei N.; Tennyson, Jonathan

    2015-07-01

    The electronic structure of six low-lying electronic states of scandium hydride, X ¹Σ⁺, a ³Δ, b ³Π, A ¹Δ, c ³Σ⁺ and B ¹Π, is studied using multi-reference configuration interaction as a function of bond length. Diagonal and off-diagonal dipole moment, spin-orbit coupling and electronic angular momentum curves are also computed. The results are benchmarked against experimental measurements and calculations on atomic scandium. The resulting curves are used to compute a line list of molecular rovibronic transitions for ⁴⁵ScH.

  7. Multi-Dielectric Brownian Dynamics and Design-Space-Exploration Studies of Permeation in Ion Channels.

    PubMed

    Siksik, May; Krishnamurthy, Vikram

    2017-09-01

    This paper proposes a multi-dielectric Brownian dynamics simulation framework for design-space-exploration (DSE) studies of ion-channel permeation. The goal of such DSE studies is to estimate the channel modeling-parameters that minimize the mean-squared error between the simulated and expected "permeation characteristics." To address this computational challenge, we use a methodology based on statistical inference that utilizes the knowledge of channel structure to prune the design space. We demonstrate the proposed framework and DSE methodology using a case study based on the KcsA ion channel, in which the design space is successfully reduced from a 6-D space to a 2-D space. Our results show that the channel dielectric map computed using the framework matches with that computed directly using molecular dynamics with an error of 7%. Finally, the scalability and resolution of the model used are explored, and it is shown that the memory requirements needed for DSE remain constant as the number of parameters (degree of heterogeneity) increases.

  8. An Informal Overview of the Unitary Group Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnad, V.; Escher, J.; Kruse, M.

    The Unitary Group Approach (UGA) is an elegant and conceptually unified approach to quantum structure calculations. It has been widely used in molecular structure calculations, and holds the promise of a single computational approach to structure calculations in a variety of different fields. We explore the possibility of extending the UGA to computations in atomic and nuclear structure as a simpler alternative to traditional Racah algebra-based approaches. We provide a simple introduction to the basic UGA and consider some of the issues in using the UGA with spin-dependent, multi-body Hamiltonians requiring multi-shell bases adapted to additional symmetries. While the UGA is perfectly capable of dealing with such problems, it is seen that the complexity rises dramatically, and the UGA is not, at this time, a simpler alternative to Racah algebra-based approaches.

  9. Laser pulse induced multi-exciton dynamics in molecular systems

    NASA Astrophysics Data System (ADS)

    Wang, Luxia; May, Volkhard

    2018-03-01

    Ultrafast optical excitation of an arrangement of identical molecules is analyzed theoretically. The computations are particularly dedicated to molecules where the excitation energy into the second excited singlet state, E(S2) - E(S0), is larger than twice the excitation energy into the first excited singlet state, E(S1) - E(S0). Then, exciton-exciton annihilation is diminished and resonant and intensive excitation may simultaneously move different molecules into their first excited singlet state |S1>. To describe the temporal evolution of the thus created multi-exciton state, a direct computation of the related wave function is circumvented. Instead, we derive equations of motion for expectation values formed by different arrangements of single-molecule transition operators |S1><S0|. First simulation results are presented and the approximate treatment suggested recently [Phys. Rev. B 94, 195413 (2016)] is evaluated.

  10. Nanoparticle imaging probes for molecular imaging with computed tomography and application to cancer imaging

    NASA Astrophysics Data System (ADS)

    Roeder, Ryan K.; Curtis, Tyler E.; Nallathamby, Prakash D.; Irimata, Lisa E.; McGinnity, Tracie L.; Cole, Lisa E.; Vargo-Gogola, Tracy; Cowden Dahl, Karen D.

    2017-03-01

    Precision imaging is needed to realize precision medicine in cancer detection and treatment. Molecular imaging offers the ability to target and identify tumors, associated abnormalities, and specific cell populations with overexpressed receptors. Nuclear imaging and radionuclide probes provide high sensitivity but subject the patient to a high radiation dose and provide limited spatiotemporal information, requiring combined computed tomography (CT) for anatomic imaging. Therefore, nanoparticle contrast agents have been designed to enable molecular imaging and improve detection in CT alone. Core-shell nanoparticles provide a powerful platform for designing tailored imaging probes. The composition of the core is chosen for enabling strong X-ray contrast, multi-agent imaging with photon-counting spectral CT, and multimodal imaging. A silica shell is used for protective, biocompatible encapsulation of the core composition, volume-loading fluorophores or radionuclides for multimodal imaging, and facile surface functionalization with antibodies or small molecules for targeted delivery. Multi-agent (K-edge) imaging and quantitative molecular imaging with spectral CT were demonstrated using current clinical agents (iodine and BaSO4) and a proposed spectral library of contrast agents (Gd2O3, HfO2, and Au). Bisphosphonate-functionalized Au nanoparticles were demonstrated to enhance sensitivity and specificity for the detection of breast microcalcifications by conventional radiography and CT in both normal and dense mammary tissue using murine models. Moreover, photon-counting spectral CT enabled quantitative material decomposition of the Au and calcium signals. Immunoconjugated Au@SiO2 nanoparticles enabled highly-specific targeting of CD133+ ovarian cancer stem cells for contrast-enhanced detection in model tumors.

  11. Structural and preliminary molecular dynamics studies of the Rhodobacter sphaeroides reaction center and its mutant form L(M196)H + H(M202)L

    NASA Astrophysics Data System (ADS)

    Klyashtorny, V. G.; Fufina, T. Yu.; Vasilieva, L. G.; Shuvalov, V. A.; Gabdulkhakov, A. G.

    2014-07-01

    Pigment-protein interactions are responsible for the high efficiency of the light-energy transfer and conversion in photosynthesis. The reaction center (RC) from the purple bacterium Rhodobacter sphaeroides is the most convenient model for studying the mechanisms of primary processes of photosynthesis. Site-directed mutagenesis can be used to study the effect of the protein environment of electron-transfer cofactors on the optical properties, stability, pigment composition, and functional activity of RC. The preliminary analysis of RC was performed by computer simulation of the amino acid substitutions L(M196)H + H(M202)L at the pigment-protein interface and by estimating the stability of the three-dimensional structure of the mutant RC by the molecular dynamics method. The doubly mutated reaction center was overexpressed, purified, and crystallized. The three-dimensional structure of this mutant was determined by X-ray crystallography and compared with the molecular dynamics model.

  12. Coarse-grained models of key self-assembly processes in HIV-1

    NASA Astrophysics Data System (ADS)

    Grime, John

    Computational molecular simulations can elucidate microscopic information that is inaccessible to conventional experimental techniques. However, many processes occur over time and length scales that are beyond the current capabilities of atomic-resolution molecular dynamics (MD). One such process is the self-assembly of the HIV-1 viral capsid, a biological structure that is crucial to viral infectivity. The nucleation and growth of capsid structures requires the interaction of large numbers of capsid proteins within a complicated molecular environment. Coarse-grained (CG) models, where degrees of freedom are removed to produce more computationally efficient models, can in principle access large-scale phenomena such as the nucleation and growth of HIV-1 capsid lattice. We report here studies of the self-assembly behaviors of a CG model of HIV-1 capsid protein, including the influence of the local molecular environment on nucleation and growth processes. Our results suggest a multi-stage process, involving several characteristic structures, eventually producing metastable capsid lattice morphologies that are amenable to subsequent capsid dissociation in order to transmit the viral infection.

  13. Charting molecular free-energy landscapes with an atlas of collective variables

    NASA Astrophysics Data System (ADS)

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2016-11-01

    Collective variables (CVs) are a fundamental tool to understand molecular flexibility, to compute free energy landscapes, and to enhance sampling in molecular dynamics simulations. However, identifying suitable CVs is challenging, and is increasingly addressed with systematic data-driven manifold learning techniques. Here, we provide a flexible framework to model molecular systems in terms of a collection of locally valid and partially overlapping CVs: an atlas of CVs. The specific motivation for such a framework is to enhance the applicability and robustness of CVs based on manifold learning methods, which fail in the presence of periodicities in the underlying conformational manifold. More generally, using an atlas of CVs rather than a single chart may help us better describe different regions of conformational space. We develop the statistical mechanics foundation for our multi-chart description and propose an algorithmic implementation. The resulting atlas of data-based CVs is then used to enhance sampling and compute free energy surfaces in two model systems, alanine dipeptide and β-D-glucopyranose, whose conformational manifolds have toroidal and spherical topologies.

  14. Heterogeneity in homogeneous nucleation from billion-atom molecular dynamics simulation of solidification of pure metal.

    PubMed

    Shibuta, Yasushi; Sakane, Shinji; Miyoshi, Eisuke; Okita, Shin; Takaki, Tomohiro; Ohno, Munekazu

    2017-04-05

    Can completely homogeneous nucleation occur? Large scale molecular dynamics simulations performed on a graphics-processing-unit rich supercomputer can shed light on this long-standing issue. Here, a billion-atom molecular dynamics simulation of homogeneous nucleation from an undercooled iron melt reveals that some satellite-like small grains surrounding previously formed large grains exist in the middle of the nucleation process, which are not distributed uniformly. At the same time, grains with a twin boundary are formed by heterogeneous nucleation from the surface of the previously formed grains. The local heterogeneity in the distribution of grains is caused by the local accumulation of the icosahedral structure in the undercooled melt near the previously formed grains. This insight is mainly attributable to multi-graphics-processing-unit parallel computation combined with the rapid progress in high-performance computational environments. Nucleation is a fundamental physical process; however, it is a long-standing issue whether completely homogeneous nucleation can occur. Here the authors reveal, via a billion-atom molecular dynamics simulation, that local heterogeneity exists during homogeneous nucleation in an undercooled iron melt.

  15. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    PubMed

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    The traditional virtual screening method pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, an approach that is often less effective for discovering drugs used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weightiness of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by combining network efficiency analysis with scoring functions from molecular docking.
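
    The scoring idea — fragile targets are those whose inhibition most degrades communication through the cascade network — can be sketched with a toy graph. The topology and target choice below are hypothetical illustrations, not the authors' weighted clotting-cascade model, which additionally folds in docking affinities:

        # Hedged sketch of the network-efficiency idea: score a compound by
        # how much removing (fully inhibiting) its docked targets lowers the
        # global efficiency of a pathway graph.
        import networkx as nx

        def efficiency_drop(pathway, inhibited_targets):
            """Decrease in global efficiency when targets are knocked out."""
            base = nx.global_efficiency(pathway)
            reduced = pathway.copy()
            reduced.remove_nodes_from(inhibited_targets)
            return base - nx.global_efficiency(reduced)

        # Toy cascade: nodes are factors/complexes, edges are reaction steps.
        G = nx.Graph([("XII", "XI"), ("XI", "IX"), ("IX", "X"),
                      ("VIII", "X"), ("X", "II"), ("II", "fibrin")])
        print("drop if factor Xa ('X') is inhibited:",
              efficiency_drop(G, ["X"]))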

  16. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background The traditional virtual screening method pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, an approach that is often less effective for discovering drugs used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weightiness of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by combining network efficiency analysis with scoring functions from molecular docking. PMID:21445339

  17. Analysis Commons, A Team Approach to Discovery in a Big-Data Environment for Genetic Epidemiology

    PubMed Central

    Brody, Jennifer A.; Morrison, Alanna C.; Bis, Joshua C.; O'Connell, Jeffrey R.; Brown, Michael R.; Huffman, Jennifer E.; Ames, Darren C.; Carroll, Andrew; Conomos, Matthew P.; Gabriel, Stacey; Gibbs, Richard A.; Gogarten, Stephanie M.; Gupta, Namrata; Jaquish, Cashell E.; Johnson, Andrew D.; Lewis, Joshua P.; Liu, Xiaoming; Manning, Alisa K.; Papanicolaou, George J.; Pitsillides, Achilleas N.; Rice, Kenneth M.; Salerno, William; Sitlani, Colleen M.; Smith, Nicholas L.; Heckbert, Susan R.; Laurie, Cathy C.; Mitchell, Braxton D.; Vasan, Ramachandran S.; Rich, Stephen S.; Rotter, Jerome I.; Wilson, James G.; Boerwinkle, Eric; Psaty, Bruce M.; Cupples, L. Adrienne

    2017-01-01

    The exploding volume of whole-genome sequence (WGS) and multi-omics data requires new approaches for analysis. As one solution, we have created a cloud-based Analysis Commons, which brings together genotype and phenotype data from multiple studies in a setting that is accessible by multiple investigators. This framework addresses many of the challenges of multi-center WGS analyses, including data sharing mechanisms, phenotype harmonization, integrated multi-omics analyses, annotation, and computational flexibility. In this setting, the computational pipeline facilitates a sequence-to-discovery analysis workflow illustrated here by an analysis of plasma fibrinogen levels in 3996 individuals from the National Heart, Lung, and Blood Institute (NHLBI) Trans-Omics for Precision Medicine (TOPMed) WGS program. The Analysis Commons represents a novel model for transforming WGS resources from a massive quantity of phenotypic and genomic data into knowledge of the determinants of health and disease risk in diverse human populations. PMID:29074945

  18. Benefits Analysis of Multi-Center Dynamic Weather Routes

    NASA Technical Reports Server (NTRS)

    Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien

    2014-01-01

    Dynamic weather routes are flight plan corrections that can provide airborne flights more than a user-specified number of minutes of flying-time savings, compared to their current flight plans. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region), while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been continuously running with live air traffic data for a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of the Dynamic Weather Routes automation as if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to the rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States, the Centers generally most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate an overall average savings of about 10 minutes of flying time per flight.

  19. Implicit gas-kinetic unified algorithm based on multi-block docking grid for multi-body reentry flows covering all flow regimes

    NASA Astrophysics Data System (ADS)

    Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu

    2016-12-01

    Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from the highly rarefied free-molecule and transition regimes to the continuum, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with Knudsen numbers ranging from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder and carefully compared with each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence properties. Flow problems including two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the present computed results are found in good agreement with related DSMC simulations and theoretical analysis solutions, which verifies the good accuracy and reliability of the present method. It is observed that the smaller the spacing between the bodies, the greater the obstruction of the cylindrical throat, the more markedly asymmetric the flow field around each single body, and the bigger the normal force coefficient. In the near-continuum transitional flow regime of near-space flight conditions, once the spacing increases to six times the diameter of a single body, the interference effects between the bodies tend to be negligible. The computing practice has confirmed that the present method is feasible for computing the aerodynamics and revealing the flow mechanisms around complex multi-body vehicles covering all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity distribution function equation.

  20. NCC: A Multidisciplinary Design/Analysis Tool for Combustion Systems

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Quealy, Angela

    1999-01-01

    A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Lewis Research Center (LeRC), and Pratt & Whitney (P&W). This development team operates under the guidance of the NCC steering committee. The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration.

  1. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists of truncating the conjugate gradient algorithm at a fixed, predetermined order, leading to a fixed computational cost, and can thus be considered "non-iterative." This makes it possible to derive analytical forces, avoiding the usual energy conservation (i.e., drift) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings of the usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making TCG a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
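
    The abstract gives no implementation, but the core idea is easy to illustrate: run a standard conjugate gradient on a symmetric positive-definite system (the induced-dipole equations take this form) and stop after a fixed number of steps, so the cost and the result are deterministic functions of the input. A minimal sketch under these assumptions, not the authors' code (all names are illustrative):

    ```python
    import numpy as np

    def truncated_cg(A, b, n_steps=3):
        """Solve A x = b approximately with a conjugate gradient truncated
        at a fixed, predetermined number of steps (TCG-n). Because the cost
        is fixed, the output is a smooth function of b -- the property that
        permits analytical forces without energy drift."""
        x = np.zeros_like(b)
        r = b - A @ x            # initial residual
        p = r.copy()             # initial search direction
        for _ in range(n_steps):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x

    # Toy usage: a random symmetric positive-definite "polarization matrix".
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)    # SPD by construction
    b = rng.standard_normal(50)
    mu = truncated_cg(A, b, n_steps=3)
    ```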

  2. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists of truncating the conjugate gradient algorithm at a fixed, predetermined order, leading to a fixed computational cost, and can thus be considered "non-iterative." This makes it possible to derive analytical forces, avoiding the usual energy conservation (i.e., drift) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings of the usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making TCG a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.

  3. Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J

    The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
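
    For readers unfamiliar with the mesh half of particle-particle particle-mesh (P3M/PPPM), the sketch below shows the basic reciprocal-space step: assign charges to a periodic grid and solve the Poisson equation with FFTs. This is a bare-bones illustration (nearest-grid-point assignment, Gaussian units), not the LAMMPS GPU kernel, which adds higher-order assignment, an optimized Green's function, and the short-range correction:

    ```python
    import numpy as np

    def mesh_potential(positions, charges, L, n=32):
        """Minimal particle-mesh step in a cubic periodic box of side L:
        nearest-grid-point charge assignment, then a Fourier-space Poisson
        solve (Gaussian units: lap(phi) = -4*pi*rho)."""
        rho = np.zeros((n, n, n))
        h = L / n
        idx = np.floor(positions / h).astype(int) % n
        for (i, j, k), q in zip(idx, charges):
            rho[i, j, k] += q / h**3          # charge density on the mesh
        k1d = 2 * np.pi * np.fft.fftfreq(n, d=h)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                     # avoid dividing by zero at k = 0
        phi_k = 4 * np.pi * np.fft.fftn(rho) / k2
        phi_k[0, 0, 0] = 0.0                  # neutralizing-background convention
        return np.real(np.fft.ifftn(phi_k))

    # Two opposite charges in a 10 x 10 x 10 box.
    pos = np.array([[2.5, 5.0, 5.0], [7.5, 5.0, 5.0]])
    phi = mesh_potential(pos, np.array([1.0, -1.0]), L=10.0)
    ```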

  4. Laboratory Sequence in Computational Methods for Introductory Chemistry

    NASA Astrophysics Data System (ADS)

    Cody, Jason A.; Wiser, Dawn C.

    2003-07-01

    A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.

  5. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    NASA Astrophysics Data System (ADS)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto-optimal solutions. This work focuses on the optimal multi-response evaluation of process parameters with respect to responses such as surface roughness (Ra), surface hardness (H), and tool vibration displacement amplitude (Vib) for tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate, and depth of cut are taken as the process parameters; brass workpieces are machined under dry conditions with high-speed steel end-mill cutters following a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the objectives 'Ra', 'H', and 'Vib' and identify the optimal multi-response combination of process parameters. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with those of another multi-response optimization technique, viz. grey relational analysis (GRA).
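
    The GRA baseline mentioned at the end has a compact standard recipe: normalize each response (larger- or smaller-the-better), compute grey relational coefficients against the ideal sequence, and average them into a grade per experiment. A minimal sketch with hypothetical data (the paper's actual measurements are not reproduced here):

    ```python
    import numpy as np

    def grey_relational_grade(Y, larger_better, zeta=0.5):
        """Grey relational analysis of an experiments-by-responses matrix Y:
        normalize each column, take deviations from the ideal sequence (1.0),
        form grey relational coefficients, and average per experiment."""
        Y = np.asarray(Y, dtype=float)
        norm = np.empty_like(Y)
        for j in range(Y.shape[1]):
            lo, hi = Y[:, j].min(), Y[:, j].max()
            if larger_better[j]:
                norm[:, j] = (Y[:, j] - lo) / (hi - lo)
            else:
                norm[:, j] = (hi - Y[:, j]) / (hi - lo)
        delta = 1.0 - norm                    # deviation from the ideal
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        return coeff.mean(axis=1)             # higher grade = better setting

    # Hypothetical Taguchi results: columns Ra, H, Vib (Ra and Vib smaller-better).
    Y = [[1.2, 180, 0.9], [0.8, 150, 0.6], [1.0, 170, 0.8], [0.7, 160, 0.5]]
    print(grey_relational_grade(Y, larger_better=[False, True, False]))
    ```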

  6. Kernel optimization for short-range molecular dynamics

    NASA Astrophysics Data System (ADS)

    Hu, Changjun; Wang, Xianmeng; Li, Jianjiang; He, Xinfu; Li, Shigang; Feng, Yangde; Yang, Shaofeng; Bai, He

    2017-02-01

    To optimize short-range force computations in Molecular Dynamics (MD) simulations, multi-threading and SIMD optimizations are presented in this paper. For multi-threading, a Partition-and-Separate-Calculation (PSC) method is designed to avoid the write conflicts caused by using Newton's third law. Serial bottlenecks are eliminated with no additional memory usage. The method is implemented using the OpenMP model. Furthermore, the PSC method is employed on Intel Xeon Phi coprocessors in both the native and offload models, and we evaluate its performance under different thread affinities on the MIC architecture. For SIMD execution, we analyze the performance impact of the "if-clause" of the cutoff-radius check within the PSC method. The experimental results show that our PSC method is more efficient than some traditional methods. In double precision, our 256-bit SIMD implementation is about 3 times faster than the scalar version.
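
    The write-conflict problem is worth a picture: with Newton's third law, computing the force on pair (i, j) writes to both particles, so two threads can race on the same particle. The decomposition idea (as we read the abstract) is to give each thread only pairs interior to its partition, and to handle straddling pairs in a separate pass. The paper's PSC is an OpenMP/C design; the pure-Python sketch below, with invented names, only illustrates the decomposition logic:

    ```python
    import numpy as np

    def forces_psc(x, pairs, owner, n_part, force_fn):
        """Split pair interactions into per-partition interior lists plus one
        boundary list. Each interior list could be processed by its own
        thread with no write conflicts (both Newton's-third-law writes stay
        inside the partition); boundary pairs get a separate, synchronized pass."""
        f = np.zeros_like(x)
        interior = [[] for _ in range(n_part)]
        boundary = []
        for i, j in pairs:
            (interior[owner[i]] if owner[i] == owner[j] else boundary).append((i, j))
        for part in interior:          # conflict-free: one thread per partition
            for i, j in part:
                fij = force_fn(x[i], x[j])
                f[i] += fij            # action ...
                f[j] -= fij            # ... and reaction, both local to the partition
        for i, j in boundary:          # separate pass for cross-partition pairs
            fij = force_fn(x[i], x[j])
            f[i] += fij
            f[j] -= fij
        return f

    # Toy usage: four particles on a line, two partitions, spring-like force.
    x = np.array([0.0, 1.0, 2.1, 3.0])
    f = forces_psc(x, [(0, 1), (1, 2), (2, 3)], [0, 0, 1, 1], 2,
                   lambda a, b: b - a)
    ```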

  7. 2017 ISCB Overton Prize: Christoph Bock

    PubMed Central

    Fogg, Christiana N.; Kovats, Diane E.; Berger, Bonnie

    2017-01-01

    The International Society for Computational Biology (ISCB) each year recognizes the achievements of an early to mid-career scientist with the Overton Prize. This prize honors the untimely death of Dr. G. Christian Overton, an admired computational biologist and founding ISCB Board member. Winners of the Overton Prize are independent investigators who are in the early to middle phases of their careers and are selected because of their significant contributions to computational biology through research, teaching, and service. ISCB is pleased to recognize Dr. Christoph Bock, Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences in Vienna, Austria, as the 2017 winner of the Overton Prize. Bock will deliver a keynote presentation at the 2017 International Conference on Intelligent Systems for Molecular Biology/European Conference on Computational Biology (ISMB/ECCB) in Prague, Czech Republic, held July 21-25, 2017. PMID:28713546

  8. 2017 ISCB Overton Prize: Christoph Bock.

    PubMed

    Fogg, Christiana N; Kovats, Diane E; Berger, Bonnie

    2017-01-01

    The International Society for Computational Biology (ISCB) each year recognizes the achievements of an early to mid-career scientist with the Overton Prize. This prize honors the untimely death of Dr. G. Christian Overton, an admired computational biologist and founding ISCB Board member. Winners of the Overton Prize are independent investigators who are in the early to middle phases of their careers and are selected because of their significant contributions to computational biology through research, teaching, and service. ISCB is pleased to recognize Dr. Christoph Bock, Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences in Vienna, Austria, as the 2017 winner of the Overton Prize. Bock will deliver a keynote presentation at the 2017 International Conference on Intelligent Systems for Molecular Biology/European Conference on Computational Biology (ISMB/ECCB) in Prague, Czech Republic, held July 21-25, 2017.

  9. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
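
    Two of the headline CPMIP-style measures are simple ratios of simulated time to machine time; the sketch below shows the arithmetic (function names and the example run are ours, not taken from the paper):

    ```python
    def sypd(simulated_years, runtime_hours):
        """Simulated years per wall-clock day: throughput of the model run."""
        return simulated_years / (runtime_hours / 24.0)

    def chsy(cores, simulated_years, runtime_hours):
        """Core-hours per simulated year: allocation cost of one model year."""
        return cores * runtime_hours / simulated_years

    # Hypothetical run: 10 simulated years in 50 wall-clock hours on 4096 cores.
    print(sypd(10, 50))        # 4.8 simulated years per day
    print(chsy(4096, 10, 50))  # 20480 core-hours per simulated year
    ```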

  10. A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation

    PubMed Central

    Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.

    1984-01-01

    A low-cost, microcomputer-based local area network for medical office automation is described, which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully potent workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through "Gateways" the network may serve as a front end for a large mainframe, or may interface with another network. The system provides for the medical office environment the expandability and flexibility of a multi-terminal mainframe system at a far lower cost, without sacrifice of performance.

  11. Algorithmic trends in computational fluid dynamics; The Institute for Computer Applications in Science and Engineering (ICASE)/LaRC Workshop, NASA Langley Research Center, Hampton, VA, US, Sep. 15-17, 1991

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)

    1993-01-01

    The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.

  12. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchic and intrinsically flexible; thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model in which, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.
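
    The first step of any such scheme is the mapping itself: collapsing each group of atoms onto one bead, typically at the mass-weighted center. A minimal sketch of that mapping (the grouping and names are ours; the abstract's interaction tuning and fluctuation matching are not shown):

    ```python
    import numpy as np

    def cg_beads(coords, masses, groups):
        """Map atomistic coordinates (N x 3) onto CG beads placed at the
        mass-weighted center of each atom group (on average 10-20 atoms)."""
        beads = []
        for idx in groups:
            m = masses[idx]
            beads.append((m[:, None] * coords[idx]).sum(axis=0) / m.sum())
        return np.array(beads)

    # Toy usage: six atoms mapped onto two beads of three atoms each.
    coords = np.arange(18, dtype=float).reshape(6, 3)
    masses = np.array([12.0, 1.0, 1.0, 12.0, 16.0, 1.0])
    print(cg_beads(coords, masses, [np.array([0, 1, 2]), np.array([3, 4, 5])]))
    ```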

  13. Multi-scale lung modeling.

    PubMed

    Tawhai, Merryn H; Bates, Jason H T

    2011-05-01

    Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.

  14. Molecular Dynamics of Hot Dense Plasmas: New Horizons

    NASA Astrophysics Data System (ADS)

    Graziani, Frank

    2011-10-01

    We describe the status of a new time-dependent simulation capability for hot dense plasmas. The backbone of this multi-institutional computational and experimental effort--the Cimarron Project--is the massively parallel molecular dynamics (MD) code ``ddcMD''. The project's focus is material conditions such as exist in inertial confinement fusion experiments and in many stellar interiors: high temperatures, high densities, significant electromagnetic fields, mixtures of high- and low-Z elements, and non-Maxwellian particle distributions. Of particular importance is our ability to incorporate into this classical MD code key atomic, radiative, and nuclear processes, so that their interacting effects under non-ideal plasma conditions can be investigated. This talk summarizes progress in computational methodology, discusses strengths and weaknesses of quantum statistical potentials as effective interactions for MD, explains the model used for quantum events possibly occurring in a collision, and highlights some significant results obtained to date. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  15. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    EPA Science Inventory

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries

    G. Charles Ostermeier1, David J. Dix2 and Stephen A. Krawetz1.
    1Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  16. SPERMATOZOAL RNA PROFILES OF NORMAL FERTILE MEN

    EPA Science Inventory

    What Constitutes the Normal Fertile Male?

    G. Charles Ostermeier1, David J. Dix2, David Miller3, Purvesh Khatri4, and Stephen A. Krawetz1.

    1Departments of Obstetrics and Gynaecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wa...

  17. Nanoparticles in practice for molecular-imaging applications: An overview.

    PubMed

    Padmanabhan, Parasuraman; Kumar, Ajay; Kumar, Sundramurthy; Chaudhary, Ravi Kumar; Gulyás, Balázs

    2016-09-01

    Nanoparticles (NPs) are playing a progressively more significant role in multimodal and multifunctional molecular imaging. Agents such as superparamagnetic iron oxide (SPIO), manganese oxide (MnO), gold NPs/nanorods, and quantum dots (QDs) possess specific properties, namely paramagnetism, superparamagnetism, surface plasmon resonance (SPR), and photoluminescence, respectively. These properties make them suitable for single- or multi-modal and single- or multi-functional molecular imaging. NPs generally have nanomolar to micromolar sensitivity and can be detected via imaging instrumentation. The distinctive characteristics of these NPs make them suitable for imaging, therapy, and drug delivery. Multifunctional nanoparticles (MNPs) can be produced either by modifying the shell or surface or by attaching an affinity ligand to the nanoparticle. They are utilized for targeted imaging by magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), positron emission tomography (PET), computed tomography (CT), photoacoustic imaging (PAI), two-photon or fluorescence imaging, ultrasound, and other modalities. The toxicity of NPs is also an important concern, and toxic effects must be eliminated. First-generation NPs have been designed, developed, and tested in living subjects, and a few are already in clinical use. In the near future, molecular imaging will advance with multimodality and multifunctionality to detect diseases such as cancer, neurodegenerative diseases, cardiac diseases, inflammation, stroke, and atherosclerosis in their early stages. In the current review, we discuss single- and multi-functional nanoparticles along with molecular imaging modalities. The present article reveals recent avenues for nanomaterials in multimodal and multifunctional molecular imaging through a review of the pertinent literature, emphasizing the distinctive characteristics of nanomaterials that make them suitable for biomedical imaging, therapy, and drug delivery, and pointing to the enabling technologies that will help plan, understand, and lead nanotechnology-related work. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  18. Molecular design for nonpolar chiral-axial quadratic nonlinear optics

    NASA Astrophysics Data System (ADS)

    Wiggers, Gregory A.

    In this thesis the hyperpolarizability of various multi-dimensional molecules is studied theoretically/computationally, with particular focus on the second-rank Kleinman-disallowed (KD) component of the hyperpolarizability. This component, which transforms as a second-rank traceless symmetric tensor, could be utilized in certain chiral-axial molecular alignment schemes to produce a bulk response. Nonpolar chiral-axial systems have been proposed in contrast to polar media, which utilize the vector component of the molecular hyperpolarizability and require parallel alignment of the molecular dipoles. Such parallel alignment of dipoles must be "frozen in" in order to overcome the natural tendency for dipoles to align anti-parallel. This limits the density of chromophores that can be loaded into a polar material. Nonpolar materials do not have such limits in theory. The two geometric classes of molecules that can most easily be incorporated into nonpolar chiral-uniaxial materials are propeller-shaped (C3 or D3 symmetry) and Λ-shaped (C2v symmetry). This work describes efforts to design molecules within these classes that would be suitable for bulk NLO materials. The sum-over-states (SOS) expression is used to model the molecular hyperpolarizability, and quantum chemical calculations, along with linear absorption data (when available) provide the necessary parameters to evaluate truncated forms of the SOS expression. A host of chemical and geometric modifications will be considered in order to elucidate important structure/function relationships. Also, the SOS model will be tested in some cases when experimental measurements (via Kleinman-disallowed hyper-Rayleigh scattering) are available. While a majority of this work focuses on multi-dimensional molecules, a small section deals with the question of optimizing the hyperpolarizability of a one-dimensional system. It is suggested that the recently-proposed idea of "modulated conjugation" as a means for improving intrinsic molecular hyperpolarizability is based on subtle misinterpretations of computational results. Even so, the concept of modulated conjugation may lead to improved hyperpolarizabilities and possible reasons are discussed.
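
    For orientation only: SOS analyses of this kind are often checked against the two-level (Oudar-Chemla) limit, in which the static first hyperpolarizability reduces, up to a convention-dependent prefactor, to

    $$\beta_{0}\;\propto\;\frac{\mu_{ge}^{2}\,\Delta\mu_{ge}}{E_{ge}^{2}},$$

    where $\mu_{ge}$ is the ground-to-excited transition dipole, $\Delta\mu_{ge}$ the dipole difference between the two states, and $E_{ge}$ the transition energy. This scalar limit is not the thesis's full treatment: the second-rank Kleinman-disallowed component studied here requires the complete multi-state tensor SOS expression, for which no such compact closed form applies.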

  19. Thermal Analysis of Magnetically-Coupled Pump for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Senocak, Inanc; Udaykumar, H. S.; Ndri, Narcisse; Francois, Marianne; Shyy, Wei

    1999-01-01

    A magnetically-coupled pump is under evaluation at Kennedy Space Center for possible cryogenic applications. A major concern is the impact of low-temperature fluid flows on pump performance. As a first step toward addressing this and related issues, a computational fluid dynamics and heat transfer tool has been applied to the pump geometry. The computational tool includes (i) a commercial grid generator to handle multiple grid blocks and complicated geometric definitions, and (ii) in-house computational fluid dynamics and heat transfer software developed in the Principal Investigator's group at the University of Florida. Both pure-conduction and combined convection-conduction computations have been conducted. A pure-conduction analysis gives insufficient information about the overall thermal distribution. Combined convection-conduction analysis indicates the significant influence of the coolant over the entire flow path. Since 2-D simulation is of limited help, future work on full 3-D modeling of the pump using multiple materials is needed. A comprehensive and accurate model can then be developed to account for the effects of multi-phase flow in the cooling flow loop and of the magnetic interactions.

  20. Biophysical Discovery through the Lens of a Computational Microscope

    NASA Astrophysics Data System (ADS)

    Amaro, Rommie

    With exascale computing power on the horizon, improvements in the underlying algorithms and available structural experimental data are enabling new paradigms for chemical discovery. My work has provided key insights for the systematic incorporation of structural information resulting from state-of-the-art biophysical simulations into protocols for inhibitor and drug discovery. We have shown that many disease targets have druggable pockets that are otherwise ``hidden'' in high resolution x-ray structures, and that this is a common theme across a wide range of targets in different disease areas. We continue to push the limits of computational biophysical modeling by expanding the time and length scales accessible to molecular simulation. My sights are set on, ultimately, the development of detailed physical models of cells, as the fundamental unit of life, and two recent achievements highlight our efforts in this arena. First is the development of a molecular and Brownian dynamics multi-scale modeling framework, which allows us to investigate drug binding kinetics in addition to thermodynamics. In parallel, we have made significant progress developing new tools to extend molecular structure to cellular environments. Collectively, these achievements are enabling the investigation of the chemical and biophysical nature of cells at unprecedented scales.

  1. Advances and challenges in logical modeling of cell cycle regulation: perspective for multi-scale, integrative yeast cell models

    PubMed Central

    Todd, Robert G.; van der Zee, Lucas

    2016-01-01

    The eukaryotic cell cycle is robustly designed, with interacting molecules organized within a definite topology that ensures temporal precision of its phase transitions. Its underlying dynamics are regulated by molecular switches, for which remarkable insights have been provided by genetic and molecular biology efforts. In a number of cases, this information has been made predictive through computational models. These models have allowed for the identification of novel molecular mechanisms, later validated experimentally. Logical modeling represents one of the youngest approaches to address cell cycle regulation. We summarize the advances that this type of modeling has achieved in reproducing and predicting cell cycle dynamics. Furthermore, we present the challenge that this type of modeling is now ready to tackle: its integration with intracellular networks, and its formalisms, to understand the crosstalks underlying systems-level properties, the ultimate aim of multi-scale models. Specifically, we discuss and illustrate how such an integration may be realized by integrating a minimal logical model of the cell cycle with a metabolic network. PMID:27993914

  2. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  3. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  4. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot handle large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborative parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarked on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  5. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  6. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    NASA Astrophysics Data System (ADS)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. Components of NHERI comprise a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses, and infrastructure lifelines from earthquakes, windstorms, tsunamis, and storm surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities of, and Year 1 coordinating activities between, NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Additionally to be discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  7. Multi-omics and metabolic modelling pipelines: challenges and tools for systems microbiology.

    PubMed

    Fondi, Marco; Liò, Pietro

    2015-02-01

    Integrated -omics approaches are quickly spreading across microbiology research labs, leading to (i) the possibility of detecting previously hidden features of microbial cells, like multi-scale spatial organization, and (ii) tracing molecular components across multiple cellular functional states. This promises to reduce the knowledge gap between genotype and phenotype and poses new challenges for computational microbiologists. We underline how the capability to unravel the complexity of microbial life will strongly depend on the integration of the huge and diverse amount of information that can be derived today from -omics experiments. In this work, we present opportunities and challenges of multi-omics data integration in current systems biology pipelines. We discuss which layers of biological information are important for biotechnological and clinical purposes, with a special focus on bacterial metabolism and modelling procedures. A general review of the most recent computational tools for performing large-scale dataset integration is also presented, together with a possible framework to guide the design of systems biology experiments by microbiologists. Copyright © 2015. Published by Elsevier GmbH.

  8. myPresto/omegagene: a GPU-accelerated molecular dynamics simulator tailored for enhanced conformational sampling methods with a non-Ewald electrostatic scheme.

    PubMed

    Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki

    2016-01-01

    Molecular dynamics (MD) is a promising computational approach to investigate the dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named "myPresto/omegagene" that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier space calculations altogether. The combination of these state-of-the-art techniques realizes efficient and accurate calculations of the conformational ensemble at an equilibrium state. By taking advantage of these features, myPresto/omegagene is specialized for single-process execution with a Graphics Processing Unit (GPU). We performed benchmark simulations for the 20-mer peptide Trp-cage with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/.

  9. Computer-Aided Diagnosis of Breast Cancer: A Multi-Center Demonstrator

    DTIC Science & Technology

    1998-10-01

    An Artificial Neural Network (ANN) approach to computer-aided diagnosis of breast cancer from mammographic findings. An ANN has been developed to provide support for the clinical decision to perform breast biopsy. The system is designed to aid in the decision to biopsy those patients who have suspicious mammographic findings. The decision to biopsy can be viewed as a two-stage process: 1) the mammographer views the mammogram and determines the presence or absence of image features such as calcifications and masses, 2) the presence and description of these features

  10. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.

    2011-01-01

    A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.

  11. A source-controlled data center network model.

    PubMed

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling, and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices suffer from restricted scalability, high cost, and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts a large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces computational complexity. 2) Vector switches (VS) deployed in the core network no longer rely on TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the scalability problem is solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We designed the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption of a VS is about 27% of that of an OFS.
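
    The paper's VA is a compact bit-level encoding, but the source-routing idea it rests on is easy to demonstrate: the address carries the sequence of output ports, each switch consumes the head of the sequence, and no table lookup (hence no TCAM) is needed along the path. A schematic sketch with an invented toy topology:

    ```python
    from collections import deque

    def forward(packet_va, network):
        """Forward a packet whose vector address is a list of output-port
        labels: each hop pops the head label and follows that port."""
        va = deque(packet_va)
        node = "ingress"
        path = [node]
        while va:
            port = va.popleft()           # label consumed at this hop
            node = network[node][port]    # follow the chosen output port
            path.append(node)
        return path

    # Hypothetical 3-hop topology: node -> {port: next node}.
    network = {
        "ingress": {0: "vs1", 1: "vs2"},
        "vs1":     {0: "vs3", 1: "vs2"},
        "vs3":     {0: "server"},
    }
    print(forward([0, 0, 0], network))   # ['ingress', 'vs1', 'vs3', 'server']
    ```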

  12. A source-controlled data center network model

    PubMed Central

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling, and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices suffer from restricted scalability, high cost, and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts a large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces computational complexity. 2) Vector switches (VS) deployed in the core network no longer rely on TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the scalability problem is solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We designed the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption of a VS is about 27% of that of an OFS. PMID:28328925

  13. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
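
    As a point of reference for what "de-noising a particle-averaged field" means in practice, the sketch below applies the simplest possible baseline, a moving-average box filter, to a synthetic noisy velocity profile. The paper evaluates far more elaborate methods (and hybrid combinations); this baseline, with hypothetical data, only illustrates the setting:

    ```python
    import numpy as np

    def moving_average(signal, window=11):
        """Box filter: a trivial de-noising baseline for binned particle data."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    # Synthetic noisy 1D velocity profile, a stand-in for MD/DPD bin averages.
    x = np.linspace(0.0, 1.0, 200)
    u_true = np.sin(2 * np.pi * x)            # underlying flow structure
    u_noisy = u_true + 0.3 * np.random.default_rng(1).standard_normal(x.size)
    u_filtered = moving_average(u_noisy)
    ```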

  14. Computing Protein-Protein Association Affinity with Hybrid Steered Molecular Dynamics.

    PubMed

    Rodriguez, Roberto A; Yu, Lili; Chen, Liao Y

    2015-09-08

    Computing protein-protein association affinities is one of the fundamental challenges in computational biophysics/biochemistry. The overwhelming amount of statistics in the phase space of very high dimensions cannot be sufficiently sampled even with today's high-performance computing power. In this article, we extend a potential of mean force (PMF)-based approach, the hybrid steered molecular dynamics (hSMD) approach we developed for ligand-protein binding, to protein-protein association problems. For a protein complex consisting of two protomers, P1 and P2, we choose m (≥3) segments of P1 whose m centers of mass are to be steered in a chosen direction and n (≥3) segments of P2 whose n centers of mass are to be steered in the opposite direction. The coordinates of these m + n centers constitute a phase space of 3(m + n) dimensions (3(m + n)D). All other degrees of freedom of the proteins, ligands, solvents, and solutes are freely subject to the stochastic dynamics of the all-atom model system. Conducting SMD along a line in this phase space, we obtain the 3(m + n)D PMF difference between two chosen states: one single state in the associated state ensemble and one single state in the dissociated state ensemble. This PMF difference is the first of four contributors to the protein-protein association energy. The second contributor is the 3(m + n - 1)D partial partition in the associated state accounting for the rotations and fluctuations of the (m + n - 1) centers while fixing one of the m + n centers of the P1-P2 complex. The two other contributors are the 3(m - 1)D partial partition of P1 and the 3(n - 1)D partial partition of P2 accounting for the rotations and fluctuations of their m - 1 or n - 1 centers while fixing one of the m/n centers of P1/P2 in the dissociated state. Each of these three partial partitions can be factored exactly into a 6D partial partition in multiplication with a remaining factor accounting for the small fluctuations while fixing three of the centers of P1, P2, or the P1-P2 complex, respectively. These small fluctuations can be well-approximated as Gaussian, and every 6D partition can be reduced in an exact manner to three problems of 1D sampling, counting the rotations and fluctuations around one of the centers as being fixed. We implement this hSMD approach to the Ras-RalGDS complex, choosing three centers on RalGDS and three on Ras (m = n = 3). At a computing cost of about 71.6 wall-clock hours using 400 computing cores in parallel, we obtained the association energy, -9.2 ± 1.9 kcal/mol on the basis of CHARMM 36 parameters, which well agrees with the experimental data, -8.4 ± 0.2 kcal/mol.
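
    One building block common to steered-MD schemes like this is the accumulation of mechanical work along the steering path, W = ∫ f · dx, whose realization averages enter the free-energy profile. The sketch below shows only that generic accumulation step (a simplified stand-in, not the authors' 3(m+n)D hSMD machinery):

    ```python
    import numpy as np

    def pulling_work(forces, positions):
        """Trapezoidal-rule work done while steering a collective coordinate:
        W = integral of the applied force over the steered displacement."""
        return np.trapz(np.asarray(forces), np.asarray(positions))

    # Toy example: constant force of 10 (arbitrary units) over a unit pull.
    print(pulling_work([10.0, 10.0, 10.0], [0.0, 0.5, 1.0]))  # 10.0
    ```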

  15. Teen Savvy, Web Literate, and Multi-Talented: New Authors and Their Debut Novels for Young Adults

    ERIC Educational Resources Information Center

    Clark, Ruth Cox

    2009-01-01

    Secondary school libraries often look much like research centers with banks of computers linked to databases related to curriculum topics. A glance into the library shows teens using online resources for research while trying to check their MySpace or Facebook page when they think the teacher or librarian is not looking. The librarian's weekly…

  16. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run singlethreaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  17. Computational Plume Modeling of Conceptual ARES Vehicle Stage Tests

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Ahuja, Vineet

    2007-01-01

    The plume-induced environment of a conceptual ARES V vehicle stage test at the NASA Stennis Space Center (NASA-SSC) was modeled using computational fluid dynamics (CFD). A full-scale multi-element grid was generated for the NASA-SSC B-2 test stand with the ARES V stage being located in a proposed off-center forward position. The plume produced by the ARES V main power plant (cluster of five RS-68 LOX/LH2 engines) was simulated using a multi-element flow solver - CRUNCH. The primary objective of this work was to obtain a fundamental understanding of the ARES V plume and its impingement characteristics on the B-2 flame-deflector. The location, size and shape of the impingement region were quantified along with the un-cooled deflector wall pressures, temperatures and incident heating rates. Issues with the proposed tests were identified and several of these addressed using the CFD methodology. The final results of this modeling effort will provide useful data and boundary conditions in upcoming engineering studies that are directed towards determining the required facility modifications for ensuring safe and reliable stage testing in support of the Constellation Program.

  18. The ToxCast Chemical Prioritization Program at the US EPA (UCLA Molecular Toxicology Program)

    EPA Science Inventory

    To meet the needs of chemical regulators reviewing large numbers of data-poor chemicals for safety, the EPA's National Center for Computational Toxicology is developing a means of efficiently testing thousands of compounds for potential toxicity. High-throughput bioactivity profi...

  19. Multi-Scale Modeling of a Graphite-Epoxy-Nanotube System

    NASA Technical Reports Server (NTRS)

    Frankland, S. J. V.; Riddick, J. C.; Gates, T. S.

    2005-01-01

    A multi-scale method is utilized to determine some of the constitutive properties of a three-component graphite-epoxy-nanotube system. This system is of interest because carbon nanotubes have been proposed as stiffening and toughening agents in the interlaminar regions of carbon fiber/epoxy laminates. The multi-scale method uses molecular dynamics simulation and equivalent-continuum modeling to compute three of the elastic constants of the graphite-epoxy-nanotube system: C11, C22, and C33. The 1-direction is along the nanotube axis, and the graphene sheets lie in the 1-2 plane. It was found that C11 is only 4% larger than C22. The nanotube therefore has a small but positive effect on the constitutive properties in the interlaminar region.
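
    However the atomistic stresses are obtained, a diagonal stiffness component like C11 reduces to a derivative of stress with respect to strain at zero strain. The sketch below shows that last arithmetic step with a stand-in linear response in place of an MD measurement (the number is illustrative, not from the paper):

    ```python
    def elastic_constant(stress_fn, eps=1e-3):
        """Central-difference estimate of a diagonal stiffness component,
        C = d(sigma)/d(epsilon) evaluated at zero strain."""
        return (stress_fn(eps) - stress_fn(-eps)) / (2 * eps)

    # Hypothetical stand-in for an MD stress measurement: linear response
    # with stiffness 950 (GPa) along the nanotube (1) axis.
    print(elastic_constant(lambda e: 950.0 * e))  # ~950.0
    ```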

  20. Benign Breast Disease: Toward Molecular Prediction of Breast Cancer Risk

    DTIC Science & Technology

    2006-06-01

    with benign breast disease (BBD) (1967-1991); 2) the application of potential biomarkers of risk to this archival tissue set; and 3) the discovery... of new, potentially relevant biomarkers of risk in fresh and frozen specimens of BBD. The Center includes a multi-institutional team of basic... State). I. Task 1: Establish Retrospective Cohort of BBD and Nested Case-Control Study. A. Complete cohort follow-up. We provide here an update of

  1. 3D Multi-Level Non-LTE Radiative Transfer for the CO Molecule

    NASA Astrophysics Data System (ADS)

    Berkner, A.; Schweitzer, A.; Hauschildt, P. H.

    2015-01-01

    The photospheres of cool stars are both rich in molecules and an environment where the assumption of LTE cannot be upheld under all circumstances. Unfortunately, detailed 3D non-LTE calculations involving molecules are hardly feasible with current computers. For this reason, we present our implementation of the super level technique, in which molecular levels are combined into super levels to reduce the number of unknowns in the rate equations and, thus, the computational effort and memory requirements involved, and show the results of our first tests against the 1D implementation of the same method.
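
    As a rough illustration of the super level idea, the following sketch contracts a toy fine-level rate matrix onto super levels, weighting the fine levels within each group by their Boltzmann factors. All energies, rates and groupings are invented for illustration; this is not the authors' code.

```python
# Hedged sketch: contracting fine molecular levels into super levels to
# shrink the rate matrix, assuming Boltzmann equilibrium inside each group.
import numpy as np

k_B = 0.695  # Boltzmann constant in cm^-1 / K
T = 3000.0   # temperature in K

E = np.array([0.0, 10.0, 2000.0, 2050.0, 2100.0])  # level energies (cm^-1)
rates = np.random.default_rng(0).random((5, 5))    # toy fine-level rates r_ij
groups = [[0, 1], [2, 3, 4]]                       # super-level membership

def contract(rates, E, groups, T):
    """Contract a fine-level rate matrix onto super levels."""
    n = len(groups)
    R = np.zeros((n, n))
    for I, gI in enumerate(groups):
        w = np.exp(-E[gI] / (k_B * T))   # Boltzmann weights inside group I
        w /= w.sum()
        for J, gJ in enumerate(groups):
            if I == J:
                continue
            # weighted total rate out of group I into group J
            R[I, J] = sum(w[a] * rates[i, j]
                          for a, i in enumerate(gI) for j in gJ)
    return R

print(contract(rates, E, groups, T))   # 2x2 super-level rate matrix
```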

  2. Multiscale Computational Analysis of Nitrogen and Oxygen Gas-Phase Thermochemistry in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Bender, Jason D.

    Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.
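
    The bookkeeping that turns QCT outcomes into rate data is compact enough to sketch: the reactive fraction of trajectories gives a cross section, which multiplied by the relative speed gives a monoenergetic rate coefficient. All counts and parameters below are illustrative, not results from the study.

```python
# Hedged sketch of QCT post-processing: reaction probability -> cross
# section -> monoenergetic rate coefficient. All numbers are toy values.
import math

b_max = 3.5e-10        # maximum sampled impact parameter (m)
E_col = 8.0e-19        # collision energy (J), roughly 5 eV
mu = 2.33e-26          # reduced mass of an N2 + N2 pair (kg)

n_traj = 1_000_000     # trajectories run at this energy
n_reactive = 4200      # trajectories ending in dissociation or exchange

# Reaction probability and cross section at this collision energy
P_r = n_reactive / n_traj
sigma = math.pi * b_max**2 * P_r            # m^2

# Relative speed and the monoenergetic rate coefficient k(E) = v * sigma;
# a thermal k(T) would average this over a Maxwell-Boltzmann distribution.
v_rel = math.sqrt(2.0 * E_col / mu)         # m/s
k_E = v_rel * sigma                         # m^3/s
print(f"sigma = {sigma:.3e} m^2, k(E) = {k_E:.3e} m^3/s")
```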

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raugei, Simone; DuBois, Daniel L.; Rousseau, Roger J.

    Rational design of molecular catalysts requires a systematic approach to designing ligands with specific functionality and precisely tailored electronic and steric properties. It then becomes possible to devise computer protocols to predict accurately the required properties and ultimately to design catalysts by computer. In this account we first review how thermodynamic properties such as oxidation-reduction potentials (E0), acidities (pKa), and hydride donor abilities (ΔGH-) form the basis for a systematic design of molecular catalysts for reactions that are critical for a secure energy future (hydrogen evolution and oxidation, oxygen and nitrogen reduction, and carbon dioxide reduction). We highlight how density functional theory allows us to determine and predict these properties within "chemical" accuracy (~ 0.06 eV for redox potentials, ~ 1 pKa unit for pKa values, and ~ 1.5 kcal/mol for hydricities). These quantities determine free energy maps and profiles associated with catalytic cycles, i.e. the relative energies of intermediates, and help us distinguish between desirable and high-energy pathways and mechanisms. Good catalysts have flat profiles that avoid high activation barriers due to low and high energy intermediates. We illustrate how the criterion of a flat energy profile lends itself to the prediction of design points by computer for optimum catalysts. This research was carried out in the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Pacific Northwest National Laboratory (PNNL) is operated for the DOE by Battelle.
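
    The thermodynamic relations behind this design scheme are standard and easy to sketch: a computed deprotonation free energy maps to a pKa, and a reduction free energy maps to a redox potential. The input numbers below are illustrative, not values from the paper.

```python
# Hedged sketch of the standard conversions from computed free energies to
# pKa and E0. Input free energies are illustrative placeholders.
import math

R = 8.314462         # J / (mol K)
T = 298.15           # K
F = 96485.33         # C / mol

# Free energy of deprotonation (kJ/mol) -> pKa = dG / (RT ln 10)
dG_deprot = 45.0
pKa = dG_deprot * 1e3 / (R * T * math.log(10.0))

# Free energy of reduction (kJ/mol) -> E0 = -dG / (nF), relative to
# whatever reference electrode the free energies were computed against.
dG_red = -120.0
n_electrons = 1
E0 = -dG_red * 1e3 / (n_electrons * F)   # volts

print(f"pKa ~ {pKa:.1f}, E0 ~ {E0:.2f} V")
```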

  4. An accelerated line-by-line option for MODTRAN combining on-the-fly generation of line center absorption within 0.1 cm-1 bins and pre-computed line tails

    NASA Astrophysics Data System (ADS)

    Berk, Alexander; Conforti, Patrick; Hawes, Fred

    2015-05-01

    A Line-By-Line (LBL) option is being developed for MODTRAN6. The motivation for this development is two-fold. Firstly, when MODTRAN is validated against an independent LBL model, it is difficult to isolate the source of discrepancies. One must verify consistency between pressure, temperature and density profiles, between column density calculations, between continuum and particulate data, between spectral convolution methods, and more. Introducing an LBL option directly within MODTRAN will ensure common elements for all calculations other than those used to compute molecular transmittances. The second motivation for the LBL upgrade is that it will enable users to compute high spectral resolution transmittances and radiances for the full range of current MODTRAN applications. In particular, introducing the LBL feature into MODTRAN will enable first-principle calculations of scattered radiances, an option that is often not readily available with LBL models. MODTRAN will compute LBL transmittances within one 0.1 cm-1 spectral bin at a time, marching through the full requested band pass. The LBL algorithm will use the highly accurate, pressure- and temperature-dependent MODTRAN Padé approximant fits of the contribution from line tails to define the absorption from all molecular transitions centered more than 0.05 cm-1 from each 0.1 cm-1 spectral bin. The beauty of this approach is that the on-the-fly computations for each 0.1 cm-1 bin will only require explicit LBL summing of transitions centered within a 0.2 cm-1 spectral region. That is, the contribution from the more distant lines will be pre-computed via the Padé approximants. The status of the LBL effort will be presented. This will include initial thermal and solar radiance calculations, validation calculations, and self-validations of the MODTRAN band model against its own LBL calculations.
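
    The described bin-marching scheme can be sketched as follows: lines centered within 0.1 cm-1 of a bin's center are summed explicitly (here with normalized Lorentzians standing in for the real line shapes), while all more distant lines enter through a pre-computed tail term standing in for the Padé-approximant line tails. All line parameters below are toy values, not MODTRAN data.

```python
# Hedged sketch of per-bin line-by-line summation with pre-computed tails.
import numpy as np

BIN = 0.1  # bin width in cm^-1

line_centers = np.array([1000.03, 1000.07, 1000.31, 1000.52])  # cm^-1
line_strengths = np.array([1.0, 0.5, 2.0, 0.8])
gamma = 0.02  # Lorentzian half width (cm^-1)

def tail_absorption(nu):
    # Stand-in for the pre-computed contribution of all distant line tails.
    return 1e-3

def bin_absorption(bin_lo, n_sub=32):
    """Monochromatic absorption on a sub-grid of one 0.1 cm^-1 bin."""
    nu = bin_lo + (np.arange(n_sub) + 0.5) * BIN / n_sub
    k = np.array([tail_absorption(v) for v in nu])
    # Explicit sum over lines centered within 0.1 cm^-1 of the bin center,
    # i.e. the 0.2 cm^-1 window described in the abstract.
    near = np.abs(line_centers - (bin_lo + BIN / 2)) <= BIN
    for nu0, S in zip(line_centers[near], line_strengths[near]):
        k += S * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
    return nu, k

# March through the requested band pass one bin at a time.
for lo in np.arange(1000.0, 1000.6, BIN):
    nu, k = bin_absorption(lo)
    print(f"bin [{lo:.1f}, {lo + BIN:.1f}): mean k = {k.mean():.3f}")
```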

  5. Better, Cheaper, Faster Molecular Dynamics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    Recent, revolutionary progress in genomics and structural, molecular and cellular biology has created new opportunities for molecular-level computer simulations of biological systems by providing vast amounts of data that require interpretation. These opportunities are further enhanced by the increasing availability of massively parallel computers. For many problems, the method of choice is classical molecular dynamics (iteratively solving Newton's equations of motion). It focuses on two main objectives. One is to calculate the relative stability of different states of the system. A typical problem that has such an objective is computer-aided drug design. Another common objective is to describe the evolution of the system towards a low-energy (possibly the global minimum energy) "native" state. Perhaps the best example of such a problem is protein folding. Both types of problems share the same difficulty. Often, different states of the system are separated by high energy barriers, which implies that transitions between these states are rare events. This, in turn, can greatly impede exploration of phase space. In some instances this can lead to "quasi non-ergodicity", whereby a part of phase space is inaccessible on the time scales of the simulation. To overcome this difficulty and to extend molecular dynamics to "biological" time scales (millisecond or longer), new physical formulations and new algorithmic developments are required. To be efficient they should account for natural limitations of multi-processor computer architectures. I will present work along these lines done in my group. In particular, I will focus on a new approach to calculating the free energies (stability) of different states and to overcoming "the curse of rare events". I will also discuss algorithmic improvements to multiple time step methods and to the treatment of slowly decaying, long-ranged electrostatic effects.
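
    For concreteness, the iterative solution of Newton's equations mentioned above is typically done with the velocity Verlet scheme; a minimal sketch follows, with a harmonic force as a toy stand-in for a real force field.

```python
# Minimal velocity-Verlet integrator, the standard scheme for classical MD.
# The harmonic force is illustrative only.
import numpy as np

def force(x, k=1.0):
    return -k * x  # toy harmonic potential

def velocity_verlet(x, v, m, dt, nsteps):
    f = force(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m        # first half kick
        x += dt * v                  # drift
        f = force(x)                 # recompute forces at new positions
        v += 0.5 * dt * f / m        # second half kick
    return x, v

x, v = velocity_verlet(np.array([1.0]), np.array([0.0]), 1.0, 0.01, 1000)
print(x, v)  # stays on the harmonic orbit to good accuracy
```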

  6. Region-specific network plasticity in simulated and living cortical networks: comparison of the center of activity trajectory (CAT) with other statistics

    NASA Astrophysics Data System (ADS)

    Chao, Zenas C.; Bakkum, Douglas J.; Potter, Steve M.

    2007-09-01

    Electrically interfaced cortical networks cultured in vitro can be used as a model for studying the network mechanisms of learning and memory. Lasting changes in functional connectivity have been difficult to detect with extracellular multi-electrode arrays using standard firing rate statistics. We used both simulated and living networks to compare the ability of various statistics to quantify functional plasticity at the network level. Using a simulated integrate-and-fire neural network, we compared five established statistical methods to one of our own design, called center of activity trajectory (CAT). CAT, which depicts dynamics of the location-weighted average of spatiotemporal patterns of action potentials across the physical space of the neuronal circuitry, was the most sensitive statistic for detecting tetanus-induced plasticity in both simulated and living networks. By reducing the dimensionality of multi-unit data while still including spatial information, CAT allows efficient real-time computation of spatiotemporal activity patterns. Thus, CAT will be useful for studies in vivo or in vitro in which the locations of recording sites on multi-electrode probes are important.
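
    The CAT statistic itself is simple to compute: per time bin, the firing-rate-weighted average of electrode positions. A minimal sketch, with an illustrative 8x8 array layout and synthetic spike counts in place of recorded data:

```python
# Hedged sketch of the center of activity trajectory (CAT): the
# location-weighted average of activity across the electrode array.
import numpy as np

rng = np.random.default_rng(0)

# 8x8 multi-electrode array; positions in electrode-pitch units
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
positions = np.column_stack([xs.ravel(), ys.ravel()])  # (64, 2)

# Spike counts per electrode per time bin (toy data)
counts = rng.poisson(lam=2.0, size=(50, 64))           # (bins, electrodes)

def center_of_activity(counts, positions):
    """Firing-rate-weighted average position for each time bin."""
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1                            # avoid divide-by-zero
    return counts @ positions / totals                 # (bins, 2) trajectory

cat = center_of_activity(counts, positions)
print(cat[:5])  # first five points of the CAT
```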

  7. 75 FR 1064 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... 20892, 301-435-1033, [email protected]. Name of Committee: Genes, Genomes, and Genetics Integrated Review Group; Molecular Genetics B Study Section. Date: February 3-4, 2010. Time: 7 p.m. to 6 p.m. Agenda... Committee: Genes, Genomes, and Genetics Integrated Review Group; Genomics, Computational Biology and...

  8. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
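
    The finite-difference route to coupled sensitivities treats the converged aeroelastic analysis as a black box and perturbs each design variable in turn; a minimal sketch, with a toy response function standing in for the coupled analysis:

```python
# Hedged sketch of finite-difference sensitivities: central differences of a
# black-box coupled response. The response function here is a toy stand-in.
import numpy as np

def coupled_response(x):
    # Stand-in for a converged aerodynamic/structural analysis returning,
    # e.g., a stress measure as a function of design variables x.
    return np.sin(x[0]) * x[1] ** 2 + x[0] * x[1]

def central_diff_gradient(f, x, h=1e-5):
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2 * h)  # one re-analysis per perturbation
    return g

x0 = np.array([0.3, 2.0])
print(central_diff_gradient(coupled_response, x0))
```

    The cost noted in the abstract is visible here: every design variable requires two full coupled re-analyses, which is what motivates analytic sensitivity formulations.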

  9. Materials-by-design: computation, synthesis, and characterization from atoms to structures

    NASA Astrophysics Data System (ADS)

    Yeo, Jingjie; Jung, Gang Seob; Martín-Martínez, Francisco J.; Ling, Shengjie; Gu, Grace X.; Qin, Zhao; Buehler, Markus J.

    2018-05-01

    In the 50 years that followed Richard Feynman's exposition of the idea that there is 'plenty of room at the bottom' for manipulating individual atoms for the synthesis and manufacturing processing of materials, the materials-by-design paradigm has been developed gradually through synergistic integration of experimental material synthesis and characterization with predictive computational modeling and optimization. This paper reviews how this paradigm creates the possibility of developing materials according to specific, rational designs from the molecular to the macroscopic scale. We discuss promising techniques in experimental small-scale material synthesis and large-scale fabrication methods to manipulate atomistic or macroscale structures, which can be designed by computational modeling. These include recombinant protein technology to produce peptides and proteins with tailored sequences encoded by recombinant DNA, self-assembly processes induced by conformational transition of proteins, additive manufacturing for designing complex structures, and qualitative and quantitative characterization of materials at different length scales. We describe important material characterization techniques using numerous methods of spectroscopy and microscopy. We detail numerous multi-scale computational modeling techniques that complement these experimental techniques: DFT at the atomistic scale; fully atomistic and coarse-grained molecular dynamics at the molecular to mesoscale; continuum modeling at the macroscale. Additionally, we present case studies that utilize experimental and computational approaches in an integrated manner to broaden our understanding of the properties of two-dimensional materials and materials based on silk and silk-elastin-like proteins.

  10. An application of the Multi-Purpose System Simulation (MPSS) model to the Monitor and Control Display System (MACDS) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC)

    NASA Technical Reports Server (NTRS)

    Mill, F. W.; Krebs, G. N.; Strauss, E. S.

    1976-01-01

    The Multi-Purpose System Simulator (MPSS) model was used to investigate whether the current and projected configurations of the Monitor and Control Display System (MACDS) at the Goddard Space Flight Center could process and display launch data adequately. MACDS consists of two interconnected minicomputers with associated terminal input and display output equipment and a disk-stored data base. Three configurations of MACDS were evaluated via MPSS and their performances ascertained. First, the current version of MACDS was found inadequate to handle projected launch data loads because of unacceptable data backlogging. Second, the current MACDS hardware with enhanced software was capable of handling twice the anticipated data loads. Third, an upgraded hardware ensemble combined with the enhanced software was capable of handling four times the anticipated data loads.

  11. Management of venous thromboembolism in patients with acute leukemia at high bleeding risk: a multi-center study.

    PubMed

    Napolitano, Mariasanta; Valore, Luca; Malato, Alessandra; Saccullo, Giorgia; Vetro, Calogero; Mitra, Maria Enza; Fabbiano, Francesco; Mannina, Donato; Casuccio, Alessandra; Lucchesi, Alessandro; Del Principe, Maria Ilaria; Candoni, Anna; Di Raimondo, Francesco; Siragusa, Sergio

    2016-01-01

    In the last decades, clinically relevant thrombotic complications in patients with acute leukemia (AL) have been poorly investigated. The authors performed a multi-center study to evaluate the management of symptomatic venous thromboembolism (VTE) in adult patients with AL. The following were considered clinically relevant: symptomatic venous thrombosis (VT) occurring in typical (lower limbs) and atypical (cerebral, upper limbs, abdominal, etc.) sites, with or without pulmonary embolism (PE). In a population of 1461 patients with AL, 22 cases of symptomatic VTE were recorded in hospitalized patients with a mean age of 54.6 years. The absolute incidence of VTE was 1.5%. VTE occurred during chemotherapy in 17/22 (77.2%) cases, mainly (14/17, 82.3%) during the induction phase. Treatment of acute VTE was based on Low Molecular Weight Heparin (LMWH) at full dosage for the first month from diagnosis and reduced dosage (75%) for the following months.

  12. Molecular motions that shape the cardiac action potential: Insights from voltage clamp fluorometry.

    PubMed

    Zhu, Wandi; Varga, Zoltan; Silva, Jonathan R

    2016-01-01

    Very recently, voltage-clamp fluorometry (VCF) protocols have been developed to observe the membrane proteins responsible for carrying the ventricular ionic currents that form the action potential (AP), including those carried by the cardiac Na(+) channel, NaV1.5, the L-type Ca(2+) channel, CaV1.2, the Na(+)/K(+) ATPase, and the rapid and slow components of the delayed rectifier, KV11.1 and KV7.1. This development is significant, because VCF enables simultaneous observation of ionic current kinetics with conformational changes occurring within specific channel domains. The ability gained from VCF, to connect nanoscale molecular movement to ion channel function has revealed how the voltage-sensing domains (VSDs) control ion flux through channel pores, mechanisms of post-translational regulation and the molecular pathology of inherited mutations. In the future, we expect that this data will be of great use for the creation of multi-scale computational AP models that explicitly represent ion channel conformations, connecting molecular, cell and tissue electrophysiology. Here, we review the VCF protocol, recent results, and discuss potential future developments, including potential use of these experimental findings to create novel computational models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Third-Kind Encounters in Biomedicine: Immunology Meets Mathematics and Informatics to Become Quantitative and Predictive.

    PubMed

    Eberhardt, Martin; Lai, Xin; Tomar, Namrata; Gupta, Shailendra; Schmeck, Bernd; Steinkasserer, Alexander; Schuler, Gerold; Vera, Julio

    2016-01-01

    The understanding of the immune response is right now at the center of biomedical research. There are growing expectations that immune-based interventions will in the midterm provide new, personalized, and targeted therapeutic options for many severe and highly prevalent diseases, from aggressive cancers to infectious and autoimmune diseases. To this end, immunology should surpass its current descriptive and phenomenological nature and become quantitative, and thereby predictive. Immunology is an ideal field for deploying the tools, methodologies, and philosophy of systems biology, an approach that combines quantitative experimental data, computational biology, and mathematical modeling. This is because, from an organism-wide perspective, immunity is a biological system of systems, a paradigmatic instance of a multi-scale system. At the molecular scale, the critical phenotypic responses of immune cells are governed by large biochemical networks, enriched in nested regulatory motifs such as feedback and feedforward loops. This network complexity confers on them the capacity for highly nonlinear behavior, including remarkable examples of homeostasis, ultra-sensitivity, hysteresis, and bistability. Moving up from the cellular level, different immune cell populations communicate with each other by direct physical contact or by receiving and secreting signaling molecules such as cytokines. Moreover, the interaction of the immune system with its potential targets (e.g., pathogens or tumor cells) is far from simple, as it involves a number of attack and counterattack mechanisms that ultimately constitute a tightly regulated multi-feedback loop system. From a more practical perspective, this leads to the consequence that today's immunologists are facing an ever-increasing challenge of integrating massive quantities of data from multiple platforms. In this chapter, we support the idea that the analysis of the immune system demands the use of systems-level approaches to ensure success in the search for more effective and personalized immune-based therapies.

  14. Molecular details of dimerization kinetics reveal negligible populations of transient µ-opioid receptor homodimers at physiological concentrations.

    PubMed

    Meral, Derya; Provasi, Davide; Prada-Gracia, Diego; Möller, Jan; Marino, Kristen; Lohse, Martin J; Filizola, Marta

    2018-05-16

    Various experimental and computational techniques have been employed over the past decade to provide structural and thermodynamic insights into G Protein-Coupled Receptor (GPCR) dimerization. Here, we use multiple microsecond-long, coarse-grained, biased and unbiased molecular dynamics simulations (a total of ~4 milliseconds) combined with multi-ensemble Markov state models to elucidate the kinetics of homodimerization of a prototypic GPCR, the µ-opioid receptor (MOR), embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC)/cholesterol lipid bilayer. Analysis of these computations identifies kinetically distinct macrostates comprising several different short-lived dimeric configurations of either inactive or activated MOR. Calculated kinetic rates and fractions of dimers at different MOR concentrations suggest a negligible population of MOR homodimers at physiological concentrations, which is supported by acceptor photobleaching fluorescence resonance energy transfer (FRET) experiments. This study provides a rigorous, quantitative explanation for some conflicting experimental data on GPCR oligomerization.
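
    A core ingredient of such analyses, estimating a Markov state model from discretized trajectories, can be sketched in a few lines: count transitions at a chosen lag time, row-normalize, and read relaxation timescales off the eigenvalues. The state sequence below is synthetic, and this plain estimator stands in for the authors' multi-ensemble method.

```python
# Hedged sketch of Markov state model estimation from a discretized
# trajectory; the state sequence is synthetic toy data.
import numpy as np

rng = np.random.default_rng(1)
n_states, lag = 4, 10
traj = rng.integers(0, n_states, size=10_000)   # discretized trajectory

def transition_matrix(traj, n_states, lag):
    counts = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # row-stochastic

T = transition_matrix(traj, n_states, lag)
# Implied relaxation timescale from the second-largest eigenvalue magnitude
evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
print("slowest implied timescale:", -lag / np.log(evals[1]), "steps")
```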

  15. Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis.

    PubMed

    Gao, Yurui; Burns, Scott S; Lauzon, Carolyn B; Fong, Andrew E; James, Terry A; Lubar, Joel F; Thatcher, Robert W; Twillie, David A; Wirt, Michael D; Zola, Marc A; Logan, Bret W; Anderson, Adam W; Landman, Bennett A

    2013-03-29

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  16. Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-03-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  17. Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis

    PubMed Central

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-01-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software. PMID:24386548

  18. Quantum optical emulation of molecular vibronic spectroscopy using a trapped-ion device.

    PubMed

    Shen, Yangchao; Lu, Yao; Zhang, Kuan; Zhang, Junhua; Zhang, Shuaining; Huh, Joonsuk; Kim, Kihwan

    2018-01-28

    Molecules are among the most demanding quantum systems to simulate on quantum computers, due to their complexity and the emergent role of their quantum nature. The recent theoretical proposal of Huh et al. (Nature Photon., 9, 615 (2015)) showed that a multi-photon network with a Gaussian input state can simulate a molecular spectroscopic process. Here, we present the first quantum device that generates a molecular spectroscopic signal with the phonons in a trapped-ion system, using SO2 as an example. In order to perform reliable Gaussian sampling, we develop the essential experimental technology with phonons, which includes the phase-coherent manipulation of displacement, squeezing, and rotation operations with multiple modes in a single realization. The required quantum optical operations are implemented through Raman laser beams. The molecular spectroscopic signal is reconstructed from the collective projection measurements on the two phonon modes. Our experimental demonstration will pave the way to large-scale molecular quantum simulations, which are classically intractable but would be easily verifiable by real molecular spectroscopy.

  19. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at the Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The ASTEC software has been under development for the last three years and is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 1990s and how it relates to ASTEC.

  20. Sensitivity subgroup analysis based on single-center vs. multi-center trial status when interpreting meta-analyses pooled estimates: the logical way forward.

    PubMed

    Alexander, Paul E; Bonner, Ashley J; Agarwal, Arnav; Li, Shelly-Anne; Hariharan, Abishek; Izhar, Zain; Bhatnagar, Neera; Alba, Carolina; Akl, Elie A; Fei, Yutong; Guyatt, Gordon H; Beyene, Joseph

    2016-06-01

    Prior studies regarding whether single-center trial estimates are larger than multi-center estimates are equivocal. We examined the extent to which single-center trials yield systematically larger effects than multi-center trials. We searched the 119 core clinical journals and the Cochrane Database of Systematic Reviews for meta-analyses (MAs) of randomized controlled trials (RCTs) published during 2012. In this meta-epidemiologic study, for binary outcomes we computed the pooled ratio of odds ratios (RORs), and for continuous outcomes the mean difference in standardized mean differences (SMDs); we conducted weighted random-effects meta-regression and random-effects MA modeling. Our primary analyses were restricted to MAs that included at least five RCTs and in which at least 25% of the studies used each of the single-center (SC) and multi-center (MC) designs. We identified 81 MAs for the odds ratio (OR) and 43 for the SMD outcome measures. Based on our analytic plan, our primary (core) analysis covered 25 MAs/241 RCTs (binary outcome) and 18 MAs/173 RCTs (continuous outcome). In the core analysis, we found no difference in magnitude of effect between SC and MC trials for binary outcomes [ROR: 1.02; 95% confidence interval (CI): 0.83, 1.24; I(2) 20.2%]. Effect sizes were systematically larger for SC than MC trials for the continuous outcome measure (mean difference in SMDs: -0.13; 95% CI: -0.21, -0.05; I(2) 0%). Our results do not support prior findings of larger effects in SC than MC trials addressing binary outcomes, but show a similarly small increase in effect in SC relative to MC trials addressing continuous outcomes. Authors of systematic reviews would be wise to include all trials irrespective of SC vs. MC design and to address SC vs. MC status as a possible explanation of heterogeneity (and consider sensitivity analyses). Copyright © 2015 Elsevier Inc. All rights reserved.
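
    The random-effects pooling used in such meta-epidemiologic analyses is commonly the DerSimonian-Laird estimator; a minimal sketch over illustrative log-ROR values (not the study's data) follows.

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling of log ratios
# of odds ratios. Effect estimates and variances are illustrative.
import numpy as np

log_ror = np.array([0.10, -0.05, 0.20, 0.02, -0.12])  # log ratio of ORs
var = np.array([0.02, 0.03, 0.05, 0.01, 0.04])        # their variances

def dersimonian_laird(y, v):
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - fixed) ** 2)                  # heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                     # between-study variance
    w_re = 1.0 / (v + tau2)
    est = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return est, se, tau2

est, se, tau2 = dersimonian_laird(log_ror, var)
print(f"pooled ROR = {np.exp(est):.2f} "
      f"(95% CI {np.exp(est - 1.96 * se):.2f}-{np.exp(est + 1.96 * se):.2f})")
```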

  1. How to Store Energy Fast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustyn, Veronica; Ko, Jesse; Rauda, Iris

    Representing the Molecularly Engineered Energy Materials (MEEM) center, this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE energy. The mission of MEEM is to use inexpensive, custom-designed molecular building blocks to create revolutionary new materials with self-assembled multi-scale architectures that will enable high-performing energy generation and storage applications.

  2. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS on supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  3. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head-mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  4. Computer network access to scientific information systems for minority universities

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  5. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.

  6. Development of Multi-Disciplinary Finite Element Method Analysis Courses at California State University, Los Angeles

    NASA Technical Reports Server (NTRS)

    McKinney, John; Wu, Chivey

    1998-01-01

    The NASA Dryden Flight Research Center (DFRC) Partnership Awards Grant to California State University, Los Angeles (CSULA) has two primary goals that help to achieve NASA objectives. The overall objectives of the NASA Partnership Awards are to create opportunities for joint University/NASA Government sponsored research and related activities. One of the goals of the grant is to have university faculty researchers participate in and contribute to the development of NASA technology that supports NASA goals for research and development (R&D) in Aeronautics and Astronautics. The other goal is technology transfer in the other direction, where NASA developed technology is made available to the general public and, more specifically, targeted to industries that can profit from utilization of government developed technology. This year's NASA Dryden Partnership Awards grant to CSULA, entitled "Computer Simulation of Multi-Disciplinary Engineering Systems", has two major tasks that satisfy overall NASA objectives. The first task conducts basic and applied research that contributes to technology development at the Dryden Flight Research Center. The second part of the grant provides for dissemination of NASA developed technology by using the teaching environment created in the CSULA classroom. The second task, and how it is accomplished, is the topic of this paper. The NASA STARS (Structural Analysis Routines) computer simulation program is used at the Dryden center to support flight testing of high-performance experimental aircraft and to conduct research and development of new and advanced Aerospace technology.

  7. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight connected subgraph containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems with multi-lateral path solutions, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Development of a COTS-Based Computing Environment Blueprint Application at KSC

    NASA Technical Reports Server (NTRS)

    Ghansah, Isaac; Boatright, Bryan

    1996-01-01

    This paper describes a blueprint that can be used for developing a distributed computing environment (DCE) for NASA in general, and the Kennedy Space Center (KSC) in particular. A comprehensive, open, secure, integrated, multi-vendor DCE such as OSF DCE has been suggested. Design issues, as well as recommendations for each component, have been given. Where necessary, modifications were suggested to fit the needs of KSC; this was done in the areas of security and directory services. Readers requiring more comprehensive coverage are encouraged to refer to the eight-chapter document prepared for this work.

  9. Complex supramolecular interfacial tessellation through convergent multi-step reaction of a dissymmetric simple organic precursor

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Qi; Paszkiewicz, Mateusz; Du, Ping; Zhang, Liding; Lin, Tao; Chen, Zhi; Klyatskaya, Svetlana; Ruben, Mario; Seitsonen, Ari P.; Barth, Johannes V.; Klappenberger, Florian

    2018-03-01

    Interfacial supramolecular self-assembly represents a powerful tool for constructing regular and quasicrystalline materials. In particular, complex two-dimensional molecular tessellations, such as semi-regular Archimedean tilings with regular polygons, promise unique properties related to their nontrivial structures. However, their formation is challenging, because current methods are largely limited to the direct assembly of precursors, that is, where structure formation relies on molecular interactions without using chemical transformations. Here, we have chosen ethynyl-iodophenanthrene (which features dissymmetry in both geometry and reactivity) as a single starting precursor to generate the rare semi-regular (3.4.6.4) Archimedean tiling with long-range order on an atomically flat substrate through a multi-step reaction. Intriguingly, the individual chemical transformations converge to form a symmetric alkynyl-Ag-alkynyl complex as the new tecton in high yields. Using a combination of microscopy and X-ray spectroscopy tools, as well as computational modelling, we show that in situ generated catalytic Ag complexes mediate the tecton conversion.

  10. Multi-charge-state molecular dynamics and self-diffusion coefficient in the warm dense matter regime

    NASA Astrophysics Data System (ADS)

    Fu, Yongsheng; Hou, Yong; Kang, Dongdong; Gao, Cheng; Jin, Fengtao; Yuan, Jianmin

    2018-01-01

    We present a multi-ion molecular dynamics (MIMD) simulation and apply it to calculating the self-diffusion coefficients of ions with different charge-states in the warm dense matter (WDM) regime. First, the method is used for the self-consistent calculation of electron structures of different charge-state ions in the ion sphere, with the ion-sphere radii being determined by the plasma density and the ion charges. The ionic fraction is then obtained by solving the Saha equation, taking account of interactions among different charge-state ions in the system, and ion-ion pair potentials are computed using the modified Gordon-Kim method in the framework of temperature-dependent density functional theory on the basis of the electron structures. Finally, MIMD is used to calculate ionic self-diffusion coefficients from the velocity correlation function according to the Green-Kubo relation. A comparison with the results of the average-atom model shows that different statistical processes will influence the ionic diffusion coefficient in the WDM regime.
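
    The final Green-Kubo step is straightforward to sketch: the self-diffusion coefficient is one third of the time integral of the velocity autocorrelation function. The velocities below are synthetic stand-ins for MIMD trajectory output.

```python
# Hedged sketch of the Green-Kubo self-diffusion coefficient from the
# velocity autocorrelation function (VACF); velocities are synthetic.
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3                                  # frame spacing (arbitrary units)
v = rng.normal(size=(5000, 64, 3))         # (frames, particles, xyz)

def vacf(v, max_lag=500):
    """<v(0) . v(t)> averaged over particles and time origins."""
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = np.mean(np.sum(v[: len(v) - lag] * v[lag:], axis=-1))
    return c

c = vacf(v)
# Green-Kubo relation: D = (1/3) * integral of the VACF over time
D = np.trapz(c, dx=dt) / 3.0
print(f"D = {D:.4e}")
```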

  11. Gualou Guizhi decoction reverses brain damage with cerebral ischemic stroke, multi-component directed multi-target to screen calcium-overload inhibitors using combination of molecular docking and protein-protein docking.

    PubMed

    Hu, Juan; Pang, Wen-Sheng; Han, Jing; Zhang, Kuan; Zhang, Ji-Zhou; Chen, Li-Dian

    2018-12-01

    Stroke is one of the leading causes of mortality and disability across the world, but the curative effects of current drugs look less compelling; intracellular calcium overload is considered to be a key pathologic factor for ischemic stroke. Gualou Guizhi decoction (GLGZD), a classical Chinese medicine compound prescription, has been used in human clinical therapy of the sequelae of cerebral ischemic stroke for 10 years. This work showed that the improved GLGZD prescription counteracted intracellular calcium overload and decreased the concentration of [Ca 2+ ] i in cortex and striatum neurons of MCAO rats. GLGZD contains Trichosanthin and various small molecules; these potential active ingredients, directed against the NR2A, NR2B, FKBP12 and Calmodulin target proteins/enzymes, were screened by computer simulation. "Multicomponent systems" are capable of creating pharmacological superposition effects. Chinese medicine compound prescriptions can thus be considered promising sources of candidates for the discovery of new agents.

  12. Computational Modeling of Multi-Scale Material Features in Cement Paste - An Overview

    DTIC Science & Technology

    2015-05-25

    and concrete; though commonly used, they are among the most complex of materials in terms of morphology and structure, for example...across the multiple scales are required. In this paper, recent work from our research group on the nano to continuum level modeling of cementitious...of our research work consisting of: • Molecular Dynamics (MD) modeling for the nano scale features of the cementitious material chemistry. • Micro

  13. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    PubMed

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
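
    The exchange step at the heart of T-REMD, one of the algorithms listed above, is a Metropolis test on neighboring replicas' potential energies; a minimal sketch with illustrative temperatures and energies (not GENESIS code) follows.

```python
# Hedged sketch of the temperature-REMD exchange criterion: neighboring
# replicas swap with Metropolis probability exp(-delta), where
# delta = (1/kT_i - 1/kT_j) * (E_j - E_i). Energies here are toy values.
import math
import random

k_B = 0.0019872  # kcal/(mol K)

def try_exchange(E_i, T_i, E_j, T_j):
    """Metropolis acceptance for swapping replicas i and j."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_j - E_i)
    return delta <= 0.0 or random.random() < math.exp(-delta)

temps = [300.0, 310.0, 320.0, 330.0]
energies = [-1200.0, -1180.0, -1175.0, -1150.0]  # kcal/mol (illustrative)

for i in range(len(temps) - 1):
    ok = try_exchange(energies[i], temps[i], energies[i + 1], temps[i + 1])
    print(f"swap {temps[i]:.0f}K <-> {temps[i+1]:.0f}K:",
          "accepted" if ok else "rejected")
```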

  14. Implicit method for the computation of unsteady flows on unstructured grids

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, D. J.

    1995-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. Following a finite difference approximation for the time derivative, the resulting nonlinear system of equations is solved at each time step by using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Inviscid and viscous unsteady flows are computed to validate the procedure. The issue of the mass matrix which arises with vertex-centered finite volume schemes is addressed. The present formulation allows the mass matrix to be inverted indirectly. A mesh point movement and reconnection procedure is described that allows the grids to evolve with the motion of bodies. As an example of flow over bodies in relative motion, flow over a multi-element airfoil system undergoing deployment is computed.
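
    The benefit of implicit time stepping, namely stability at arbitrarily large time steps, can be seen on a model problem: backward Euler solved with Newton iterations each step, where the Newton linear solve plays the role filled by the agglomeration multigrid procedure in the paper. The stiff right-hand side below is a toy example, not the paper's flow equations.

```python
# Hedged sketch of implicit time stepping on a stiff model ODE: backward
# Euler with a Newton solve each step permits steps far beyond the explicit
# stability limit.
import numpy as np

def f(u):
    return -u**3 - 1000.0 * u      # stiff, nonlinear model right-hand side

def dfdu(u):
    return -3.0 * u**2 - 1000.0

def backward_euler_step(u, dt, iters=8):
    """Solve w - u - dt*f(w) = 0 for w by Newton iteration."""
    w = u.copy()
    for _ in range(iters):
        r = w - u - dt * f(w)              # nonlinear residual
        w = w - r / (1.0 - dt * dfdu(w))   # Newton update (1x1 Jacobian)
    return w

u = np.array([1.0])
for _ in range(10):
    u = backward_euler_step(u, dt=0.1)     # far beyond the explicit limit
print(u)   # decays smoothly toward the steady state u = 0
```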

  15. Multi-scale genetic dynamic modelling I: an algorithm to compute generators.

    PubMed

    Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca

    2011-09-01

    We present a new approach, or framework, to model dynamic regulatory genetic activity. The framework uses a multi-scale analysis based upon generic assumptions on the relative time scales attached to the different transitions of molecular states defining the genetic system. At the micro-level such systems are regulated by the interaction of two kinds of molecular players: macro-molecules like DNA or polymerases, and smaller molecules acting as transcription factors. The proposed genetic model then represents the larger, less abundant molecules with a finite discrete state space, for example describing different conformations of these molecules. This is in contrast to the representation of the transcription factors, which are, as in classical reaction kinetics, represented by their particle number only. We illustrate the method by considering the genetic activity associated with certain configurations of interacting genes that are fundamental to modelling (synthetic) genetic clocks. A largely unknown question is how the different molecular details incorporated via this more realistic modelling approach lead to different macroscopic regulatory genetic models, whose dynamical behaviour might, in general, differ between model choices. The theory will be applied to a real synthetic clock in a second accompanying article (Kirkilionis et al., Theory Biosci, 2011).

  16. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated by a cooperation of the CAWAPI partners. Based on this geometry description both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids have been generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids have been generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI are discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  17. Adaptively restrained molecular dynamics in LAMMPS

    NASA Astrophysics Data System (ADS)

    Kant Singh, Krishna; Redon, Stephane

    2017-07-01

    Adaptively restrained molecular dynamics (ARMD) is a recently introduced particle simulation method that switches positional degrees of freedom on and off during a simulation in order to speed up calculations. In the NVE ensemble, ARMD allows users to trade between precision and speed while, in the NVT ensemble, it makes it possible to compute statistical averages faster. Despite the conceptual simplicity of the approach, however, integrating it into existing molecular dynamics packages is non-trivial, in particular since implemented potentials should a priori be rewritten to take advantage of frozen particles and achieve a speed-up. In this paper, we present novel algorithms for integrating ARMD in LAMMPS, a popular multi-purpose molecular simulation package. In particular, we demonstrate how to enable ARMD in LAMMPS without having to re-implement all available force fields. The proposed algorithms are assessed on four different benchmarks, and show how they allow us to speed up simulations by up to one order of magnitude.
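
    The core ARMD idea, switching positional degrees of freedom off for slow particles, can be caricatured with a hard on/off rule (ARMD itself uses a smooth restraining function and a separate upper threshold for unfreezing); all values below are illustrative.

```python
# Hedged caricature of adaptive restraining: particles whose kinetic energy
# falls below a threshold keep their positions this step, so their force
# contributions could be reused. Not the ARMD/LAMMPS implementation.
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.normal(size=(n, 3))
v = rng.normal(scale=0.1, size=(n, 3))
m = 1.0
E_FREEZE = 0.005   # freeze below this kinetic energy (arbitrary units)

def restrained_drift(x, v, dt):
    ke = 0.5 * m * np.sum(v**2, axis=1)
    active = ke > E_FREEZE          # only these particles move this step
    x[active] += dt * v[active]
    return x, active

x, active = restrained_drift(x, v, dt=0.01)
print(f"{active.sum()} of {n} particles active this step")
```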

  18. SCELib2: the new revision of SCELib, the parallel computational library of molecular properties in the single center approach

    NASA Astrophysics Data System (ADS)

    Sanna, N.; Morelli, G.

    2004-09-01

    In this paper we present the new version of the SCELib program (CPC Catalogue identifier ADMG) a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials and of molecular perturbed potentials due to a point negative or positive charge. This new revision of the program has been optimized to run in serial as well as in parallel execution mode, to support a larger set of molecular symmetries and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The results of the calculations reported in this paper refer to real cases medium to large molecular systems and they are reported in full details to benchmark at best the parallel architectures the new SCELib code will run on. Program summaryTitle of program: SCELib2 Catalogue identifier: ADGU Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADGU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG) Does the new version supersede the original program?: Yes Computer for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP and any single CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors Installations: CASPUR, local Operating systems under which the program has been tested: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0 Programming language used: C Memory required to execute with typical data: 10 Mwords. Up to 2000 Mwords depending on the molecular system and runtime parameters No. of bits in a word: 64 No. of processors used: 1 to 32 Has the code been vectorized or parallelized?: Yes No. of bytes in distributed program, including test data, etc.: 3 798 507 No. of lines in distributed program, including test data, etc.: 187 226 Distribution format: tar.gz Nature of physical problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and exchange/correlation potentials can then be used via a proper Application Programming Interface (API) to describe the target molecular system which can be employed in electron-molecule scattering calculations. The molecular properties expanded over a single center turn out to also be of more general application and some possible uses in quantum chemistry, biomodelling and drug design are also outlined. Method of solution: The polycentre Hartee-Fock solution for a molecule of arbitrary geometry, based on linear combination of Gaussian-Type Orbital (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyschev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behaviour for the leading dipole molecular polarizabilities. 
Restrictions on the complexity of the problem: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into the available RAM. In the latter case, a feature of the program memory-maps a disk file in order to efficiently access the data through a disk device. Typical running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must also be taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable. Unusual features of the program: The code has been engineered to use dynamical, runtime-determined global parameters with the aim of fitting all the data in RAM. Some unusual circumstances, e.g., when using large values of those parameters, may cause the program to run with unexpected performance reductions due to runtime bottlenecks such as memory swap operations, which strongly depend on the hardware used. In such cases, a parallel execution of the code is generally sufficient to fix the problem, since the data size is partitioned over the available processors. When a suitable parallel system is not available for execution, a memory-mapped file mechanism can be used; with this option on, all the available memory will be used as a buffer for a disk file which contains the whole data set, thus achieving better throughput than the traditional swapping/paging of the Unix OS.
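
    To make the quadrature step of the solution method concrete, here is a minimal Python sketch, not SCELib code, that projects an angular function onto spherical harmonics using Gauss-Legendre nodes in cos θ and a uniform rule in φ; the grid sizes and the test function are arbitrary choices:

      import numpy as np
      from scipy.special import sph_harm

      def sce_coefficients(f, lmax, n_theta=64, n_phi=128):
          # Gauss-Legendre nodes in x = cos(theta): the sin(theta) weight of
          # the angular integral is absorbed by the change of variable.
          x, w = np.polynomial.legendre.leggauss(n_theta)
          theta = np.arccos(x)
          phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
          T, P = np.meshgrid(theta, phi, indexing="ij")
          F = f(T, P)
          coeffs = {}
          for l in range(lmax + 1):
              for m in range(-l, l + 1):
                  # note scipy's argument order: sph_harm(m, l, azimuth, polar)
                  Y = sph_harm(m, l, P, T)
                  coeffs[(l, m)] = (2.0 * np.pi / n_phi) * np.sum(w[:, None] * F * np.conj(Y))
          return coeffs

      # sanity check: projecting Y_1^0 onto itself must give a coefficient of 1
      f = lambda t, p: np.real(sph_harm(0, 1, p, t))
      print(abs(sce_coefficients(f, 2)[(1, 0)] - 1.0) < 1e-10)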

  19. Multi-Dimensional Impact of the Public-Private Center for Translational Molecular Medicine (CTMM) in the Netherlands: Understanding New 21st Century Institutional Designs to Support Innovation-in-Society.

    PubMed

    Steuten, Lotte M

    2016-05-01

    Knowledge translation is at the epicenter of 21st century life sciences and integrative biology. Several innovative institutional designs have been formulated to cultivate knowledge translation. One of these organizational innovations has been the Center for Translational Molecular Medicine (CTMM), a multi-million public-private partnership in the Netherlands. The CTMM aims to accelerate molecular diagnostics and imaging technologies for forecasting disease susceptibility in healthy populations and for early diagnosis and personalized treatment of patients. This research evaluated CTMM's impact along scientific, translational, clinical, and economic dimensions. A pragmatic, operationally-defined process indicators approach was used. Data were gathered from CTMM administrations, through a CTMM-wide survey (n = 167) and group interviews. We found that the CTMM focused on disease areas with high human, clinical, and economic burden to society (i.e., oncology, cardiovascular, neurologic, infection, and immunity diseases). CTMM displayed a robust scientific impact that lies 15%-80% above international reference values for publication volume and impact. Technology translation to the clinic was accelerated, with >50% of projects progressing from pre-clinical development to clinical testing within 5 years. Furthermore, CTMM has generated nearly 1500 Full Time Equivalents (FTE) of translational R&D capacity. Its positive impact on translational, (future) clinical, and economic aspects is recognized across all surveyed stakeholders. As organizational innovation is increasingly considered critical to forge linkages between life sciences discoveries and innovation-in-society, lessons learned from this study may inform other institutions with similar objectives, such as the Clinical and Translational Science Awards (CTSA) Program of the National Institutes of Health (NIH) in the United States.

  20. The effects of a visualization-centered curriculum on conceptual understanding and representational competence in high school biology

    NASA Astrophysics Data System (ADS)

    Wilder, Anna

    The purpose of this study was to investigate the effects of a visualization-centered curriculum, Hemoglobin: A Case of Double Identity, on conceptual understanding and representational competence in high school biology. Sixty-nine students enrolled in three sections of freshman biology taught by the same teacher participated in this study. Online Chemscape Chime computer-based molecular visualizations were incorporated into the 10-week curriculum to introduce students to fundamental structure and function relationships. Measures used in this study included a Hemoglobin Structure and Function Test, Mental Imagery Questionnaire, Exam Difficulty Survey, the Student Assessment of Learning Gains, the Group Assessment of Logical Thinking, the Attitude Toward Science in School Assessment, audiotapes of student interviews, students' artifacts, weekly unit activity surveys, informal researcher observations and a teacher's weekly questionnaire. The Hemoglobin Structure and Function Test, consisting of Parts A and B, was administered as a pre- and posttest. Part A used exclusively verbal test items to measure conceptual understanding, while Part B used visual-verbal test items to measure conceptual understanding and representational competence. Results of the Hemoglobin Structure and Function pre- and posttest revealed statistically significant gains in conceptual understanding and representational competence, suggesting the visualization-centered curriculum implemented in this study was effective in supporting positive learning outcomes. The large positive correlation between posttest results on Part A, composed of all-verbal test items, and Part B, using visual-verbal test items, suggests this curriculum supported students' mutual development of conceptual understanding and representational competence. Evidence based on student interviews, Student Assessment of Learning Gains ratings and weekly activity surveys indicated positive attitudes toward the use of Chemscape Chime software and the computer-based molecular visualization activities as learning tools. Evidence from these same sources also indicated that students felt computer-based molecular visualization activities in conjunction with other classroom activities supported their learning. Implications for instructional design are discussed.

  1. Lessons learned: mobile device encryption in the academic medical center.

    PubMed

    Kusche, Kristopher P

    2009-01-01

    The academic medical center is faced with the unique challenge of meeting the multi-faceted needs of both a modern healthcare organization and an academic institution. The need for security to protect patient information must be balanced against the academic freedoms expected in the college setting. The Albany Medical Center, consisting of the Albany Medical College and the Albany Medical Center Hospital, was challenged with implementing a solution that would preserve the availability, integrity and confidentiality of business, patient and research data stored on mobile devices. To solve this problem, Albany Medical Center implemented a mobile encryption suite across the enterprise. Such an implementation comes with complexities, from performance across multiple generations of computers and operating systems to diversity of application use modes and end-user adoption, all of which require thoughtful policy and standards creation, understanding of regulations, and a willingness and ability to work through such diverse needs.

  2. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.

  3. Free Enthalpy Differences between α-, π-, and 3₁₀-Helices of an Atomic Level Fine-Grained Alanine Deca-Peptide Solvated in Supramolecular Coarse-Grained Water.

    PubMed

    Lin, Zhixiong; Riniker, Sereina; van Gunsteren, Wilfred F

    2013-03-12

    Atomistic molecular dynamics simulations of peptides or proteins in aqueous solution are still limited to the multi-nanosecond time scale and multi-nanometer range by computational cost. Combining atomic solutes with a supramolecular solvent model in hybrid fine-grained/coarse-grained (FG/CG) simulations allows atomic detail in the region of interest while being computationally more efficient. We used enveloping distribution sampling (EDS) to calculate the free enthalpy differences between different helical conformations, i.e., α-, π-, and 3₁₀-helices, of an atomic level FG alanine deca-peptide solvated in a supramolecular CG water solvent. The free enthalpy differences obtained show that by replacing the FG solvent by the CG solvent, the π-helix is destabilized with respect to the α-helix by about 2.5 kJ/mol, and the 3₁₀-helix is stabilized with respect to the α-helix by about 9 kJ/mol. In addition, the dynamics of the peptide becomes faster. By introducing a FG water layer of 0.8 nm around the peptide, both thermodynamic and dynamic properties are recovered, while the hybrid FG/CG simulations are still four times more efficient than the atomistic simulations, even when the cutoff radius for the nonbonded interactions is increased from 1.4 to 2.0 nm. Hence, the hybrid FG/CG model, which yields an appropriate balance between reduced accuracy and enhanced computational speed, is very suitable for molecular dynamics simulation investigations of biomolecules.

  4. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    NASA Astrophysics Data System (ADS)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way of our life. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce in logistics. A matching problem, concerning the improvement of picking efficiency in the new system, is also studied. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on the statement and model of this problem. The main idea of this algorithm is, with some heuristic strategies based on similarity coefficients, to minimize the transport of items which cannot reach their destination picking stations through direct conveyors alone. Experimental results on computer-generated cases show that the average reduction rate of indirect transport times can reach 14.36% with the application of the multi-stage heuristic algorithm. For cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposed a modified system and a multi-stage heuristic algorithm that can reduce the travelling distance of totes effectively and improve the whole performance of an e-commerce distribution center.
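
    The abstract leaves the heuristic's details to the paper, so the toy Python sketch below only illustrates the general idea of grouping by similarity coefficients: items that co-occur in orders are greedily placed at the same (hypothetical) picking station, so fewer totes need indirect transport. The scoring rule and capacity cap are invented for the example and are not the paper's algorithm:

      from itertools import combinations

      def greedy_grouping(orders, n_stations):
          """orders: list of sets of item ids -> list of item sets per station."""
          items = sorted(set().union(*orders))
          # similarity coefficient: number of orders in which two items co-occur
          sim = {pair: sum(1 for o in orders if pair[0] in o and pair[1] in o)
                 for pair in combinations(items, 2)}
          cap = -(-len(items) // n_stations)   # ceiling division: station capacity
          stations = [set() for _ in range(n_stations)]
          for item in items:
              open_s = [s for s in range(n_stations) if len(stations[s]) < cap]
              # place the item where its summed similarity to residents is highest
              best = max(open_s, key=lambda s: sum(sim.get(tuple(sorted((item, r))), 0)
                                                   for r in stations[s]))
              stations[best].add(item)
          return stations

      print(greedy_grouping([{1, 2}, {1, 2, 3}, {3, 4}, {4, 5}], 2))
      # -> [{1, 2, 3}, {4, 5}]: co-ordered items end up at the same station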

  5. Neuropharmacology beyond reductionism - A likely prospect.

    PubMed

    Margineanu, Doru Georg

    2016-03-01

    Neuropharmacology had several major past successes, but the last few decades did not witness any leap forward in the drug treatment of brain disorders. Moreover, current drugs used in neurology and psychiatry alleviate symptoms while hardly curing any cause of disease, basically because the etiology of most neuro-psychic syndromes is still poorly known. This review argues that this largely derives from the unbalanced prevalence in neuroscience of the analytic reductionist approach, focused on the cellular and molecular level, while the understanding of integrated brain activities remains flimsier. The decline in drug discovery output in the last decades, quite obvious in neuropharmacology, coincided with the advent of the single-target-focused search for potent ligands selective for a well-defined protein deemed critical in a given pathology. However, all the widespread neuro-psychic disorders are multi-mechanistic and polygenic, and their complex etiology makes them unsuited to single-target drug discovery. An evolving approach based on systems biology considers that a disease expresses a disturbance of the network of interactions underlying organismic functions, rather than an alteration of single molecular components. Accordingly, systems pharmacology seeks to restore a disturbed network via multi-targeted drugs. This review notes that neuropharmacology in fact relies on drugs which are multi-target, a feature that arose simply because those drugs were selected by phenotypic screening in vivo, or emerged from serendipitous clinical observations. The novel systems pharmacology aims, however, to devise ab initio multi-target drugs that will appropriately act on multiple molecular entities. Though this is a task much more complex than the single-target strategy, major informatics resources and computational tools for the systemic approach to drug discovery have already been set forth, and their rapid progress forecasts promising outcomes for neuropharmacology. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Information resources at the National Center for Biotechnology Information.

    PubMed Central

    Woodsmall, R M; Benson, D A

    1993-01-01

    The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequence databases and related bibliographic information, and the development of network-based retrieval systems for Internet access. PMID:8374583

  7. A geometric initial guess for localized electronic orbitals in modular biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P. G.; Fattebert, J. L.; Lau, E. Y.

    Recent first-principles molecular dynamics algorithms using localized electronic orbitals have achieved O(N) complexity and controlled accuracy in simulating systems with finite band gaps. However, accurately determining the centers of these localized orbitals during simulation setup may require O(N^3) operations, which is computationally infeasible for many biological systems. We present an O(N) approach for approximating orbital centers in proteins, DNA, and RNA which uses non-localized solutions for a set of fixed-size subproblems to create a set of geometric maps applicable to larger systems. This scalable approach, used as an initial guess in the O(N) first-principles molecular dynamics code MGmol, facilitates first-principles simulations in biological systems of sizes which were previously impossible.

  8. Multi-layer Lanczos iteration approach to calculations of vibrational energies and dipole transition intensities for polyatomic molecules

    DOE PAGES

    Yu, Hua-Gen

    2015-01-28

    We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH₄ based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable for Raman polarizability active spectra.
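
    For orientation, the core of any Lanczos scheme is a three-term recurrence that builds a tridiagonal representation of the operator in a Krylov space, whose eigenvalues approximate the extreme eigenvalues of the full matrix. The Python sketch below is a plain single-layer Lanczos, without the guided spectral transform, multi-layer contraction, or reorthogonalization of the paper (so spurious "ghost" eigenvalues can appear after many iterations), applied to a random symmetric stand-in for a Hamiltonian:

      import numpy as np

      def lanczos_lowest(apply_h, dim, n_iter=100, n_eigs=5, seed=0):
          rng = np.random.default_rng(seed)
          v = rng.standard_normal(dim)
          v /= np.linalg.norm(v)
          v_prev = np.zeros(dim)
          beta = 0.0
          alphas, betas = [], []
          for _ in range(n_iter):
              w = apply_h(v) - beta * v_prev     # three-term recurrence
              alpha = v @ w
              w -= alpha * v
              beta = np.linalg.norm(w)
              alphas.append(alpha)
              betas.append(beta)
              if beta < 1e-12:                   # Krylov space exhausted
                  break
              v_prev, v = v, w / beta
          # tridiagonal Krylov representation of the operator
          T = (np.diag(alphas)
               + np.diag(betas[:-1], 1)
               + np.diag(betas[:-1], -1))
          return np.linalg.eigvalsh(T)[:n_eigs]

      # toy check against dense diagonalization
      A = np.random.default_rng(1).standard_normal((200, 200))
      A = (A + A.T) / 2.0
      print(lanczos_lowest(lambda v: A @ v, 200))
      print(np.linalg.eigvalsh(A)[:5])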

  9. Information Infrastructure Technology and Applications (IITA) Program: Annual K-12 Workshop

    NASA Technical Reports Server (NTRS)

    Hunter, Paul; Likens, William; Leon, Mark

    1995-01-01

    The purpose of the K-12 workshop is to stimulate cross-pollination of inter-center activity and introduce the regional centers to cutting-edge K-12 activities. The format of the workshop consists of project presentations, working groups, and working group reports, all contained in a three-day period. The agenda is aggressive and demanding. The K-12 Education Project is a multi-center activity managed by the Information Infrastructure Technology and Applications (IITA)/K-12 Project Office at the NASA Ames Research Center (ARC). This workshop is conducted in support of executing the K-12 Education element of the IITA Project. The IITA/K-12 Project funds activities that use the National Information Infrastructure (NII) (e.g., the Internet) to foster reform and restructuring in mathematics, science, computing, engineering, and technical education.

  10. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.

  11. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  12. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
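
    As a concrete illustration of points (1) and (2) above, the Python sketch below runs one single-core Vina process per ligand across the cores of one machine, pinning the random seed for reproducibility. The receptor, config and ligand file names are hypothetical; the command-line options shown (--receptor, --ligand, --out, --config, --seed, --cpu, --exhaustiveness) are standard AutoDock Vina options:

      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      LIGANDS = ["lig_%04d.pdbqt" % i for i in range(100)]   # hypothetical library

      def dock(ligand):
          out = ligand.replace(".pdbqt", "_out.pdbqt")
          subprocess.run([
              "vina",
              "--receptor", "receptor.pdbqt",
              "--ligand", ligand,
              "--out", out,
              "--config", "box.conf",        # search-box definition
              "--seed", "20160301",          # capture the seed for reproducibility
              "--cpu", "1",                  # one core per Vina run ...
              "--exhaustiveness", "8",
          ], check=True)
          return out

      if __name__ == "__main__":
          # ... and one Vina run per core: the extra parallelization level
          with ProcessPoolExecutor(max_workers=8) as pool:
              for done in pool.map(dock, LIGANDS):
                  print("finished", done)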

  13. S3DB core: a framework for RDF generation and management in bioinformatics infrastructures

    PubMed Central

    2010-01-01

    Background: Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results: A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open-source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions: The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315

  14. Human factors dimensions in the evolution of increasingly automated control rooms for near-earth satellites

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1982-01-01

    The NASA Goddard Space Flight Center is responsible for the control and ground support of all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun, with the primary objective of reducing costs. In this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC is currently a labor-intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer, and the interface for the human-computer dialogue.

  15. Cyber-Security Issues in Healthcare Information Technology.

    PubMed

    Langer, Steve G

    2017-02-01

    In 1999-2003, SIIM (then SCAR) sponsored the creation of several special topic Primers, one of which was concerned with computer security. About the same time, a multi-society collaboration authored an ACR Guideline with a similar scope; the latter has recently been updated. The motivation for these efforts was the launch of the Health Insurance Portability and Accountability Act (HIPAA). That legislation directed care providers to enable the portability of patient medical records across authorized medical centers, while simultaneously protecting patient confidentiality from unauthorized agents. These policy requirements resulted in the creation of numerous technical solutions, which the above documents described. While the mathematical concepts and algorithms in those papers are as valid today as they were then, recent increases in the complexity of computer criminal applications (and defensive countermeasures) and the pervasiveness of Internet-connected devices have raised the bar. This work examines how a medical center can adapt to these evolving threats.

  16. High-Performance Computational Analysis of Glioblastoma Pathology Images with Database Support Identifies Molecular and Survival Correlates.

    PubMed

    Kong, Jun; Wang, Fusheng; Teodoro, George; Cooper, Lee; Moreno, Carlos S; Kurc, Tahsin; Pan, Tony; Saltz, Joel; Brat, Daniel

    2013-12-01

    In this paper, we present a novel framework for microscopic image analysis of nuclei, data management, and high-performance computation to support translational research involving nuclear morphometry features, molecular data, and clinical outcomes. Our image analysis pipeline consists of nuclei segmentation and feature computation facilitated by high-performance computing with coordinated execution on multi-core CPUs and Graphics Processing Units (GPUs). All data derived from image analysis are managed in a spatial relational database supporting highly efficient scientific queries. We applied our image analysis workflow to 159 glioblastomas (GBM) from The Cancer Genome Atlas dataset. With integrative studies, we found that statistics of four specific nuclear features were significantly associated with patient survival. Additionally, we correlated nuclear features with molecular data and found interesting results that support pathologic domain knowledge. We found that Proneural subtype GBMs had the smallest mean nuclear Eccentricity and the largest mean nuclear Extent and MinorAxisLength. We also found that gene expression of the stem cell marker MYC and the cell proliferation marker MKI67 correlated with nuclear features. To complement and inform pathologists of relevant diagnostic features, we queried the most representative nuclear instances from each patient population based on genetic and transcriptional classes. Our results demonstrate that specific nuclear features carry prognostic significance and associations with transcriptional and genetic classes, highlighting the potential of high-throughput pathology image analysis as a complementary approach to human-based review and translational research.
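
    The three features named above are standard region properties. As a deliberately minimal illustration, the Python sketch below computes them with scikit-image's regionprops on a toy labeled mask, assuming the hard part of the record's pipeline, GPU-accelerated nuclei segmentation, has already produced the mask:

      import numpy as np
      from skimage.measure import label, regionprops

      mask = np.zeros((64, 64), dtype=np.uint8)   # stand-in for a real nuclei mask
      mask[10:30, 12:26] = 1                      # one fake "nucleus"
      mask[40:52, 40:60] = 1                      # another

      for region in regionprops(label(mask)):
          # the same morphometry features the study correlates with survival
          print(region.label,
                round(region.eccentricity, 3),
                round(region.extent, 3),
                round(region.minor_axis_length, 3))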

  17. Water-mediated interactions enable smooth substrate transport in a bacterial efflux pump.

    PubMed

    Vargiu, Attilio Vittorio; Ramaswamy, Venkata Krishnan; Malvacio, Ivana; Malloci, Giuliano; Kleinekathöfer, Ulrich; Ruggerone, Paolo

    2018-04-01

    Efflux pumps of the Resistance-Nodulation-cell Division superfamily confer multi-drug resistance to Gram-negative bacteria. The most-studied polyspecific transporter belonging to this class is the inner-membrane trimeric antiporter AcrB of Escherichia coli. In previous studies, a functional rotation mechanism was proposed for its functioning, according to which the three monomers undergo concerted conformational changes facilitating the extrusion of substrates. However, the molecular determinants and the energetics of this mechanism still remain unknown, so its feasibility must be proven mechanistically. A computational protocol able to mimic the functional rotation mechanism in AcrB was developed. By using multi-bias molecular dynamics simulations we characterized the translocation of the substrate doxorubicin driven by conformational changes of the protein. In addition, we estimated for the first time the free energy profile associated to this process. We provided a molecular view of the process in agreement with experimental data. Moreover, we showed that the conformational changes occurring in AcrB enable the formation of a layer of structured waters on the internal surface of the transport channel. This water layer, in turn, allows for a fairly constant hydration of the substrate, facilitating its diffusion over a smooth free energy profile. Our findings reveal a new molecular mechanism of polyspecific transport whereby water contributes by screening potentially strong substrate-protein interactions. We provided a mechanistic understanding of a fundamental process related to multi-drug transport. Our results can help rationalizing the behavior of other polyspecific transporters and designing compounds avoiding extrusion or inhibitors of efflux pumps. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.

  18. Evaluation of PACS in a multihospital environment

    NASA Astrophysics Data System (ADS)

    Siegel, Eliot L.; Reiner, Bruce I.; Protopapas, Zenon

    1998-07-01

    Although a number of authors have described the challenges and benefits of filmless operation using a hospital-wide Picture Archival and Communication System (PACS), there have been few descriptions of a multi-hospital wide-area PACS. The purpose of this paper is to describe our two-and-a-half-year experience with PACS in an integrated multi-facility health care environment, the Veterans Affairs Maryland Health Care System (VAMHCS). On June 17, 1995, the Radiology and Nuclear Medicine services of four medical centers were integrated to form the VA Maryland Health Care System, creating a single multi-facility imaging department. The facilities consisted of the Baltimore VA (acute and outpatient care, tertiary referral center), Ft. Howard (primarily long-term care), Perry Point (primarily psychiatric care), and the Baltimore Rehabilitation and Extended Care facility (nursing home). The combined number of studies at all four sites is slightly more than 80,000 examinations per year. In addition to residents and fellows, the number of radiologists at Baltimore was approximately seven, with two at Perry Point, one at Ft. Howard, and none at the Rehabilitation and Extended Care facility. A single HIS/RIS, which is located physically at the Baltimore VAMC, is utilized for all four medical centers. The multi-facility image management and communication system utilizes two separate PAC systems that are physically located at the Baltimore VA Medical Center (BVAMC). The commercial system (GE Medical Systems) has been in place in Baltimore for more than 4 1/2 years and is utilized primarily in the acquisition, storage, distribution and display of radiology and nuclear medicine studies. The second PACS is the VISTA Imaging System, which has been developed as a module of the VA's HIS/RIS by and for the Department of Veterans Affairs. All of the radiology images obtained on the commercial PACS are requested by the VISTA Imaging System using DICOM query/retrieve commands and are stored on a separate server and optical jukebox. Additionally, the VISTA system is used to store all images obtained by all specialties in the medical center, including pathology, dermatology, GI medicine, surgery, podiatry, ophthalmology, etc. Using this two-PACS approach, the hospital is able to achieve redundancy with regard to image storage, retrieval, and display of radiology images. The transition to a 'virtual' multi-facility imaging department was accomplished over a period of two years. Initially, Perry Point and Ft. Howard replaced their general radiographic film processors with Computed Radiography (CR) units. The CR units and, subsequently, the CT and ultrasound systems at Perry Point were interfaced (DeJarnette Research Systems) with the commercial PACS located in Baltimore. A HIS/RIS-to-modality interface was developed (DeJarnette and Fuji Medical Systems) between the computed radiography and CT units and the VISTA Information System at Baltimore. A digital dictation system was recently implemented across the multi-facility network. The integration of the three radiology departments into a single virtual imaging department serving four medical centers has resulted in a number of benefits. Economically, there has been the elimination via attrition of one and a half radiologist FTEs (full-time equivalents) and an administrative position, resulting in annual savings of more than $375,000.
Additionally, the expenditures for moonlighter coverage for vacation, meeting, and sick leave have been eliminated. There is now subspecialty coverage for primary or secondary interpretation and for peer review.

  19. Transport coefficients of liquid CF4 and SF6 computed by molecular dynamics using polycenter Lennard-Jones potentials

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.

    1989-01-01

    For several liquid states of CF4 and SF6, the shear and the bulk viscosity as well as the thermal conductivity were determined by equilibrium molecular dynamics (MD) calculations. Lennard-Jones four- and six-center pair potentials were applied, and the method of constraints was chosen for the MD. The computed Green-Kubo integrands show a steep time decay, and no particular long-time behavior occurs. The molecule-number dependence of the results is found to be small, and 3×10^5 integration steps allow an accuracy of about 10% for the shear viscosity and the thermal conductivity coefficient. Comparison with experimental data shows fair agreement for CF4, while for SF6 the transport coefficients fall below the experimental ones by about 30%.
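
    For readers unfamiliar with the Green-Kubo route mentioned above: the shear viscosity follows from integrating the autocorrelation of an off-diagonal pressure-tensor component, η = V/(k_B T) ∫₀^∞ ⟨P_xy(0) P_xy(t)⟩ dt. The minimal Python sketch below evaluates the running integral from a time series, with synthetic noise standing in for real MD output:

      import numpy as np

      def green_kubo_viscosity(p_xy, dt, volume, temperature, kB=1.380649e-23):
          """Running Green-Kubo integral eta(t); SI units assumed (Pa, s, m^3, K)."""
          n = len(p_xy)
          x = p_xy - p_xy.mean()
          f = np.fft.rfft(x, 2 * n)                  # zero-padded FFT
          acf = np.fft.irfft(f * np.conj(f))[:n]     # linear autocorrelation
          acf /= np.arange(n, 0, -1)                 # divide by pair counts per lag
          # rectangle-rule running integral; in practice one truncates
          # where the autocorrelation has decayed to noise
          return volume / (kB * temperature) * np.cumsum(acf) * dt

      p_xy = np.random.default_rng(0).standard_normal(10000)  # placeholder series
      eta = green_kubo_viscosity(p_xy, dt=1e-15, volume=1e-26, temperature=300.0)
      print(eta[-1])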

  20. Quantitative Image Informatics for Cancer Research (QIICR) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.

  1. Multi-Conformation Monte Carlo: A Method for Introducing Flexibility in Efficient Simulations of Many-Protein Systems.

    PubMed

    Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J

    2016-08-25

    We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
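
    The abstract's central move, augmenting rigid-body Monte Carlo with conformation changes, can be illustrated with a deliberately tiny Metropolis loop. Everything below (the three-member conformer library, the energy function, the step sizes) is invented for the example and is not the paper's model:

      import numpy as np

      rng = np.random.default_rng(0)
      kT = 1.0
      conformer_energy = np.array([0.0, 0.5, 1.2])   # hypothetical library energies

      def energy(x, conf):
          # stand-in for an interaction energy depending on position and conformer
          return 0.5 * np.sum(x**2) + conformer_energy[conf]

      x, conf = np.zeros(3), 0
      E = energy(x, conf)
      for step in range(10000):
          if rng.random() < 0.5:                     # rigid-body translation move
              x_new, conf_new = x + 0.2 * rng.standard_normal(3), conf
          else:                                      # conformation-swap move
              x_new, conf_new = x, int(rng.integers(len(conformer_energy)))
          E_new = energy(x_new, conf_new)
          if rng.random() < np.exp(-(E_new - E) / kT):   # Metropolis criterion
              x, conf, E = x_new, conf_new, E_new
      print("final conformer:", conf, "energy:", round(E, 3))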

  2. Optical characterization of shock-induced chemistry in the explosive nitromethane using DFT and time-dependent DFT

    NASA Astrophysics Data System (ADS)

    Pellouchoud, Lenson; Reed, Evan

    2014-03-01

    With continual improvements in ultrafast optical spectroscopy and new multi-scale methods for simulating chemistry over hundreds of picoseconds, it is becoming possible to connect experiments with simulations on the same timescale. We compute the optical properties of the liquid-phase energetic material nitromethane (CH3NO2) for the first 100 picoseconds behind the front of a simulated shock at 6.5 km/s, close to the experimentally observed detonation shock speed. We utilize molecular dynamics trajectories computed using the multi-scale shock technique (MSST) for time-resolved optical spectrum calculations based on both linear response time-dependent DFT (TDDFT) and the Kubo-Greenwood (KG) formula within Kohn-Sham DFT. We find that TDDFT predicts optical conductivities 25-35% lower than KG-based values and provides better agreement with the experimentally measured index of refraction of unreacted nitromethane. We investigate the influence of electronic temperature on the KG spectra and find no significant effect at optical wavelengths. With all methods, the spectra evolve non-monotonically in time as shock-induced chemistry takes place. We attribute the time-resolved absorption at optical wavelengths to time-dependent populations of molecular decomposition products, including NO, CNO, CNOH, H2O, and larger molecules. Supported by NASA Space Technology Research Fellowship (NSTRF) #NNX12AM48H.

  3. An Innovative Plant Genomics and Gene Annotation Program for High School, Community College, and University Faculty

    ERIC Educational Resources Information Center

    Hacisalihoglu, Gokhan; Hilgert, Uwe; Nash, E. Bruce; Micklos, David A.

    2008-01-01

    Today's biology educators face the challenge of training their students in modern molecular biology techniques including genomics and bioinformatics. The Dolan DNA Learning Center (DNALC) of Cold Spring Harbor Laboratory has developed and disseminated a bench- and computer-based plant genomics curriculum for biology faculty. In 2007, a five-day…

  4. Simulation Studies of Mechanical Properties of Novel Silica Nano-structures

    NASA Astrophysics Data System (ADS)

    Muralidharan, Krishna; Torras Costa, Joan; Trickey, Samuel B.

    2006-03-01

    Advances in nanotechnology and the importance of silica as a technological material continue to stimulate computational study of the properties of possible novel silica nanostructures. Thus we have done classical molecular dynamics (MD) and multi-scale quantum mechanical (QM/MD) simulation studies of the mechanical properties of single-wall and multi-wall silica nano-rods of varying dimensions. Such nano-rods have been predicted by Mallik et al. to be unusually strong in tensile failure. Here we compare failure mechanisms of such nano-rods under tension, compression, and bending. The concurrent multi-scale QM/MD studies use the general PUPIL system (Torras et al.). In this case, PUPIL provides automated interoperation of the MNDO Transfer Hamiltonian QM code (Taylor et al.) and a locally written MD code. Embedding of the QM-forces domain is via the scheme of Mallik et al. Work supported by NSF ITR award DMR-0325553.

  5. ms2: A molecular simulation tool for thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran

    2011-11-01

    This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for fast execution on a broad range of computer architectures, spanning from single-processor PCs over PC clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization, and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 have been shown for a large variety of fluids in preceding work. Program summary: Program title: ms2. Catalogue identifier: AEJF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Special Licence supplied by the authors. No. of lines in distributed program, including test data, etc.: 82 794. No. of bytes in distributed program, including test data, etc.: 793 705. Distribution format: tar.gz. Programming language: Fortran90. Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single-processor machines over PC clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows. Has the code been vectorized or parallelized?: Yes, Message Passing Interface (MPI) protocol. Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12. External routines: Message Passing Interface (MPI). Nature of problem: Calculation of application-oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: None; the system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing 2000 molecules or less. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operation platforms are provided. 
Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, while calculating transport properties takes between six and 24 hours.

  6. Technical assistance for law-enforcement communications: Case study report two

    NASA Technical Reports Server (NTRS)

    Reilly, N. B.; Mustain, J. A.

    1979-01-01

    Two case histories are presented. In the first, the feasibility of consolidating dispatch center operations for small agencies is considered. System load measurements were taken, and queueing analysis was applied to determine the number of personnel required for each separate agency and for a consolidated dispatch center. Functional requirements were developed, and a cost model was designed to compare the relative costs of various alternatives, including continuation of the present system, consolidation of a manual system, and consolidated computer-aided dispatching. The second case history deals with a multi-regional, intrastate radio frequency for improved interregional communications. Sample standards and specifications for radio equipment are provided.
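
    The staffing question in the first case history is classically answered with Erlang C queueing arithmetic: given an offered load a = λ/μ and c dispatchers, the formula gives the probability that an incoming call must wait. A small Python sketch with invented traffic figures:

      import math

      def erlang_c(c, a):
          """P(wait) for an M/M/c queue with offered load a erlangs (requires a < c)."""
          s = sum(a**k / math.factorial(k) for k in range(c))
          top = a**c / math.factorial(c) * c / (c - a)
          return top / (s + top)

      lam, mu = 30.0, 12.0        # calls/hour arriving; calls/hour one dispatcher handles
      a = lam / mu                # 2.5 erlangs of offered load
      for c in range(3, 7):
          print(c, "dispatchers -> P(wait) =", round(erlang_c(c, a), 3))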

  7. Bacterial Adherence and Dwelling Probability: Two Drivers of Early Alveolar Infection by Streptococcus pneumoniae Identified in Multi-Level Mathematical Modeling

    PubMed Central

    Santos, Guido; Lai, Xin; Eberhardt, Martin; Vera, Julio

    2018-01-01

    Pneumococcal infection is the most frequent cause of pneumonia, and one of the most prevalent diseases worldwide. The population groups at high risk of death from bacterial pneumonia are infants, elderly and immunosuppressed people. These groups are more vulnerable because they have immature or impaired immune systems, the efficacy of their response to vaccines is lower, and antibiotic treatment often does not take place until the inflammatory response triggered is already overwhelming. The immune response to bacterial lung infections involves dynamic interactions between several types of cells whose activation is driven by intracellular molecular networks. A feasible approach to the integration of knowledge and data linking tissue, cellular and intracellular events and the construction of hypotheses in this area is the use of mathematical modeling. For this paper, we used a multi-level computational model to analyse the role of cellular and molecular interactions during the first 10 h after alveolar invasion of Streptococcus pneumoniae bacteria. By “multi-level” we mean that we simulated the interplay between different temporal and spatial scales in a single computational model. In this instance, we included the intracellular scale of processes driving lung epithelial cell activation together with the scale of cell-to-cell interactions at the alveolar tissue. In our analysis, we combined systematic model simulations with logistic regression analysis and decision trees to find genotypic-phenotypic signatures that explain differences in bacteria strain infectivity. According to our simulations, pneumococci benefit from a high dwelling probability and a high proliferation rate during the first stages of infection. In addition to this, the model predicts that during the very early phases of infection the bacterial capsule could be an impediment to the establishment of the alveolar infection because it impairs bacterial colonization. PMID:29868515

  8. Bacterial Adherence and Dwelling Probability: Two Drivers of Early Alveolar Infection by Streptococcus pneumoniae Identified in Multi-Level Mathematical Modeling.

    PubMed

    Santos, Guido; Lai, Xin; Eberhardt, Martin; Vera, Julio

    2018-01-01

    Pneumococcal infection is the most frequent cause of pneumonia, and one of the most prevalent diseases worldwide. The population groups at high risk of death from bacterial pneumonia are infants, elderly and immunosuppressed people. These groups are more vulnerable because they have immature or impaired immune systems, the efficacy of their response to vaccines is lower, and antibiotic treatment often does not take place until the inflammatory response triggered is already overwhelming. The immune response to bacterial lung infections involves dynamic interactions between several types of cells whose activation is driven by intracellular molecular networks. A feasible approach to the integration of knowledge and data linking tissue, cellular and intracellular events and the construction of hypotheses in this area is the use of mathematical modeling. For this paper, we used a multi-level computational model to analyse the role of cellular and molecular interactions during the first 10 h after alveolar invasion of Streptococcus pneumoniae bacteria. By "multi-level" we mean that we simulated the interplay between different temporal and spatial scales in a single computational model. In this instance, we included the intracellular scale of processes driving lung epithelial cell activation together with the scale of cell-to-cell interactions at the alveolar tissue. In our analysis, we combined systematic model simulations with logistic regression analysis and decision trees to find genotypic-phenotypic signatures that explain differences in bacteria strain infectivity. According to our simulations, pneumococci benefit from a high dwelling probability and a high proliferation rate during the first stages of infection. In addition to this, the model predicts that during the very early phases of infection the bacterial capsule could be an impediment to the establishment of the alveolar infection because it impairs bacterial colonization.

  9. An Application-Based Performance Characterization of the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash

    2005-01-01

    Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.

  10. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
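
    Provisioning a worker VM on such a private cloud can be done programmatically. The Python sketch below uses the OpenStack SDK's cloud layer; the clouds.yaml entry and the image/flavor/network names are hypothetical stand-ins for a real site configuration:

      import openstack

      # connect using a (hypothetical) "ekp-desktop-cloud" entry in clouds.yaml
      conn = openstack.connect(cloud="ekp-desktop-cloud")

      server = conn.create_server(
          name="worker-node-01",
          image="hep-worker-image",   # VM image with the HEP software stack pre-installed
          flavor="m1.large",
          network="private-net",
          wait=True,                  # block until the instance reaches ACTIVE
      )
      print(server.name, server.status)

      # when the job queue drains, hand the cores back to the desktop pool
      conn.delete_server(server.id, wait=True)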

  11. Overview of the NCC

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey

    2001-01-01

    A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team comprises Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source, multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and the chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the Spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.

  12. Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets.

    PubMed

    Argelaguet, Ricard; Velten, Britta; Arnol, Damien; Dietrich, Sascha; Zenz, Thorsten; Marioni, John C; Buettner, Florian; Huber, Wolfgang; Stegle, Oliver

    2018-06-20

    Multi-omics studies promise the improved characterization of biological processes across molecular layers. However, methods for the unsupervised integration of the resulting heterogeneous data sets are lacking. We present Multi-Omics Factor Analysis (MOFA), a computational method for discovering the principal sources of variation in multi-omics data sets. MOFA infers a set of (hidden) factors that capture biological and technical sources of variability. It disentangles axes of heterogeneity that are shared across multiple modalities and those specific to individual data modalities. The learnt factors enable a variety of downstream analyses, including identification of sample subgroups, data imputation and the detection of outlier samples. We applied MOFA to a cohort of 200 patient samples of chronic lymphocytic leukaemia, profiled for somatic mutations, RNA expression, DNA methylation and ex vivo drug responses. MOFA identified major dimensions of disease heterogeneity, including immunoglobulin heavy-chain variable region status, trisomy of chromosome 12 and previously underappreciated drivers, such as response to oxidative stress. In a second application, we used MOFA to analyse single-cell multi-omics data, identifying coordinated transcriptional and epigenetic changes along cell differentiation. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
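
    MOFA itself fits a probabilistic (Bayesian) factor model; purely as a conceptual sketch, the idea of shared latent factors across omics layers can be illustrated with a truncated SVD of the column-concatenated, feature-standardized data matrices (toy data, not the authors' algorithm):

      import numpy as np

      rng = np.random.default_rng(0)
      expression = rng.normal(size=(200, 500))   # stand-in for RNA expression
      methylation = rng.normal(size=(200, 300))  # stand-in for DNA methylation

      def standardize(x):
          return (x - x.mean(axis=0)) / x.std(axis=0)

      # Concatenate layers over the shared sample axis and extract K factors.
      Y = np.hstack([standardize(expression), standardize(methylation)])
      K = 10
      U, s, Vt = np.linalg.svd(Y, full_matrices=False)
      factors = U[:, :K] * s[:K]   # per-sample factor values
      loadings = Vt[:K, :]         # per-feature weights spanning both layers

    Inspecting how much variance each factor explains within each layer then hints at which axes of heterogeneity are shared and which are modality-specific, the kind of decomposition MOFA performs in a statistically principled way.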

  13. CARES/LIFE Software Commercialization

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The NASA Lewis Research Center has entered into a letter agreement with BIOSYM Technologies Inc. (now merged with Molecular Simulations Inc. (MSI)). Under this agreement, NASA will provide a developmental copy of the CARES/LIFE computer program to BIOSYM for evaluation. This computer code predicts the time-dependent reliability of a thermomechanically loaded component. BIOSYM will become familiar with CARES/LIFE, provide results of computations useful in validating the code, evaluate it for potential commercialization, and submit suggestions for improvements or extensions to the code or its documentation. If BIOSYM/Molecular Simulations reaches a favorable evaluation of CARES/LIFE, NASA will enter into negotiations for a cooperative agreement with BIOSYM/Molecular Simulations to further develop the code--adding features such as a user-friendly interface and other improvements. This agreement would give BIOSYM intellectual property rights in the modified codes, which they could protect and then commercialize. NASA would provide BIOSYM with the NASA-developed source codes and would agree to cooperate with BIOSYM in further developing the code. In return, NASA would receive certain use rights in the modified CARES/LIFE program. Presently BIOSYM Technologies Inc. has been involved with integration issues concerning its merger with Molecular Simulations Inc., since both companies used to compete in the computational chemistry market, and to some degree, in the materials market. Consequently, evaluation of the CARES/LIFE software is on hold for a month or two while the merger is finalized. Their interest in CARES continues, however, and they expect to get back to the evaluation by early November 1995.

  14. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
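
    Given power samples with sufficient temporal resolution, the energy attributable to an individual kernel follows by integrating sampled power over the kernel's execution window. A trivial sketch of that bookkeeping (toy trace, not the authors' instrumentation code):

      import numpy as np

      def energy_joules(timestamps, power_watts, t_start, t_end):
          """Integrate sampled power (W) over [t_start, t_end] seconds."""
          mask = (timestamps >= t_start) & (timestamps <= t_end)
          return np.trapz(power_watts[mask], timestamps[mask])

      # Toy trace: 1 kHz sampling of a 2 s kernel drawing 10 W above a 5 W idle.
      t = np.linspace(0.0, 5.0, 5001)
      p = 5.0 + 10.0 * ((t > 1.0) & (t < 3.0))
      print(energy_joules(t, p, 1.0, 3.0))  # ~30 J, including the idle baseline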

  15. Alternative Fuels Data Center: Electric Vehicle Charging for Multi-Unit Dwellings

    Science.gov Websites

    Alternative Fuels Data Center resource page describing electric vehicle charging options for multi-unit dwellings.

  16. Lattice dynamics calculations based on density-functional perturbation theory in real space

    NASA Astrophysics Data System (ADS)

    Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias

    2017-06-01

    A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied to the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package as an example. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed for both finite (molecules) and extended (periodic) systems. Finally, scaling tests on massively parallel computer systems demonstrate the computational efficiency of the implementation.
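
    For context on the finite-difference reference point used in that comparison, a minimal sketch of a finite-difference harmonic analysis for a molecule is given below; energy_gradient is a hypothetical callback standing in for any force calculator, and the implementation is illustrative rather than FHI-aims code.

      import numpy as np

      def harmonic_frequencies(coords, masses, energy_gradient, h=1e-3):
          """Central-difference Hessian, mass-weighted and diagonalized.

          coords: (N, 3) Cartesian coordinates; masses: (N,) atomic masses;
          energy_gradient: callable returning the (N, 3) energy gradient.
          """
          n = coords.size
          hessian = np.zeros((n, n))
          flat = coords.ravel()
          for i in range(n):
              plus, minus = flat.copy(), flat.copy()
              plus[i] += h
              minus[i] -= h
              g_plus = energy_gradient(plus.reshape(-1, 3)).ravel()
              g_minus = energy_gradient(minus.reshape(-1, 3)).ravel()
              hessian[i] = (g_plus - g_minus) / (2.0 * h)
          hessian = 0.5 * (hessian + hessian.T)  # symmetrize
          m = np.repeat(masses, 3)
          weighted = hessian / np.sqrt(np.outer(m, m))
          eigvals = np.linalg.eigvalsh(weighted)
          # Eigenvalues are squared angular frequencies; keep the sign of any
          # imaginary (negative-eigenvalue) modes.
          return np.sign(eigvals) * np.sqrt(np.abs(eigvals))

    DFPT obtains the same second derivatives analytically from the linear response of the electronic structure, avoiding the 6N displaced gradient evaluations this sketch requires.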

  17. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    NASA Technical Reports Server (NTRS)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

    A progress report is presented on a program to upgrade the existing NASA Deep Space Network in terms of a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system and network level synthesis and testing, and system verification and validation. The software has thus far been implemented to a 65 percent completion level, and the methodology used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has proven to be effective.

  18. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with these capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS), a data-independent environment for computer graphics data display, to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  19. Computational Optimization and Characterization of Molecularly Imprinted Polymers

    NASA Astrophysics Data System (ADS)

    Terracina, Jacob J.

    Molecularly imprinted polymers (MIPs) are a class of materials containing sites capable of selectively binding the imprinted target molecule. Computational chemistry techniques were used to study the effect of different fabrication parameters (the monomer-to-target ratios, pre-polymerization solvent, temperature, and pH) on the formation of the MIP binding sites. Imprinted binding sites were built in silico for the purposes of better characterizing the receptor-ligand interactions. Chiefly, the sites were characterized with respect to their selectivities and the heterogeneity between sites. First, a series of two-step molecular mechanics (MM) and quantum mechanics (QM) computational optimizations of monomer-target systems was used to determine optimal monomer-to-target ratios for the MIPs. Imidazole- and xanthine-derived target molecules were studied. The investigation included both small-scale models (one target) and larger-scale models (five targets). The optimal ratios differed between the small and larger scales. For the larger models containing multiple targets, binding-site surface area analysis was used to evaluate the heterogeneity of the sites. The more fully surrounded sites had greater binding energies. Molecular docking was then used to measure the selectivities of the QM-optimized binding sites by comparing the binding energies of the imprinted target to that of a structural analogue. Selectivity was also shown to improve as binding sites become more fully encased by the monomers. For internal sites, docking consistently showed selectivity favoring the molecules that had been imprinted via QM geometry optimizations. The computationally imprinted sites were shown to exhibit size-, shape-, and polarity-based selectivity. This represented a novel approach to investigating the selectivity and heterogeneity of imprinted polymer binding sites, by applying the rapid orientation screening of MM docking to the highly accurate QM-optimized geometries. Next, we sought to computationally construct and investigate binding sites for their enantioselectivity. Again, a two-step MM → QM optimization scheme was used to "computationally imprint" chiral molecules. Using docking techniques, the imprinted binding sites were shown to exhibit an enantioselective preference for the imprinted molecule over its enantiomer. Docking of structurally similar chiral molecules showed that the sites computationally imprinted with R- or S-tBOC-tyrosine were able to differentiate between R- and S-forms of other tyrosine derivatives. The cross-enantioselectivity did not hold for chiral molecules that did not share the tyrosine H-bonding functional group orientations. Further analysis of the individual monomer-target interactions within the binding site led us to conclude that H-bonding functional groups that are located immediately next to the target's chiral center, and therefore spatially fixed relative to the chiral center, will have a stronger contribution to the enantioselectivity of the site than those groups separated from the chiral center by two or more rotatable bonds. These models were the first computationally imprinted binding sites to exhibit this enantioselective preference for the imprinted target molecules. Finally, molecular dynamics (MD) was used to quantify H-bonding interactions between target molecules, monomers, and solvents representative of the pre-polymerization matrix.
It was found that both target dimerization and solvent interference decrease the number of monomer-target H-bonds present. Systems were optimized via simulated annealing to create binding sites that were then subjected to molecular docking analysis. Docking showed that the presence of solvent had a detrimental effect on the sensitivity and selectivity of the sites, and that solvents with more H-bonding capabilities were more disruptive to the binding properties of the site. Dynamic simulations also showed that increasing the temperature of the solution can significantly decrease the number of H-bonds formed between the targets and monomers. It is believed that the monomer-target complexes formed within the pre-polymerization matrix are translated into the selective binding cavities formed during polymerization. Elucidating the nature of these interactions in silico improves our understanding of MIPs, ultimately allowing for more optimized sensing materials.
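
    A geometric hydrogen-bond count of the kind quantified in the MD portion of such studies can be sketched with a donor-acceptor distance plus donor-hydrogen-acceptor angle criterion; the thresholds below are common conventions, not necessarily the dissertation's exact parameters.

      import numpy as np

      def count_hbonds(donors, hydrogens, acceptors, d_max=3.5, angle_min=130.0):
          """Count D-H...A contacts meeting a geometric criterion.

          donors, hydrogens: (M, 3) paired donor/hydrogen coordinates;
          acceptors: (K, 3) coordinates; d_max in Angstrom (D...A distance),
          angle_min in degrees (D-H...A angle, 180 = linear).
          """
          count = 0
          for d, h in zip(donors, hydrogens):
              ha = acceptors - h                          # H -> A vectors
              da = np.linalg.norm(acceptors - d, axis=1)  # D...A distances
              hd = d - h                                  # H -> D vector
              cosang = (ha @ hd) / (np.linalg.norm(ha, axis=1) * np.linalg.norm(hd))
              angles = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
              count += int(np.sum((da < d_max) & (angles > angle_min)))
          return count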

  20. Developing the human-computer interface for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Holden, Kritina L.

    1991-01-01

    For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations, where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.

  1. A Lightning Channel Retrieval Algorithm for the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, William; Arnold, James E. (Technical Monitor)

    2002-01-01

    A new multi-station VHF time-of-arrival (TOA) antenna network is, at the time of this writing, coming on-line in Northern Alabama. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channel of lightning strokes within cloud and ground flashes. The network will support on-going ground validation activities of the low Earth orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. It will also provide for many interesting and detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and will offer many interesting comparisons with other meteorological/geophysical data sets associated with lightning and thunderstorms. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. In this study, a new revised channel mapping retrieval algorithm is introduced. The algorithm is an extension of earlier work provided in Koshak and Solakiewicz (1996) in the analysis of the NASA Kennedy Space Center (KSC) Lightning Detection and Ranging (LDAR) system. As in the 1996 study, direct algebraic solutions are obtained by inverting a simple linear system of equations, thereby making computer searches through a multi-dimensional parameter domain of a Chi-Squared function unnecessary. However, the new algorithm is developed completely in spherical Earth-centered coordinates (longitude, latitude, altitude), rather than in the (x, y, z) cartesian coordinates employed in the 1996 study. Hence, no mathematical transformations from (x, y, z) into spherical coordinates are required (such transformations involve more numerical error propagation, more computer program coding, and slightly more CPU computing time). The new algorithm also has a more realistic definition of source altitude that accounts for Earth oblateness (this can become important for sources that are hundreds of kilometers away from the network). In addition, the new algorithm is being applied to analyze computer-simulated LMA datasets in order to obtain detailed location/time retrieval error maps for sources in and around the LMA network. These maps will provide a more comprehensive analysis of retrieval errors for LMA than the 1996 study did of LDAR retrieval errors. Finally, we note that the new algorithm can be applied to LDAR, and essentially any other multi-station TOA network that depends on direct line-of-sight antenna excitation.
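
    The algebraic character of such TOA retrievals can be illustrated in flat Cartesian coordinates (closer to the 1996 LDAR formulation than to the new spherical-coordinate algorithm): subtracting station 0's squared range equation, |r - r_i|^2 = c^2 (t_i - t)^2, from each of the others leaves a system that is linear in the source position and time.

      import numpy as np

      C = 0.299792458  # speed of light, km per microsecond

      def locate_toa(stations, times):
          """Direct algebraic TOA solution (Cartesian sketch, >= 5 stations).

          stations: (N, 3) antenna coordinates in km; times: (N,) arrival
          times in microseconds. Returns the source (x, y, z, t).
          """
          r0, t0 = stations[0], times[0]
          ri, ti = stations[1:], times[1:]
          # Differencing the squared range equations cancels the quadratic
          # terms, leaving equations linear in (x, y, z, t).
          A = np.hstack([-2.0 * (ri - r0), (2.0 * C**2 * (ti - t0))[:, None]])
          b = C**2 * (ti**2 - t0**2) - (np.sum(ri**2, axis=1) - np.sum(r0**2))
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)
          return sol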

  2. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. Electrostatic solvation free energies of charged hard spheres using molecular dynamics with density functional theory interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.

    Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for coarse grained models of electrolyte solution. Here, we provide rigorous definitions of the various types of single ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation (DFT-MD) and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to highly unphysical values for the solvation free energy and that charging free energies for cations are approximately linear as a function of charge but that there is a small non-linearity for small anions. The charge hydration asymmetry (CHA) for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation. We would like to thank Thomas Beck, Shawn Kathmann, Richard Remsing and John Weeks for helpful discussions. Computing resources were generously allocated by PNNL's Institutional Computing program. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. TTD, GKS, and CJM were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. MDB was supported by MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated by Battelle for the U.S. Department of Energy.
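
    For reference, the Born (linear response) model against which the charging free energies are compared gives, for a charge q in a spherical cavity of radius a in a solvent of dielectric constant epsilon,

      \Delta G_{\text{Born}} = -\frac{q^{2}}{8 \pi \varepsilon_{0} a} \left( 1 - \frac{1}{\varepsilon} \right)

    i.e., a solvation free energy quadratic in q; the reported charge dependence of the hard-sphere charging free energies is read against this linear-response baseline.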

  4. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet and the design of a multi-echelon, capacitated reverse logistics network are considered, as may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) model for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in the GAMS software for producing the Pareto-optimal solutions of a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems the proposed NSGA-II works better than the ɛ-constraint method.
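
    At the core of both NSGA-II and the Pareto-frontier comparison is the dominance test between bi-objective solutions. A minimal sketch (assuming both objectives are minimized; illustrative, not the paper's implementation):

      def dominates(a, b):
          """True if objective vector a Pareto-dominates b (minimization)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(solutions):
          """Return the non-dominated subset of a list of objective vectors."""
          return [s for s in solutions
                  if not any(dominates(o, s) for o in solutions if o != s)]

      # Example: (total cost, total tardiness) pairs for candidate solutions.
      candidates = [(120.0, 8.0), (100.0, 15.0), (130.0, 7.0), (100.0, 9.0)]
      print(pareto_front(candidates))  # [(120.0, 8.0), (130.0, 7.0), (100.0, 9.0)]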

  5. A Dual-Modality System for Both Multi-Color Ultrasound-Switchable Fluorescence and Ultrasound Imaging

    PubMed Central

    Kandukuri, Jayanth; Yu, Shuai; Cheng, Bingbing; Bandi, Venugopal; D’Souza, Francis; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong

    2017-01-01

    Simultaneous imaging of multiple targets (SIMT) in opaque biological tissues is an important goal for molecular imaging in the future. Multi-color fluorescence imaging in deep tissues is a promising technology to reach this goal. In this work, we developed a dual-modality imaging system by combining our recently developed ultrasound-switchable fluorescence (USF) imaging technology with the conventional ultrasound (US) B-mode imaging. This dual-modality system can simultaneously image tissue acoustic structure information and multi-color fluorophores in centimeter-deep tissue with comparable spatial resolutions. To conduct USF imaging on the same plane (i.e., x-z plane) as US imaging, we adopted two 90°-crossed ultrasound transducers with an overlapped focal region, while the US transducer (the third one) was positioned at the center of these two USF transducers. Thus, the axial resolution of USF is close to the lateral resolution, which allows a point-by-point USF scanning on the same plane as the US imaging. Both multi-color USF and ultrasound imaging of a tissue phantom were demonstrated. PMID:28165390

  6. Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning

    NASA Astrophysics Data System (ADS)

    de, Sandip; Ceriotti, Michele

    Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information has made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketchmap, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by National Center for Computational Design and Discovery of Novel Materials (MARVEL) and Swiss National Science Foundation.
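
    A minimal sketch of that workflow, assuming only a precomputed (dis)similarity matrix rather than any particular local-environment descriptor, could use SciPy's hierarchical clustering:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      # Toy stand-in for a descriptor-based distance matrix between entries.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(50, 8))
      D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

      # Condense the square matrix and build an average-linkage dendrogram.
      Z = linkage(squareform(D, checks=False), method="average")

      # Cut the tree into a fixed number of clusters for classification.
      labels = fcluster(Z, t=5, criterion="maxclust")
      print(np.bincount(labels)[1:])  # cluster populations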

  7. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations

    PubMed Central

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-01-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
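
    For context, the temperature replica-exchange (T-REMD) moves implemented in packages of this kind accept a swap between replicas i and j with the standard Metropolis probability

      p_{ij} = \min\left\{ 1, \; \exp\left[ (\beta_i - \beta_j)(E_i - E_j) \right] \right\}, \qquad \beta = 1/(k_B T)

    which preserves detailed balance across the generalized ensemble; the multi-dimensional REMD variants generalize the exchanged parameter beyond temperature.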

  8. Computational design and multivariate optimization of an electrochemical metoprolol sensor based on molecular imprinting in combination with carbon nanotubes.

    PubMed

    Nezhadali, Azizollah; Mojarrab, Maliheh

    2016-06-14

    This work describes the development of an electrochemical sensor based on a new molecularly imprinted polymer for the detection of metoprolol (MTP) at ultra-trace levels. Polypyrrole (PPy) was electrochemically synthesized on the tip of a pencil graphite electrode (PGE) that had been modified with functionalized multi-walled carbon nanotubes (MWCNTs). The fabrication process of the sensor was characterized by cyclic voltammetry (CV), and the measurements were carried out by differential pulse voltammetry (DPV). A computational approach was used to screen functional monomers and the polymerization solvent for the rational design of the molecularly imprinted polymer (MIP). Based on the computational results, pyrrole and water were selected as the functional monomer and polymerization solvent, respectively. Several significant parameters controlling the performance of the MIP sensor were examined and optimized using multivariate optimization methods such as Plackett-Burman design (PBD) and central composite design (CCD). Under the selected optimal conditions, the MIP sensor showed a linear range from 0.06 to 490 μmol L(-1) MTP, a limit of detection of 2.88 nmol L(-1), a highly reproducible response (RSD 3.9%) and good selectivity in the presence of structurally related molecules. Furthermore, the applicability of the method was successfully tested by determining MTP in real samples (tablet and serum). Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Machine learning of molecular electronic properties in chemical compound space

    NASA Astrophysics Data System (ADS)

    Montavon, Grégoire; Rupp, Matthias; Gobre, Vivekanand; Vazquez-Mayagoitia, Alvaro; Hansen, Katja; Tkatchenko, Alexandre; Müller, Klaus-Robert; Anatole von Lilienfeld, O.

    2013-09-01

    The combination of modern scientific computing with electronic structure theory can lead to an unprecedented amount of data amenable to intelligent data analysis for the identification of meaningful, novel and predictive structure-property relationships. Such relationships enable high-throughput screening for relevant properties in an exponentially growing pool of virtual compounds that are synthetically accessible. Here, we present a machine learning model, trained on a database of ab initio calculation results for thousands of organic molecules, that simultaneously predicts multiple electronic ground- and excited-state properties. The properties include atomization energy, polarizability, frontier orbital eigenvalues, ionization potential, electron affinity and excitation energies. The machine learning model is based on a deep multi-task artificial neural network, exploiting the underlying correlations between various molecular properties. The input is identical to ab initio methods, i.e. nuclear charges and Cartesian coordinates of all atoms. For small organic molecules, the accuracy of such a ‘quantum machine’ is similar, and sometimes superior, to modern quantum-chemical methods—at negligible computational cost.
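
    In this line of work the stated input (nuclear charges plus Cartesian coordinates) is typically encoded as a Coulomb matrix before being fed to the network; a minimal sketch of that featurization (standard convention, offered here for orientation):

      import numpy as np

      def coulomb_matrix(Z, R):
          """Coulomb-matrix molecular representation.

          Z: (N,) nuclear charges; R: (N, 3) positions in bohr. Off-diagonal
          entries are pairwise nuclear repulsions; the diagonal is the usual
          0.5 * Z**2.4 fit to free-atom energies.
          """
          N = len(Z)
          M = np.zeros((N, N))
          for i in range(N):
              for j in range(N):
                  if i == j:
                      M[i, j] = 0.5 * Z[i] ** 2.4
                  else:
                      M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
          return M

      # Water with a toy geometry (bohr): O followed by two H atoms.
      Z = np.array([8.0, 1.0, 1.0])
      R = np.array([[0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [-0.45, 1.75, 0.0]])
      print(coulomb_matrix(Z, R).round(2))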

  10. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    PubMed Central

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs, called hosts, including Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a single piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
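
    The host-abstraction idea can be caricatured by a small adapter layer; the class and method names below are purely illustrative and do not reproduce uPy's actual API.

      class HostAdapter:
          """Common drawing interface, implemented once per host application."""
          def add_sphere(self, name, radius, position):
              raise NotImplementedError

      class BlenderAdapter(HostAdapter):
          def add_sphere(self, name, radius, position):
              # A real adapter would call Blender's bpy API here.
              print(f"[blender] sphere {name} r={radius} at {position}")

      class MayaAdapter(HostAdapter):
          def add_sphere(self, name, radius, position):
              # A real adapter would call Maya's cmds module here.
              print(f"[maya] sphere {name} r={radius} at {position}")

      def draw_atoms(host, atoms):
          """Plugin logic written once against the abstract interface."""
          for name, radius, pos in atoms:
              host.add_sphere(name, radius, pos)

      draw_atoms(BlenderAdapter(), [("C1", 0.7, (0, 0, 0)), ("O1", 0.6, (1.2, 0, 0))])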

  11. Multi-target-qubit unconventional geometric phase gate in a multi-cavity system

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Cao, Xiao-Zhi; Su, Qi-Ping; Xiong, Shao-Jie; Yang, Chui-Ping

    2016-02-01

    Cavity-based large scale quantum information processing (QIP) may involve multiple cavities and require performing various quantum logic operations on qubits distributed in different cavities. Geometric-phase-based quantum computing has drawn much attention recently, which offers advantages against inaccuracies and local fluctuations. In addition, multiqubit gates are particularly appealing and play important roles in QIP. We here present a simple and efficient scheme for realizing a multi-target-qubit unconventional geometric phase gate in a multi-cavity system. This multiqubit phase gate has a common control qubit but different target qubits distributed in different cavities, which can be achieved using a single-step operation. The gate operation time is independent of the number of qubits and only two levels for each qubit are needed. This multiqubit gate is generic, e.g., by performing single-qubit operations, it can be converted into two types of significant multi-target-qubit phase gates useful in QIP. The proposal is quite general, which can be used to accomplish the same task for a general type of qubits such as atoms, NV centers, quantum dots, and superconducting qubits.

  12. Multi-target-qubit unconventional geometric phase gate in a multi-cavity system.

    PubMed

    Liu, Tong; Cao, Xiao-Zhi; Su, Qi-Ping; Xiong, Shao-Jie; Yang, Chui-Ping

    2016-02-22

    Cavity-based large scale quantum information processing (QIP) may involve multiple cavities and require performing various quantum logic operations on qubits distributed in different cavities. Geometric-phase-based quantum computing has drawn much attention recently, which offers advantages against inaccuracies and local fluctuations. In addition, multiqubit gates are particularly appealing and play important roles in QIP. We here present a simple and efficient scheme for realizing a multi-target-qubit unconventional geometric phase gate in a multi-cavity system. This multiqubit phase gate has a common control qubit but different target qubits distributed in different cavities, which can be achieved using a single-step operation. The gate operation time is independent of the number of qubits and only two levels for each qubit are needed. This multiqubit gate is generic, e.g., by performing single-qubit operations, it can be converted into two types of significant multi-target-qubit phase gates useful in QIP. The proposal is quite general, which can be used to accomplish the same task for a general type of qubits such as atoms, NV centers, quantum dots, and superconducting qubits.

  13. Predictive Model and Methodology for Heat Treatment Distortion Final Report CRADA No. TC-298-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikkel, D. J.; McCabe, J.

    This project was a multi-lab, multi-partner CRADA involving LLNL, Los Alamos National Laboratory, Sandia National Laboratories, Oak Ridge National Laboratory, Martin Marietta Energy Systems and the industrial partner, the National Center for Manufacturing Sciences (NCMS). A number of member companies of NCMS participated, including General Motors Corporation, Ford Motor Company, The Torrington Company, Gear Research, the Illinois Institute of Technology Research Institute, and Deformation Control Technology. LLNL was the lead laboratory for the metrology technology used for validation of the computational tool/methodology. LLNL was also the lead laboratory for the development of the software user interface for the computational tool. This report focuses on the participation of LLNL and NCMS. The purpose of the project was to develop a computational tool/methodology that engineers would use to predict the effects of heat treatment on the size and shape of industrial parts made of quench-hardenable alloys. Initially, the target application of the tool was gears for automotive power trains.

  14. Multi-blocking strategies for the INS3D incompressible Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Gatlin, Boyd

    1990-01-01

    With the continuing development of bigger and faster supercomputers, computational fluid dynamics (CFD) has become a useful tool for real-world engineering design and analysis. However, the number of grid points necessary to resolve realistic flow fields numerically can easily exceed the memory capacity of available computers. In addition, geometric shapes of flow fields, such as those in the Space Shuttle Main Engine (SSME) power head, may be impossible to fill with continuous grids upon which to obtain numerical solutions to the equations of fluid motion. The solution to this dilemma is simply to decompose the computational domain into subblocks of manageable size. Computer codes that are single-block by construction can be modified to handle multiple blocks, but ad-hoc changes in the FORTRAN have to be made for each geometry treated. For engineering design and analysis, what is needed is generalization so that the blocking arrangement can be specified by the user. INS3D is a computer program for the solution of steady, incompressible flow problems. It is used frequently to solve engineering problems in the CFD Branch at Marshall Space Flight Center. INS3D uses an implicit solution algorithm and the concept of artificial compressibility to provide the necessary coupling between the pressure field and the velocity field. The development of generalized multi-block capability in INS3D is described.

  15. A self-consistent first-principle based approach to model carrier mobility in organic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz

    2015-12-31

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
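
    The hopping rates entering such mobility expressions are commonly computed from Marcus theory (the abstract does not name the exact rate expression, so this is given for orientation only):

      k_{ij} = \frac{2\pi}{\hbar} \, |H_{ij}|^{2} \, \frac{1}{\sqrt{4 \pi \lambda k_{B} T}} \, \exp\left[ -\frac{(\Delta E_{ij} + \lambda)^{2}}{4 \lambda k_{B} T} \right]

    with electronic coupling H_ij, reorganization energy lambda, and site-energy difference Delta E_ij; the disorder strength and reorganization energy discussed above enter precisely through Delta E_ij and lambda.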

  16. Reliability assessment of multiple quantum well avalanche photodiodes

    NASA Technical Reports Server (NTRS)

    Yun, Ilgu; Menkara, Hicham M.; Wang, Yang; Oguzman, Ismail H.; Kolnik, Jan; Brennan, Kevin F.; May, Gray S.; Wagner, Brent K.; Summers, Christopher J.

    1995-01-01

    The reliability of doped-barrier AlGaAs/GaAs multi-quantum well avalanche photodiodes fabricated by molecular beam epitaxy is investigated via accelerated life tests. Dark current and breakdown voltage were the parameters monitored. The activation energy of the degradation mechanism and the median device lifetime were determined. Device failure probability as a function of time was computed using the lognormal model. Analysis using the electron beam induced current method revealed the degradation to be caused by ionic impurities or contamination in the passivation layer.
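
    The lognormal model referred to takes the cumulative failure probability at time t to be

      F(t) = \Phi\left( \frac{\ln t - \ln t_{50}}{\sigma} \right)

    where Phi is the standard normal CDF, t_50 the median lifetime, and sigma the lognormal shape parameter; the median device lifetime reported above corresponds to t_50.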

  17. Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1986-01-01

    In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p)-length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or to another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
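
    As a reference point for the preconditioned iteration discussed above, a compact preconditioned conjugate-gradient loop is sketched below with a diagonal (Jacobi) preconditioner standing in for the incomplete Cholesky solve; it is a generic illustration, not the CYBER 205 implementation.

      import numpy as np

      def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
          """Preconditioned conjugate gradients for an SPD matrix A.

          M_inv: callable applying the preconditioner inverse (the role the
          incomplete Cholesky factors play in ICCG).
          """
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Toy SPD system with a Jacobi preconditioner.
      rng = np.random.default_rng(2)
      Q = rng.normal(size=(100, 100))
      A = Q @ Q.T + 100.0 * np.eye(100)
      b = rng.normal(size=100)
      x = pcg(A, b, lambda r: r / np.diag(A))
      print(np.linalg.norm(A @ x - b))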

  18. 78 FR 6127 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ..., Computational Biology and Technology Study Section. Date: February 20-21, 2013. Time: 11:00 a.m. to 6:00 p.m... Hematology Integrated Review Group Vascular Cell and Molecular Biology Study Section. Date: February 21-22... Rehabilitation Study Section. Date: February 22, 2013. Time: 8:00 p.m. to 9:00 p.m. Agenda: To review and...

  19. Interacting domain-specific languages with biological problem solving environments

    NASA Astrophysics Data System (ADS)

    Cickovski, Trevor M.

    Iteratively developing a biological model and verifying results with lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still impose some communication overhead between biologists refining their models through repeated comparison with experimental observations in vitro or in vivo, and programmers actually implementing model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle-level PSE tier to ameliorate this problem by providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface offers, while removing the requirement of general-purpose programming knowledge. I develop two radically different biological DSLs: XML-based BIOLOGO will model biological morphogenesis using a cell-centered stochastic cellular automaton and translate into C++ modules for an object-oriented PSE COMPUCELL3D, and MDLab will provide a set of high-level Python libraries for running molecular dynamics simulations, using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its role within the larger PSE and its expressibility in terms of representable phenomena, along with a discussion of observations from users of the languages. Moreover, I use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.

  20. Molecular Rayleigh Scattering Techniques Developed for Measuring Gas Flow Velocity, Density, Temperature, and Turbulence

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta

    2005-01-01

    Nonintrusive optical point-wise measurement techniques utilizing the principles of molecular Rayleigh scattering have been developed at the NASA Glenn Research Center to obtain time-averaged information about gas velocity, density, temperature, and turbulence, or dynamic information about gas velocity and density in unseeded flows. These techniques enable measurements that are necessary for validating computational fluid dynamics (CFD) and computational aeroacoustic (CAA) codes. Dynamic measurements allow the calculation of power spectra for the various flow properties. This type of information is currently being used in jet noise studies, correlating sound pressure fluctuations with velocity and density fluctuations to determine noise sources in jets. These nonintrusive techniques are particularly useful in supersonic flows, where seeding the flow with particles is not an option, and where the environment is too harsh for hot-wire measurements.

  1. Status of the Combustion Devices Injector Technology Program at the NASA MSFC

    NASA Technical Reports Server (NTRS)

    Jones, Gregg; Protz, Christopher; Trinh, Huu; Tucker, Kevin; Nesman, Tomas; Hulka, James

    2005-01-01

    To support the NASA Space Exploration Mission, an in-house program called Combustion Devices Injector Technology (CDIT) is being conducted at the NASA Marshall Space Flight Center (MSFC) for the fiscal year 2005. CDIT is focused on developing combustor technology and analysis tools to improve the reliability and durability of upper-stage and in-space liquid propellant rocket engines. The three areas of focus include injector/chamber thermal compatibility, ignition, and combustion stability. In the compatibility and ignition areas, small-scale single- and multi-element hardware experiments will be conducted to demonstrate advanced technological concepts as well as to provide experimental data for validation of computational analysis tools. In addition, advanced analysis tools will be developed to eventually include 3-dimensional and multi-element effects and improve the capability and validity of heat transfer and ignition analyses of large, multi-element injectors.

  2. Multi-modality molecular imaging: pre-clinical laboratory configuration

    NASA Astrophysics Data System (ADS)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

    In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.

  3. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, and in particular for non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters; (b) spatially distributed systems such as kinetic Monte Carlo or Langevin dynamics; (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to the in silico prediction of novel materials, with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  4. MC3, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cawkwell, Marc Jon

    2016-09-09

    The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)] and electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, ii) tensors of elastic constants and compliances via Parrinello and Rahman's fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires the LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
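
    In standard form (the code's exact bookkeeping may differ), the isothermal-isobaric Metropolis criterion for a trial move taking the system from energy U and volume V to U' and V' accepts with probability

      p_{\text{acc}} = \min\left\{ 1, \; \exp\left[ -\beta (U' - U) - \beta P (V' - V) + N \ln\frac{V'}{V} \right] \right\}

    where beta = 1/(k_B T) and N is the number of rigid molecules whose centers of mass are rescaled with the cell; lattice-parameter averages and the fluctuation-formula elastic constants follow from the sampled configurations.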

  5. Computational Simulation of the Formation and Material Behavior of Ice

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    Computational methods are described for simulating the formation and the material behavior of ice in prevailing transient environments. The methodology developed at the NASA Lewis Research Center was adopted. A three dimensional finite-element heat transfer analyzer was used to predict the thickness of ice formed under prevailing environmental conditions. A multi-factor interaction model for simulating the material behavior of time-variant ice layers is presented. The model, used in conjunction with laminated composite mechanics, updates the material properties of an ice block as its thickness increases with time. A sample case of ice formation in a body of water was used to demonstrate the methodology. The results showed that the formation and the material behavior of ice can be computationally simulated using the available composites technology.

  6. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
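
    The detection statistic rests on the classical link between pointwise regularity and the scale behavior of the wavelet transform: for a signal with local Lipschitz (Holder) exponent alpha at x_0, the wavelet modulus maxima obey (up to normalization convention)

      |W f(s, x_0)| \le C \, s^{\alpha}

    so alpha can be estimated from log-log slopes across scales, and a drop in the estimated exponent flags the loss of smoothness that accompanies an impending state shift; the paper's precise estimator may differ in detail.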

  7. Hilar cholangiocarcinoma: Cross sectional evaluation of disease spectrum

    PubMed Central

    Mahajan, Mangal S; Moorthy, Srikanth; Karumathil, Sreekumar P; Rajeshkannan, R; Pothera, Ramchandran

    2015-01-01

    Although hilar cholangiocarcinoma is relatively rare, it can be diagnosed on imaging by identifying its typical pattern. In most cases, the tumor appears to be centered on the right or left hepatic duct with involvement of the ipsilateral portal vein, atrophy of hepatic lobe on that side, and invasion of adjacent liver parenchyma. Multi-detector computed tomography (MDCT) and magnetic resonance cholangiopancreatography (MRCP) are commonly used imaging modalities to assess the longitudinal and horizontal spread of tumor. PMID:25969643

  8. Communications Network

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Multi-Compatible Network Interface Unit (MCNIU) is intended to connect the space station's communications and tracking, guidance and navigation, life support, electric power, payload data, hand controls, display consoles and other systems, and also communicate with diverse processors. Honeywell is now marketing MCNIU commercially. It has applicability in certain military operations or civil control centers. It has nongovernment utility among large companies, universities and research organizations that transfer large amounts of data among workstations and computers. *This product is no longer commercially available.

  9. Speeding Up Geophysical Research Using Docker Containers Within Multi-Cloud Environment.

    NASA Astrophysics Data System (ADS)

    Synytsky, R.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.; Starovoit, Y. O.

    2016-12-01

    How useful are geophysical observations for minimizing losses from natural disasters today? Do they help decrease the number of human victims during tsunamis and earthquakes? Unfortunately, such use is still at an early stage. Making these observations more useful by improving early-warning and prediction systems with the help of cloud computing is an important goal. Cloud computing technologies have proven their ability to speed up application development in many areas over the past 10 years. The cloud unlocks new opportunities for geoscientists by providing access to modern data processing tools and algorithms, including real-time high-performance computing, big data processing, artificial intelligence, and others. Emerging lightweight cloud technologies, such as Docker containers, are gaining wide traction in IT because they enable faster and more efficient deployment of different applications in a cloud environment. They allow geophysical applications and systems to be deployed and managed in minutes across multiple clouds and data centers, which becomes of utmost importance for the next generation of applications. In this session we'll demonstrate how Docker container technology within a multi-cloud environment can accelerate the development of applications specifically designed for geophysical research.
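
    A minimal sketch of launching one containerized processing task from Python with the Docker SDK (the image and command are generic placeholders, not a specific geophysics stack):

      import docker  # Docker SDK for Python (pip install docker)

      client = docker.from_env()

      # Run a throwaway container for a single job; in a multi-cloud setting
      # the same image runs unchanged on any Docker-capable host.
      logs = client.containers.run(
          image="python:3.11-slim",
          command=["python", "-c", "print('processing job done')"],
          remove=True,
      )
      print(logs.decode().strip())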

  10. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  11. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated from an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation used to calculate the stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results of the multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
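
    Because the material points communicate only with mesh nodes, the per-point closure computations are embarrassingly parallel, which is the property the dissertation exploits. The sketch below mirrors only that structure: a process pool maps an independent "stress" evaluation over material points, with a placeholder function standing in for the actual MD simulation.

```python
# Sketch: evaluating the closure (stress) at each material point
# independently, mirroring how the DDMP+MD method can farm out one MD
# run per material point. The "MD" here is a toy placeholder.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def md_stress(deformation_gradient: float) -> float:
    """Stand-in for an MD simulation returning a scalar stress."""
    # A real implementation would equilibrate a group of atoms under the
    # local deformation and average the virial stress.
    return 2.0 * (deformation_gradient - 1.0)

def main() -> None:
    # One deformation state per material point (toy 1-D data).
    F = np.linspace(0.95, 1.10, 64)
    # Material points are independent, so a process pool maps cleanly.
    with ProcessPoolExecutor() as pool:
        stresses = list(pool.map(md_stress, F))
    print("max stress:", max(stresses))

if __name__ == "__main__":
    main()
```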

  12. Coupled-cluster treatment of molecular strong-field ionization

    NASA Astrophysics Data System (ADS)

    Jagau, Thomas-C.

    2018-05-01

    Ionization rates and Stark shifts of H2, CO, O2, H2O, and CH4 in static electric fields have been computed with coupled-cluster methods in a basis set of atom-centered Gaussian functions with a complex-scaled exponent. Consideration of electron correlation is found to be of great importance even for a qualitatively correct description of the dependence of ionization rates and Stark shifts on the strength and orientation of the external field. The analysis of the second moments of the molecular charge distribution suggests a simple criterion for distinguishing tunnel and barrier suppression ionization in polyatomic molecules.

  13. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
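
    The storage reductions described above come from representing smooth fields by a few large wavelet coefficients. A minimal sketch of that mechanism, hard-thresholding a 1-D discrete wavelet transform, is given below; the field, wavelet, and threshold are illustrative, and production field-data compressors are far more careful about error control than this.

```python
# Sketch: lossy wavelet compression of a 1-D field by hard-thresholding
# small detail coefficients. Requires numpy and PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(1)
field = np.sin(np.linspace(0, 8 * np.pi, 4096)) + 0.01 * rng.standard_normal(4096)

coeffs = pywt.wavedec(field, "db4", level=6)
flat = np.concatenate(coeffs)
thresh = np.percentile(np.abs(flat), 95)  # keep roughly 5% of coefficients

kept = 0
compressed = []
for c in coeffs:
    c2 = np.where(np.abs(c) >= thresh, c, 0.0)  # zero the small details
    kept += np.count_nonzero(c2)
    compressed.append(c2)

recon = pywt.waverec(compressed, "db4")[: field.size]
ratio = field.size / max(kept, 1)
err = np.max(np.abs(recon - field))
print(f"compression ~{ratio:.0f}:1, max abs error {err:.3e}")
```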

  14. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
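
    To make the read-write-rule picture concrete, here is a toy sketch of one step of such a machine: a nucleosome string as a tape of marks, and a chromatin-modifying "complex" as a rule that rewrites adjacent pairs of marks. The mark names and the spreading rule are invented for illustration and are not taken from the paper.

```python
# Sketch: one pass of a "chromatin computer" rule over a nucleosome tape.
from typing import List, Tuple

Tape = List[str]

def apply_rule(tape: Tape, pattern: Tuple[str, str], write: Tuple[str, str]) -> Tape:
    """Scan the nucleosome string left to right; wherever two adjacent
    nucleosomes match `pattern`, rewrite them with `write`."""
    out = tape[:]
    i = 0
    while i < len(out) - 1:
        if (out[i], out[i + 1]) == pattern:
            out[i], out[i + 1] = write
            i += 2  # skip past the rewritten pair
        else:
            i += 1
    return out

tape = ["H3K4me3", "unmod", "H3K4me3", "unmod", "unmod"]
# Toy rule: an ("H3K4me3", "unmod") pair spreads the mark rightward.
tape = apply_rule(tape, ("H3K4me3", "unmod"), ("H3K4me3", "H3K4me3"))
print(tape)  # ['H3K4me3', 'H3K4me3', 'H3K4me3', 'H3K4me3', 'unmod']
```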

  15. Virtualized Multi-Mission Operations Center (vMMOC) and its Cloud Services

    NASA Technical Reports Server (NTRS)

    Ido, Haisam Kassim

    2017-01-01

    This presentation will cover the current and future technical and organizational opportunities and challenges of virtualizing a multi-mission operations center. The full deployment of Goddard Space Flight Center's (GSFC) Virtualized Multi-Mission Operations Center (vMMOC) is nearly complete. The Space Science Mission Operations (SSMO) organization's spacecraft ACE, Fermi, LRO, MMS(4), OSIRIS-REx, SDO, SOHO, Swift, and Wind are in the process of being fully migrated to the vMMOC. The benefits of the vMMOC will be the normalization and standardization of IT services, mission operations, maintenance, and development, as well as ancillary services and policies such as collaboration tools, change management systems, and IT security. The vMMOC will also provide operational efficiencies regarding hardware, IT domain expertise, training, maintenance, and support. The presentation will also cover SSMO's secure Situational Awareness Dashboard, presented in an integrated, fleet-centric, cloud-based web-services fashion. Additionally, the SSMO Telemetry as a Service (TaaS) will be covered, which allows authorized users and processes to access telemetry for the entire SSMO fleet, and for the entirety of each spacecraft's history. Both services leverage cloud services in a secure FISMA High and FedRAMP environment, and also leverage distributed object stores to house and provide the telemetry. The services are also in the process of leveraging the elasticity and horizontal scalability of cloud computing. In the design phase is Navigation as a Service (NaaS), which will provide a standardized, efficient, and normalized service for the fleet's space flight dynamics operations. Additional future services that may be considered are Ground Segment as a Service (GSaaS), Telemetry and Command as a Service (TCaaS), Flight Software Simulation as a Service, etc.

  16. Molecular modeling on streptolysin-O of multidrug resistant Streptococcus pyogenes and computer aided screening and in vitro assay for novel herbal inhibitors.

    PubMed

    Skariyachan, Sinosh; Narayan, Naik Sowmyalaxmi; Aggimath, Tejaswini S; Nagaraj, Sushmitha; Reddy, Monika S; Narayanappa, Rajeswari

    2014-03-01

    Streptococcus pyogenes is a notorious pathogenic bacterium which causes various human diseases ranging from localized infections to life-threatening invasive diseases. Streptolysin-O (SLO), a pore-forming thiol-activated cytolysin, is the major virulence factor for streptococcal infections. Present therapies against streptococcal infections are limited, as most strains have developed multi-drug resistance to the present generation of drugs. Hence, there is a need for alternative therapeutic substances. Structure-based virtual screening is a novel platform for selecting lead molecules with better pharmacokinetic properties. The 3D structure of SLO (not available in native form), essential for such studies, was computationally generated, and this homology model was used as a probable drug target. Based on a literature survey, several phytoligands from 25 medicinal plants were selected. Out of these, leads from 11 plants showed better pharmacokinetic properties. The best lead molecules were screened based on computer-aided drug-likeness and pharmacokinetic predictions. The inhibitory properties of the selected herbal leads against SLO were studied by molecular docking. An in vitro assay was further carried out, and the variations observed were found to be significant (p<0.05). Antibiotic sensitivity testing was also performed on the clinical strain of Streptococcus pyogenes with conventional drugs; the clinical strain showed multi-drug resistance. Our study revealed that numerous phytoligands have better inhibitory properties towards the toxin. We noticed that incorporation of selected herbal extracts in blood agar medium showed a significant reduction in hemolysis (MIC 300 μL/plate), indicating inhibition of SLO. Furthermore, the butanol extracts of the herbal preparation selected by computer-aided screening showed significant inhibitory properties at a concentration of 250 μg/disc. We also noticed that the selected herbal formulations have better antimicrobial properties at an MIC range of 300-400 μL. Hence, our study suggests that these herbal extracts have better inhibitory properties against the toxin as well as against drug-resistant Streptococcus pyogenes.

  17. In silico method for modelling metabolism and gene product expression at genome scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem

    2012-07-03

    Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and improves the genome and transcription-unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
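
    At its core, the metabolic half of such genome-scale models is a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and capacity bounds on each flux. The sketch below solves that core problem for an invented three-reaction toy network with SciPy; the paper's ME-model additionally couples these fluxes to the expression machinery, which this sketch does not attempt.

```python
# Sketch: flux balance analysis (FBA) as a linear program on a toy network.
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass.
# Columns of S: v0 (uptake), v1 (A -> B), v2 (B -> biomass).
S = np.array([
    [ 1, -1,  0],   # mass balance for metabolite A
    [ 0,  1, -1],   # mass balance for metabolite B
])
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10

c = np.array([0.0, 0.0, -1.0])  # maximize v2 (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[2])  # limited by uptake, so 10
```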

  18. Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure

    NASA Astrophysics Data System (ADS)

    Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen

    2017-02-01

    Materials irradiation effects are among the most important issues for the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge about the evolution process leads to limited understanding of these issues. With the help of high-performance computing, we can gain a deeper understanding of materials at the microscopic level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and it takes more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics simulation packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.
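
    The abstract does not spell out the new data structure, but the workhorse structure in this class of MD codes is the linked-cell list, which finds interaction partners in O(N) by binning particles into cells no smaller than the cutoff radius. A generic NumPy sketch follows; it is not Crystal MD's BCC-specific layout.

```python
# Sketch: a linked-cell (cell list) neighbor search with periodic
# boundaries, the generic O(N) structure behind large-scale MD codes.
import numpy as np
from collections import defaultdict

def build_cell_list(pos: np.ndarray, box: float, rcut: float):
    """Bin each particle into a cubic cell of side >= rcut."""
    ncell = max(1, int(box // rcut))
    side = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // side).astype(int) % ncell)].append(i)
    return cells, ncell

def neighbors(i, pos, cells, ncell, box, rcut):
    """Indices of particles within rcut of particle i (minimum image)."""
    home = (pos[i] // (box / ncell)).astype(int) % ncell
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                cell = tuple((home + (dx, dy, dz)) % ncell)
                for j in cells.get(cell, []):
                    if j == i:
                        continue
                    d = pos[j] - pos[i]
                    d -= box * np.round(d / box)  # minimum-image convention
                    if np.dot(d, d) < rcut * rcut:
                        out.append(j)
    return out

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
cells, ncell = build_cell_list(pos, box=10.0, rcut=2.5)
print(len(neighbors(0, pos, cells, ncell, 10.0, 2.5)), "neighbors of particle 0")
```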

  19. Discovering the intelligence in molecular biology.

    PubMed

    Uberbacher, E

    1995-12-01

    The Third International Conference on Intelligent Systems in Molecular Biology was truly an outstanding event. Computational methods in molecular biology have reached a new level of maturity and utility, resulting in many high-impact applications. The success of this meeting bodes well for the rapid and continuing development of computational methods, intelligent systems and information-based approaches for the biosciences. The basic technology, originally most often applied to 'feasibility' problems, is now dealing effectively with the most difficult real-world problems. Significant progress has been made in understanding protein-structure information, structural classification, and how functional information and the relevant features of active-site geometry can be gleaned from structures by automated computational approaches. The value and limits of homology-based methods, and the ability to classify proteins by structure in the absence of homology, have reached a new level of sophistication. New methods for covariation analysis in the folding of large structures such as RNAs have shown remarkably good results, indicating the long-term potential to understand very complicated molecules and multimolecular complexes using computational means. Novel methods, such as HMMs, context-free grammars and the uses of mutual information theory, have taken center stage as highly valuable tools in our quest to represent and characterize biological information. A focus on creative uses of intelligent systems technologies and the trend toward biological application will undoubtedly continue and grow at the 1996 ISMB meeting in St Louis.

  20. Multi-threading: A new dimension to massively parallel scientific computation

    NASA Astrophysics Data System (ADS)

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2000-06-01

    Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.
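
    A quarter transformation of the four-index transform is a batch of dense matrix multiplies, which makes it a natural threading demonstration: NumPy's BLAS calls release the GIL, so Python threads genuinely overlap. The sketch below distributes one quarter transformation over a thread pool; the sizes are toy values and the index handling is deliberately simplified.

```python
# Sketch: multi-threading one quarter transformation of a four-index
# integral transform. Each slice is a dense matrix multiply.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

n = 32                                     # AO basis size (toy)
rng = np.random.default_rng(3)
eri = rng.standard_normal((n, n, n, n))    # (pq|rs) in the AO basis
C = rng.standard_normal((n, n))            # MO coefficients

def quarter_transform(p: int) -> np.ndarray:
    # Transform the last index for fixed p: (pq|rs) -> (pq|r l).
    return eri[p].reshape(n * n, n) @ C

# BLAS releases the GIL inside the matmul, so 4 workers overlap.
with ThreadPoolExecutor(max_workers=4) as pool:
    slices = list(pool.map(quarter_transform, range(n)))

half = np.stack(slices).reshape(n, n, n, n)
print(half.shape)
```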

  1. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multi-processor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a molecular dynamics code called ddcMD, developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid, recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lens of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  2. Information Presentation and Control in a Modern Air Traffic Control Tower Simulator

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Doubek, Sharon; Rabin, Boris; Harke, Stanton

    1996-01-01

    The proper presentation and management of information in America's largest and busiest (Level V) air traffic control towers calls for an in-depth understanding of many different human-computer considerations: user interface design for graphical, radar, and text; manual and automated data input hardware; information/display output technology; reconfigurable workstations; workload assessment; and many other related subjects. This paper discusses these subjects in the context of the Surface Development and Test Facility (SDTF) currently under construction at NASA's Ames Research Center, a full scale, multi-manned, air traffic control simulator which will provide the "look and feel" of an actual airport tower cab. Special emphasis will be given to the human-computer interfaces required for the different kinds of information displayed at the various controller and supervisory positions and to the computer-aided design (CAD) and other analytic, computer-based tools used to develop the facility.

  3. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  4. Assessment of the MHD capability in the ATHENA code using data from the ALEX (Argonne Liquid Metal Experiment) facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1988-10-28

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility. 13 refs., 4 figs., 2 tabs.

  5. System-level multi-target drug discovery from natural products with applications to cardiovascular diseases.

    PubMed

    Zheng, Chunli; Wang, Jinan; Liu, Jianling; Pei, Mengjie; Huang, Chao; Wang, Yonghua

    2014-08-01

    The term systems pharmacology describes a field of study that uses computational and experimental approaches to broaden the view of drug actions rooted in molecular interactions and advance the process of drug discovery. The aim of this work is to highlight the role that systems pharmacology plays in multi-target drug discovery from natural products for cardiovascular diseases (CVDs). Firstly, based on network pharmacology methods, we reconstructed the drug-target and target-target networks to determine the putative protein target set of multi-target drugs for CVD treatment. Secondly, we reintegrated a compound dataset of natural products and then obtained a subset of multi-target compounds by a virtual-screening process. Thirdly, a drug-likeness evaluation was applied to find the ADME-favorable compounds in this subset. Finally, we conducted in vitro experiments to evaluate the reliability of the selected chemicals and targets. We found that four of the five randomly selected natural molecules can effectively act on the target set for CVDs, indicating the reasonability of our systems-based method. This strategy may serve as a new model for multi-target drug discovery for complex diseases.

  6. Multi-Institution Research Centers: Planning and Management Challenges

    ERIC Educational Resources Information Center

    Spooner, Catherine; Lavey, Lisa; Mukuka, Chilandu; Eames-Brown, Rosslyn

    2016-01-01

    Funding multi-institution centers of research excellence (CREs) has become a common means of supporting collaborative partnerships to address specific research topics. However, there is little guidance for those planning or managing a multi-institution CRE, which faces specific challenges not faced by single-institution research centers. We…

  7. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fillion-Gourdeau, F., E-mail: filliong@CRM.UMontreal.ca; Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4; Lorin, E., E-mail: elorin@math.carleton.ca

    2016-02-15

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.
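
    Independent of the relativistic physics, the generic first step of any such Galerkin scheme is assembling matrices over the B-spline basis. The sketch below builds a clamped cubic B-spline basis and its overlap matrix by quadrature using SciPy (the design_matrix helper needs SciPy >= 1.8); the knot vector and grid are illustrative choices, not those of the paper.

```python
# Sketch: Galerkin overlap matrix S_ij = integral B_i(x) B_j(x) dx
# for a clamped cubic B-spline basis on [0, 1].
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic B-splines
breaks = np.linspace(0.0, 1.0, 11)       # break points on [0, 1]
# Clamped (open) knot vector: repeat each end knot k extra times.
knots = np.concatenate(([0.0] * k, breaks, [1.0] * k))
nbasis = len(knots) - k - 1              # 13 basis functions here

x = np.linspace(0.0, 1.0, 2001)          # quadrature grid
# Rows of B are the basis functions sampled on the grid.
B = BSpline.design_matrix(x, knots, k).toarray().T

# Trapezoidal quadrature for all pairs at once; S is banded because
# B-splines have compact support.
S = np.trapz(B[:, None, :] * B[None, :, :], x, axis=-1)
print(S.shape, "nonzero fraction:", np.count_nonzero(S > 1e-12) / S.size)
```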

  8. Collateral non cardiac findings in clinical routine CT coronary angiography: results from a multi-center registry.

    PubMed

    La Grutta, Ludovico; Malagò, Roberto; Maffei, Erica; Barbiani, Camilla; Pezzato, Andrea; Martini, Chiara; Arcadi, Teresa; Clemente, Alberto; Mollet, Nico R; Zuccarelli, Alessandra; Krestin, Gabriel P; Lagalla, Roberto; Pozzi Mucelli, Roberto; Cademartiri, Filippo; Midiri, Massimo

    2015-12-01

    The aim of the study was to evaluate the prevalence of collateral findings detected by computed tomography coronary angiography (CTCA) in a multi-center registry. We performed a retrospective review of 4303 patients (2719 males, mean age 60.3 ± 10.2 years) undergoing 64-slice CTCA for suspected or known coronary artery disease (CAD) at various academic institutions between 01/2006 and 09/2010. Collateral findings were recorded and scored as non-significant (no signs of relevant pathology, not necessary to report), significant (clear signs of pathology, mandatory to report), or major (remarkable pathology, mandatory to report and investigate further). We detected 6886 non-cardiac findings (1.6 non-cardiac findings per patient). Considering all centers, only 865/4303 (20.1%) patients were completely without any additional finding. Overall, 2095 (30.4%) non-significant, 4486 (65.2%) significant, and 305 (4.4%) major findings were detected. Among major findings, primary lung cancer was reported in 21 cases. In every center, the most prevalent significant findings were mediastinal lymph nodes >1 cm. In 256 patients, collateral findings were clinically more relevant than the coexisting CAD and explained the patients' symptoms. The prevalence of significant and major collateral findings in CTCA is high. Radiologists should carefully evaluate the entire scan volume in each patient.

  9. A DICOM based radiotherapy plan database for research collaboration and reporting

    NASA Astrophysics Data System (ADS)

    Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.

    2014-03-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
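
    The central derived quantity in such a database is the cumulative DVH: for each dose level, the fraction of a structure's volume receiving at least that dose. A minimal NumPy sketch follows; in practice the dose grid and structure mask would be read from DICOM RT objects (for example with pydicom), whereas here both arrays are synthetic.

```python
# Sketch: a cumulative dose-volume histogram (DVH) and two common plan
# metrics from a dose grid and a boolean structure mask.
import numpy as np

rng = np.random.default_rng(4)
dose = rng.gamma(shape=9.0, scale=5.0, size=(40, 40, 40))  # Gy, synthetic
mask = np.zeros_like(dose, dtype=bool)
mask[10:30, 10:30, 10:30] = True          # voxels inside the structure

d = np.sort(dose[mask])
levels = np.linspace(0.0, d.max(), 200)
# Cumulative DVH: fraction of structure volume receiving >= each level.
volume_fraction = 1.0 - np.searchsorted(d, levels) / d.size

d95 = np.percentile(d, 5)      # dose received by 95% of the volume
v40 = (d >= 40.0).mean()       # fraction of volume receiving >= 40 Gy
print(f"D95 = {d95:.1f} Gy, V40 = {100 * v40:.1f}%")
```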

  10. [Development and application of an Internet-based Computation Platform for Integrative Pharmacology of Traditional Chinese Medicine].

    PubMed

    Xu, Hai-Yu; Liu, Zhen-Ming; Fu, Yan; Zhang, Yan-Qiong; Yu, Jian-Jun; Guo, Fei-Fei; Tang, Shi-Huan; Lv, Chuan-Yu; Su, Jin; Cui, Ru-Yi; Yang, Hong-Jun

    2017-09-01

    Recently, integrative pharmacology (IP) has become a pivotal paradigm for the modernization of traditional Chinese medicine (TCM) and combinatorial drug discovery. It is an interdisciplinary science for establishing the in vitro and in vivo correlation between the absorption, distribution, metabolism, and excretion/pharmacokinetic (ADME/PK) profiles of TCM and the molecular networks of disease, by integrating knowledge across multiple disciplines and stages. In the present study, an Internet-based Computation Platform for IP of TCM (TCM-IP, www.tcmip.cn) was established to promote the development of this emerging discipline. A TCM big-data resource is an important component of TCM-IP, including the Chinese Medicine Formula Database, Chinese Medical Herbs Database, Chemical Database of Chinese Medicine, Target Database for Diseases and Symptoms, and others. Meanwhile, data mining and bioinformatics approaches are critical technologies for TCM-IP, including identification of TCM constituents, ADME prediction, target prediction for TCM constituents, and network construction and analysis. Furthermore, network beautification and individualized design are employed to meet users' requirements. We believe that TCM-IP is a very useful tool for identifying the active constituents of TCM and their potential molecular mechanisms of therapeutic action, and that it could be widely applied in quality evaluation, clinical repositioning, scientific discovery based on original thinking, prescription compatibility, and new TCM drug development. Copyright© by the Chinese Pharmaceutical Association.

  11. A Multi-scale Computational Platform to Mechanistically Assess the Effect of Genetic Variation on Drug Responses in Human Erythrocyte Metabolism

    PubMed Central

    Bordbar, Aarash; Palsson, Bernhard O.

    2016-01-01

    Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein’s structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism. PMID:27467583

  12. A Multi-scale Computational Platform to Mechanistically Assess the Effect of Genetic Variation on Drug Responses in Human Erythrocyte Metabolism.

    PubMed

    Mih, Nathan; Brunk, Elizabeth; Bordbar, Aarash; Palsson, Bernhard O

    2016-07-01

    Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein's structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism.

  13. Next generation of network medicine: interdisciplinary signaling approaches.

    PubMed

    Korcsmaros, Tamas; Schneider, Maria Victoria; Superti-Furga, Giulio

    2017-02-20

    In the last decade, network approaches have transformed our understanding of biological systems. Network analyses and visualizations have allowed us to identify essential molecules and modules in biological systems, and improved our understanding of how changes in cellular processes can lead to complex diseases, such as cancer, infectious and neurodegenerative diseases. "Network medicine" involves unbiased large-scale network-based analyses of diverse data describing interactions between genes, diseases, phenotypes, drug targets, drug transport, drug side-effects, disease trajectories and more. In terms of drug discovery, network medicine exploits our understanding of the network connectivity and signaling system dynamics to help identify optimal, often novel, drug targets. Contrary to initial expectations, however, network approaches have not yet delivered a revolution in molecular medicine. In this review, we propose that a key reason for the limited impact, so far, of network medicine is a lack of quantitative multi-disciplinary studies involving scientists from different backgrounds. To support this argument, we present existing approaches from structural biology, 'omics' technologies (e.g., genomics, proteomics, lipidomics) and computational modeling that point towards how multi-disciplinary efforts allow for important new insights. We also highlight some breakthrough studies as examples of the potential of these approaches, and suggest ways to make greater use of the power of interdisciplinarity. This review reflects discussions held at an interdisciplinary signaling workshop which facilitated knowledge exchange from experts from several different fields, including in silico modelers, computational biologists, biochemists, geneticists, molecular and cell biologists as well as cancer biologists and pharmacologists.

  14. A New Cell-Centered Implicit Numerical Scheme for Ions in the 2-D Axisymmetric Code Hall2de

    NASA Technical Reports Server (NTRS)

    Lopez Ortega, Alejandro; Mikellides, Ioannis G.

    2014-01-01

    We present a new algorithm in the Hall2De code to simulate the ion hydrodynamics in the acceleration channel and near plume regions of Hall-effect thrusters. This implementation constitutes an upgrade of the capabilities built in the Hall2De code. The equations of mass conservation and momentum for unmagnetized ions are solved using a conservative, finite-volume, cell-centered scheme on a magnetic-field-aligned grid. Major computational savings are achieved by making use of an implicit predictor/multi-corrector algorithm for time evolution. Inaccuracies in the prediction of the motion of low-energy ions in the near plume in hydrodynamics approaches are addressed by implementing a multi-fluid algorithm that tracks ions of different energies separately. A wide range of comparisons with measurements are performed to validate the new ion algorithms. Several numerical experiments with the location and value of the anomalous collision frequency are also presented. Differences in the plasma properties in the near-plume between the single fluid and multi-fluid approaches are discussed. We complete our validation by comparing predicted erosion rates at the channel walls of the thruster with measurements. Erosion rates predicted by the plasma properties obtained from simulations replicate accurately measured rates of erosion within the uncertainty range of the sputtering models employed.
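
    The conservative, cell-centered update with a predictor/corrector step can be illustrated on the simplest possible case. The sketch below applies one predictor plus one corrector pass (a Heun step) to the 1-D continuity equation with upwind fluxes; it mirrors only the generic structure of such schemes, not the Hall2De discretization, and the grid, speed, and time step are toy values.

```python
# Sketch: cell-centered finite-volume update with predictor/corrector
# time stepping for d(rho)/dt + d(rho*u)/dx = 0, periodic boundaries.
import numpy as np

nx = 200
dx = 1.0 / nx
dt = 0.002                 # CFL = u*dt/dx = 0.4
u = 1.0                    # fixed drift speed
x = (np.arange(nx) + 0.5) * dx
rho = 1.0 + 0.5 * np.exp(-((x - 0.3) ** 2) / 0.002)
mass0 = rho.sum() * dx

def div_flux(r: np.ndarray) -> np.ndarray:
    f = u * r                        # upwind face flux (u > 0)
    return (f - np.roll(f, 1)) / dx  # telescoping sum conserves mass

for _ in range(100):
    rho_star = rho - dt * div_flux(rho)                           # predictor
    rho = rho - 0.5 * dt * (div_flux(rho) + div_flux(rho_star))   # corrector

print("relative mass drift:", abs(rho.sum() * dx - mass0) / mass0)
```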

  15. Interactive display of molecular models using a microcomputer system

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Macelroy, R. D.

    1980-01-01

    A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers and command options for linking and executing programs. The graphics program, written in UCSD PASCAL, involves the centering of the coordinate system, the transformation of centered model coordinates into homogeneous coordinates, the construction of a viewing transformation matrix to operate on the coordinates, clipping invisible points, perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. The data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggest general applicability.

  16. Highly Active and Stable MgAl2O4 Supported Rh and Ir Catalysts for Methane Steam Reforming: A Combined Experimental and Theoretical Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai; Glezakou, Vassiliki Alexandra; Lebarbier, Vanessa MC

    2014-07-01

    In this work we present a combined experimental and theoretical investigation of stable MgAl2O4 spinel-supported Rh and Ir catalysts for the steam methane reforming (SMR) reaction. Firstly, catalytic performance for a series of noble metal catalysts supported on MgAl2O4 spinel was evaluated for SMR at 600-850°C. The turnover rate at 850°C follows the order: Pd > Pt > Ir > Rh > Ru > Ni. However, Rh and Ir were found to have the best combination of activity and stability for methane steam reforming in the presence of simulated biomass-derived syngas. It was found that highly dispersed ~2 nm Rh and ~1 nm Ir clusters were formed on the MgAl2O4 spinel support. Scanning Transmission Electron Microscopy (STEM) images show that excellent dispersion was maintained even under challenging high-temperature conditions (e.g., at 850°C in the presence of steam), while Ir and Rh catalysts supported on Al2O3 were observed to sinter at increased rates under the same conditions. These observations were further confirmed by ab initio molecular dynamics (AIMD) simulations, which find that ~1 nm Rh and Ir particles (50-atom clusters) bind strongly to the MgAl2O4 surfaces via a redox process leading to a strong metal-support interaction, thus helping anchor the metal clusters and reduce the tendency to sinter. Density functional theory (DFT) calculations suggest that these supported smaller Rh and Ir particles have a lower work function than larger, more bulk-like ones, which enables them to activate both water and methane more effectively than larger particles, yet have a minimal influence on the relative stability of coke precursors. In addition, theoretical mechanistic studies were used to probe the relationship between structure and reactivity. Consistent with the experimental observations, our theoretical modeling results also suggest that the small spinel-supported Ir particle catalyst is more active than the counterpart Rh catalyst for SMR. This work was financially supported by the United States Department of Energy (DOE)'s Bioenergy Technologies Office (BETO) and performed at the Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated for DOE by Battelle Memorial Institute. Computing time was granted by a user proposal at the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) located at PNNL. Part of the computational time was provided by the National Energy Research Scientific Computing Center (NERSC).

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials, and it is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve the difficult synchronization issues that arise in computations of the multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of other problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  18. Building better water models using the shape of the charge distribution of a water molecule

    NASA Astrophysics Data System (ADS)

    Dharmawardhana, Chamila Chathuranga; Ichiye, Toshiko

    2017-11-01

    The unique properties of liquid water apparently arise from more than just the tetrahedral bond angle between the nuclei of a water molecule since simple three-site models of water are poor at mimicking these properties in computer simulations. Four- and five-site models add partial charges on dummy sites and are better at modeling these properties, which suggests that the shape of charge distribution is important. Since a multipole expansion of the electrostatic potential describes a charge distribution in an orthogonal basis set that is exact in the limit of infinite order, multipoles may be an even better way to model the charge distribution. In particular, molecular multipoles up to the octupole centered on the oxygen appear to describe the electrostatic potential from electronic structure calculations better than four- and five-site models, and molecular multipole models give better agreement with the temperature and pressure dependence of many liquid state properties of water while retaining the computational efficiency of three-site models. Here, the influence of the shape of the molecular charge distribution on liquid state properties is examined by correlating multipoles of non-polarizable water models with their liquid state properties in computer simulations. This will aid in the development of accurate water models for classical simulations as well as in determining the accuracy needed in quantum mechanical/molecular mechanical studies and ab initio molecular dynamics simulations of water. More fundamentally, this will lead to a greater understanding of how the charge distribution of a water molecule leads to the unique properties of liquid water. In particular, these studies indicate that p-orbital charge out of the molecular plane is important.
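
    For reference, one common convention for the expansion the abstract appeals to is sketched below; truncating after the next (octupole) order gives the level of description used by the molecular multipole models discussed here. The traceless quadrupole definition shown is a standard choice, not necessarily the exact convention of these authors.

```latex
% Multipole expansion of the electrostatic potential outside a charge
% distribution rho(r'); SI units, hats denote unit vectors.
V(\mathbf{r}) \;=\; \frac{1}{4\pi\varepsilon_0}\left[
    \frac{q}{r}
  + \frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^{2}}
  + \frac{1}{2}\,\frac{\hat{\mathbf{r}}^{\mathsf T}\boldsymbol{\Theta}\,\hat{\mathbf{r}}}{r^{3}}
  + \mathcal{O}\!\left(r^{-4}\right)
\right],
\qquad
\Theta_{ij} \;=\; \int \rho(\mathbf{r}')\left(3\,r'_i r'_j - r'^{2}\,\delta_{ij}\right)\,d^3 r' .
```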

  19. 77 FR 12598 - Notice Correction; A Multi-Center International Hospital-Based Case-Control Study of Lymphoma in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-01

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Notice Correction; A Multi-Center International Hospital-Based Case-Control Study of Lymphoma in Asia (AsiaLymph) (NCI) The Federal... project titled, ``A multi-center international hospital-based case-control study of lymphoma in Asia (Asia...

  20. Phase quality map based on local multi-unwrapped results for two-dimensional phase unwrapping.

    PubMed

    Zhong, Heping; Tang, Jinsong; Zhang, Sen

    2015-02-01

    The efficiency of a phase unwrapping algorithm and the reliability of the corresponding unwrapped result are two key problems in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) or interferometric synthetic aperture sonar (InSAS) data. In this paper, a new phase quality map is designed and implemented in a graphic processing unit (GPU) environment, which greatly accelerates the unwrapping process of the quality-guided algorithm and enhances the correctness of the unwrapped result. In a local wrapped phase window, the center point is selected as the reference point, and then two unwrapped results are computed by integrating in two different simple ways. After the two local unwrapped results are computed, the total difference of the two unwrapped results is regarded as the phase quality value of the center point. In order to accelerate the computing process of the new proposed quality map, we have implemented it in a GPU environment. The wrapped phase data are first uploaded to the memory of a device, and then the kernel function is called in the device to compute the phase quality in parallel by blocks of threads. Unwrapping tests performed on the simulated and real InSAS data confirm the accuracy and efficiency of the proposed method.
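
    The quality measure itself is simple to state: unwrap a small window around each pixel in two different integration orders and score the pixel by how much the two results disagree. The NumPy sketch below implements that idea with row-first versus column-first unwrapping; the window size and toy interferogram are illustrative, and the paper's exact integration paths and GPU kernel layout are not reproduced.

```python
# Sketch: a two-path phase quality map for wrapped phase data.
import numpy as np

def two_path_quality(psi: np.ndarray, half: int = 2) -> np.ndarray:
    """Per-pixel quality value: disagreement between two local unwrapping
    orders (smaller = more reliable). Border pixels are left at +inf."""
    h, w = psi.shape
    q = np.full((h, w), np.inf)
    for r in range(half, h - half):
        for c in range(half, w - half):
            win = psi[r - half : r + half + 1, c - half : c + half + 1]
            a = np.unwrap(np.unwrap(win, axis=1), axis=0)  # rows, then columns
            b = np.unwrap(np.unwrap(win, axis=0), axis=1)  # columns, then rows
            q[r, c] = np.abs(a - b).sum()                  # path disagreement
    return q

# Toy interferogram: a smooth wrapped ramp with a noisy (unreliable) patch.
rng = np.random.default_rng(5)
yy, xx = np.mgrid[0:64, 0:64]
phase = 0.4 * xx + 0.2 * yy
phase[20:30, 20:30] += 3.0 * rng.standard_normal((10, 10))
wrapped = np.angle(np.exp(1j * phase))

q = two_path_quality(wrapped)
inner = q[2:-2, 2:-2]
r, c = np.unravel_index(np.argmax(inner), inner.shape)
print("least reliable pixel near:", (r + 2, c + 2))  # falls in the noisy patch
```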

  1. A Photogrammetric System for Model Attitude Measurement in Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2007-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and photogrammetric principles for point tracking to compute model position including pitch, roll and yaw. A discussion of the constraints encountered during the design, and a review of the measurement results obtained from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  2. Autonomous Navigation, Dynamic Path and Work Flow Planning in Multi-Agent Robotic Swarms Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Leucht, Kurt; Stolleis, Karl

    2015-01-01

    Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots, called Swarmies, to be used as a ground-based research platform for in-situ resource utilization missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in an unknown environment and return those resources to a central site.

  3. Integrating electronic patient records into a multi-media clinic-based simulation center using a PC blade platform: a foundation for a new pedagogy in dentistry.

    PubMed

    Taylor, David; Valenza, John A; Spence, James M; Baber, Randolph H

    2007-10-11

    Simulation has been used for many years in dental education, but the educational context is typically a laboratory divorced from the clinical setting, which impairs the transfer of learning. Here we report on a true simulation clinic with multimedia communication from a central teaching station. Each of the 43 fully-functioning student operatories includes a thin-client networked computer with access to an Electronic Patient Record (EPR).

  4. Molecular Imaging in the College of Optical Sciences – An Overview of Two Decades of Instrumentation Development

    PubMed Central

    Furenlid, Lars R.; Barrett, Harrison H.; Barber, H. Bradford; Clarkson, Eric W.; Kupinski, Matthew A.; Liu, Zhonglin; Stevenson, Gail D.; Woolfenden, James M.

    2015-01-01

    During the past two decades, researchers at the University of Arizona’s Center for Gamma-Ray Imaging (CGRI) have explored a variety of approaches to gamma-ray detection, including scintillation cameras, solid-state detectors, and hybrids such as the intensified Quantum Imaging Device (iQID) configuration where a scintillator is followed by optical gain and a fast CCD or CMOS camera. We have combined these detectors with a variety of collimation schemes, including single and multiple pinholes, parallel-hole collimators, synthetic apertures, and anamorphic crossed slits, to build a large number of preclinical molecular-imaging systems that perform Single-Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), and X-Ray Computed Tomography (CT). In this paper, we discuss the themes and methods we have developed over the years to record and fully use the information content carried by every detected gamma-ray photon. PMID:26236069

  5. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    PubMed

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big-data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
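
    Two of the ingredients named above, the optimal-cutoff search and a resampling p-value that accounts for that search, can be sketched generically. In the toy example below a Welch-style statistic stands in for the survival analysis, and the entire cutoff scan is repeated on permuted outcomes so the selection step does not inflate significance; this illustrates the principle only and is not the authors' RSA implementation.

```python
# Sketch: optimal expression cutoff plus a selection-aware permutation
# p-value, with a simple two-group statistic standing in for survival.
import numpy as np

rng = np.random.default_rng(6)
expr = rng.normal(size=200)                           # expression values
outcome = 0.8 * (expr > 0.3) + rng.normal(size=200)   # toy association

def split_stat(x, y, cut):
    lo, hi = y[x <= cut], y[x > cut]
    if min(lo.size, hi.size) < 10:                    # skip degenerate splits
        return 0.0
    return abs(lo.mean() - hi.mean()) / np.sqrt(lo.var() / lo.size + hi.var() / hi.size)

cuts = np.quantile(expr, np.linspace(0.1, 0.9, 33))
best_cut = max(cuts, key=lambda c: split_stat(expr, outcome, c))
obs = split_stat(expr, outcome, best_cut)

# Permutation null: rerun the *whole* cutoff search on shuffled outcomes
# so the optimization itself is accounted for in the p-value.
null = [max(split_stat(expr, rng.permutation(outcome), c) for c in cuts)
        for _ in range(500)]
p = (1 + sum(n >= obs for n in null)) / (1 + len(null))
print(f"best cutoff {best_cut:.2f}, permutation p = {p:.3f}")
```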

  6. Development of massive multilevel molecular dynamics simulation program, Platypus (PLATform for dYnamic Protein Unified Simulation), for the elucidation of protein functions.

    PubMed

    Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki

    2016-05-05

    A massively parallel program for quantum mechanical/molecular mechanical (QM/MM) molecular dynamics simulation, called Platypus (PLATform for dYnamic Protein Unified Simulation), was developed to elucidate protein functions. The speedup and parallelization ratio of Platypus in QM and QM/MM calculations were assessed for a bacteriochlorophyll dimer in the photosynthetic reaction center (DIMER) on the K computer, a massively parallel computer achieving 10 PetaFLOPS with 705,024 cores. Platypus exhibited increasing speedup up to 20,000 cores at the HF/cc-pVDZ and B3LYP/cc-pVDZ levels, and up to 10,000 cores for the CASCI(16,16)/6-31G** calculations. We also performed excited-state QM/MM-MD simulations on the chromophore of Sirius (SIRIUS) in water. Sirius is a pH-insensitive and photo-stable ultramarine fluorescent protein. Platypus accelerated on-the-fly excited-state QM/MM-MD simulations for SIRIUS in water using over 4000 cores. In addition, it also succeeded in a 50-ps (200,000-step) on-the-fly excited-state QM/MM-MD simulation of SIRIUS in water. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

  7. Incorporation of local structure into kriging models for the prediction of atomistic properties in the water decamer.

    PubMed

    Davie, Stuart J; Di Pasquale, Nicodemo; Popelier, Paul L A

    2016-10-15

    Machine learning algorithms have been demonstrated to predict atomistic properties approaching the accuracy of quantum chemical calculations at significantly less computational cost. Difficulties arise, however, when attempting to apply these techniques to large systems, or systems possessing excessive conformational freedom. In this article, the machine learning method kriging is applied to predict both the intra-atomic and interatomic energies, as well as the electrostatic multipole moments, of the atoms of a water molecule at the center of a 10 water molecule (decamer) cluster. Unlike previous work, where the properties of small water clusters were predicted using a molecular local frame, and where training set inputs (features) were based on atomic index, a variety of feature definitions and coordinate frames are considered here to increase prediction accuracy. It is shown that, for a water molecule at the center of a decamer, no single method of defining features or coordinate schemes is optimal for every property. However, explicitly accounting for the structure of the first solvation shell in the definition of the features of the kriging training set, and centring the coordinate frame on the atom-of-interest will, in general, return better predictions than models that apply the standard methods of feature definition, or a molecular coordinate frame. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
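
    Kriging is, in machine-learning terms, Gaussian-process regression, so the modeling step described above can be sketched with scikit-learn standing in for the kriging engine. The features and target below are synthetic stand-ins for the geometric inputs and atomistic energies discussed in the paper.

```python
# Sketch: kriging (Gaussian-process regression) of an atomic property
# from geometric features, with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
# Features: e.g., distances from the central O to first-shell atoms (toy).
X = rng.uniform(2.5, 3.5, size=(60, 4))
# Synthetic "intra-atomic energy" varying smoothly with the features.
y = np.sin(X).sum(axis=1) + 0.01 * rng.standard_normal(60)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.uniform(2.5, 3.5, size=(5, 4))
mean, std = gp.predict(X_new, return_std=True)  # prediction + uncertainty
print(np.round(mean, 3), np.round(std, 3))
```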

  8. The University of New Mexico Center for Molecular Discovery

    PubMed Central

    Edwards, Bruce S.; Gouveia, Kristine; Oprea, Tudor I.; Sklar, Larry A.

    2015-01-01

    The University of New Mexico Center for Molecular Discovery (UNMCMD) is an academic research center that specializes in discovery using high throughput flow cytometry (HTFC) integrated with virtual screening, as well as knowledge mining and drug informatics. With a primary focus on identifying small molecules that can be used as chemical probes and as leads for drug discovery, it is a central core resource for research and translational activities at UNM that supports implementation and management of funded screening projects as well as “up-front” services such as consulting for project design and implementation, assistance in assay development and generation of preliminary data for pilot projects in support of competitive grant applications. The HTFC platform in current use represents advanced, proprietary technology developed at UNM that is now routinely capable of processing bioassays arrayed in 96-, 384- and 1536-well formats at throughputs of 60,000 or more wells per day. Key programs at UNMCMD include screening of research targets submitted by the international community through NIH’s Molecular Libraries Program; a multi-year effort involving translational partnerships at UNM directed towards drug repurposing - identifying new uses for clinically approved drugs; and a recently established personalized medicine initiative for advancing cancer therapy by the application of “smart” oncology drugs in selected patients based on response patterns of their cancer cells in vitro. UNMCMD discoveries, innovation, and translation have contributed to a wealth of inventions, patents, licenses and publications, as well as startup companies, clinical trials and a multiplicity of domestic and international collaborative partnerships to further the research enterprise. PMID:24409953

  9. The University of New Mexico Center for Molecular Discovery.

    PubMed

    Edwards, Bruce S; Gouveia, Kristine; Oprea, Tudor I; Sklar, Larry A

    2014-03-01

    The University of New Mexico Center for Molecular Discovery (UNMCMD) is an academic research center that specializes in discovery using high throughput flow cytometry (HTFC) integrated with virtual screening, as well as knowledge mining and drug informatics. With a primary focus on identifying small molecules that can be used as chemical probes and as leads for drug discovery, it is a central core resource for research and translational activities at UNM that supports implementation and management of funded screening projects as well as "up-front" services such as consulting for project design and implementation, assistance in assay development and generation of preliminary data for pilot projects in support of competitive grant applications. The HTFC platform in current use represents advanced, proprietary technology developed at UNM that is now routinely capable of processing bioassays arrayed in 96-, 384- and 1536-well formats at throughputs of 60,000 or more wells per day. Key programs at UNMCMD include screening of research targets submitted by the international community through NIH's Molecular Libraries Program; a multi-year effort involving translational partnerships at UNM directed towards drug repurposing - identifying new uses for clinically approved drugs; and a recently established personalized medicine initiative for advancing cancer therapy by the application of "smart" oncology drugs in selected patients based on response patterns of their cancer cells in vitro. UNMCMD discoveries, innovation, and translation have contributed to a wealth of inventions, patents, licenses and publications, as well as startup companies, clinical trials and a multiplicity of domestic and international collaborative partnerships to further the research enterprise.

  10. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    PubMed

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
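
    For developers, incorporating the services typically amounts to a single authenticated HTTP call per lookup. The sketch below is a hypothetical example of a term search with Python's requests library; the service root, parameter names and response fields shown are assumptions to be checked against the current NCBO Web services documentation.

```python
import requests

API_ROOT = "http://data.bioontology.org"   # assumed service root
API_KEY = "YOUR_NCBO_API_KEY"              # placeholder credential

# Hypothetical term search across the ontology library.
resp = requests.get(f"{API_ROOT}/search",
                    params={"q": "melanoma", "apikey": API_KEY},
                    timeout=30)
resp.raise_for_status()
for hit in resp.json().get("collection", [])[:5]:
    print(hit.get("prefLabel"), "->", hit.get("@id"))
```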

  11. Anti-HMGCR antibodies as a biomarker for immune-mediated necrotizing myopathies: A history of statins and experience from a large international multi-center study.

    PubMed

    Musset, Lucile; Allenbach, Yves; Benveniste, Olivier; Boyer, Olivier; Bossuyt, Xavier; Bentow, Chelsea; Phillips, Joe; Mammen, Andrew; Van Damme, Philip; Westhovens, René; Ghirardello, Anna; Doria, Andrea; Choi, May Y; Fritzler, Marvin J; Schmeling, Heinrike; Muro, Yoshinao; García-De La Torre, Ignacio; Ortiz-Villalvazo, Miguel A; Bizzaro, Nicola; Infantino, Maria; Imbastaro, Tiziana; Peng, Qinglin; Wang, Guochun; Vencovský, Jiří; Klein, Martin; Krystufkova, Olga; Franceschini, Franco; Fredi, Micaela; Hue, Sophie; Belmondo, Thibaut; Danko, Katalin; Mahler, Michael

    2016-10-01

    In an effort to find naturally occurring substances that reduce cholesterol by inhibiting 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR), statins were first discovered by Endo in 1972. With the widespread prescription and use of statins to decrease morbidity from myocardial infarction and stroke, it was noted that approximately 5% of all statin users experienced muscle pain and weakness during treatment. In a smaller proportion of patients, the myopathy progressed to severe morbidity marked by proximal weakness and severe muscle wasting. Remarkably, Mammen and colleagues were the first to discover that the molecular target of statins, HMGCR, is an autoantibody target in patients who develop an immune-mediated necrotizing myopathy (IMNM). These observations have been confirmed in a number of studies but, until now, a multi-center, international study of IMNM, related idiopathic inflammatory myopathies (IIM), other auto-inflammatory conditions and controls has not been published. Accordingly, an international, multi-center study investigated the utility of anti-HMGCR antibodies in the diagnosis of statin-associated IMNM in comparison to different forms of IIM and controls. This study included samples from patients with different forms of IIM (n=1250) and patients with other diseases (n=656) that were collected from twelve sites and tested for anti-HMGCR antibodies by ELISA. This study confirmed that anti-HMGCR autoantibodies, when found in conjunction with statin use, characterize a subset of IIM patients who are older and have necrosis on muscle biopsy. Taken together, the data to date indicate that testing for anti-HMGCR antibodies is important in the differential diagnosis of IIM and might be considered for future classification criteria. Copyright © 2016. Published by Elsevier B.V.

  12. CADRE-SS, an in Silico Tool for Predicting Skin Sensitization Potential Based on Modeling of Molecular Interactions.

    PubMed

    Kostal, Jakub; Voutchkova-Kostal, Adelina

    2016-01-19

    Using computer models to accurately predict toxicity outcomes is considered to be a major challenge. However, state-of-the-art computational chemistry techniques can now be incorporated into predictive models, supported by advances in mechanistic toxicology and the exponential growth of computing resources witnessed over the past decade. The CADRE (Computer-Aided Discovery and REdesign) platform relies on quantum-mechanical modeling of molecular interactions that represent key biochemical triggers in toxicity pathways. Here, we present an external validation exercise for CADRE-SS, a variant developed to predict the skin sensitization potential of commercial chemicals. CADRE-SS is a hybrid model that evaluates skin permeability using Monte Carlo simulations, assigns reactive centers in a molecule and possible biotransformations via expert rules, and determines reactivity with skin proteins via quantum-mechanical modeling. The results were promising, with very good overall concordance (93%) between experimental and predicted values. Comparison to performance metrics yielded by other tools available for this endpoint suggests that CADRE-SS offers distinct advantages for first-round screenings of chemicals and could be used as an in silico alternative to animal tests where permissible by legislative programs.

  13. Molecular determinants of enzyme cold adaptation: comparative structural and computational studies of cold- and warm-adapted enzymes.

    PubMed

    Papaleo, Elena; Tiberti, Matteo; Invernizzi, Gaetano; Pasi, Marco; Ranzani, Valeria

    2011-11-01

    The identification of molecular mechanisms underlying enzyme cold adaptation is a hot topic for both fundamental research and industrial applications. In the present contribution, we review the last decades of structural and computational investigations of cold-adapted enzymes in comparison with their warm-adapted counterparts. Comparative sequence and structural studies allow the definition of a multitude of adaptation strategies: different enzymes have evolved diverse mechanisms to adapt to low temperatures, so that a general theory of enzyme cold adaptation cannot be formulated. However, some common features can be traced in the dynamic and flexibility properties of these enzymes, as well as in their intra- and inter-molecular interaction networks. Interestingly, the current data suggest that a family-centered point of view is necessary in comparative analyses of cold- and warm-adapted enzymes. In fact, enzymes belonging to the same family or superfamily, thus sharing at least the three-dimensional fold and common features of the functional sites, have evolved similar structural and dynamic patterns to overcome the detrimental effects of low temperatures.

  14. VEST: Abstract Vector Calculus Simplification in Mathematica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Squire, J. Burby and H. Qin

    2013-03-12

    We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce scalar and vector expressions of a very general type using a systematic canonicalization procedure. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by canonicalization, subsequently applying these to simplify large expressions. In a companion paper [1], we employ VEST in the automation of the calculation of Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.

  15. VEST: Abstract vector calculus simplification in Mathematica

    NASA Astrophysics Data System (ADS)

    Squire, J.; Burby, J.; Qin, H.

    2014-01-01

    We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce three-dimensional scalar and vector expressions of a very general type to a well-defined standard form. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by reduction, subsequently applying these to simplify large expressions. In a companion paper, Burby et al. (2013) [12], we employ VEST in the automation of the calculation of high-order Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.
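
    VEST itself is a Mathematica package, but the epsilon-delta identity underlying its multi-term simplifications is easy to verify numerically. A quick check in Python (illustrative only, unrelated to VEST's implementation):

```python
import numpy as np

# Rank-3 Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Verify eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl,
# the contraction behind many multi-term vector identities.
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
d = np.eye(3)
rhs = np.einsum('jl,km->jklm', d, d) - np.einsum('jm,kl->jklm', d, d)
assert np.allclose(lhs, rhs)
print("epsilon-delta identity verified")
```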

  16. Clustering recommendations to compute agent reputation

    NASA Astrophysics Data System (ADS)

    Bedi, Punam; Kaur, Harmeet

    2005-03-01

    Traditional centralized approaches to security are difficult to apply to multi-agent systems, which are nowadays used in e-commerce applications. Developing a notion of trust that is based on the reputation of an agent can provide a softer notion of security that is sufficient for many multi-agent applications. Our paper proposes a mechanism for computing the reputation of a trustee agent for use by a trustier agent. The trustier agent computes the reputation based on its own experience as well as the experience the peer agents have with the trustee agent. The trustier agent intentionally interacts with the peer agents to get their experience information in the form of recommendations. We have also considered the case of unintentional encounters between the referee agents and the trustee agent, which can be direct or indirect through a set of interacting agents. Clustering is done to filter the noise, in the form of outliers, out of the recommendations. The trustier agent clusters the recommendations received from referee agents on the basis of the distances between recommendations, using the hierarchical agglomerative method. The dendrogram thus obtained is cut at the required similarity level, which restricts the maximum distance between any two recommendations within a cluster. The cluster with the maximum number of elements represents the views of the majority of recommenders, and the center of this cluster, computed with the c-means algorithm, represents the reputation of the trustee agent.
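
    A minimal sketch of this pipeline using SciPy's agglomerative clustering is given below. The cut distance, the toy data, and the use of a plain cluster mean in place of the c-means center are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def reputation_from_recommendations(recs, cut_distance=0.5):
    """Cluster peer recommendations (rows = referees, columns =
    rating dimensions), keep the largest cluster as the majority
    view, and return its centre as the trustee's reputation."""
    Z = linkage(recs, method='average')   # agglomerative dendrogram
    labels = fcluster(Z, t=cut_distance, criterion='distance')
    majority = np.argmax(np.bincount(labels)[1:]) + 1
    return recs[labels == majority].mean(axis=0)   # cluster centre

rng = np.random.default_rng(1)
recs = np.vstack([rng.normal(0.8, 0.05, (12, 3)),   # consistent peers
                  rng.normal(0.2, 0.05, (3, 3))])   # outliers (noise)
print(reputation_from_recommendations(recs))
```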

  17. Rasdaman for Big Spatial Raster Data

    NASA Astrophysics Data System (ADS)

    Hu, F.; Huang, Q.; Scheele, C. J.; Yang, C. P.; Yu, M.; Liu, K.

    2015-12-01

    Spatial raster data have grown exponentially over the past decade. Recent advancements in data acquisition technology, such as remote sensing, have allowed us to collect massive observation data of various spatial resolutions and domain coverages. The volume, velocity, and variety of such spatial data, along with the computationally intensive nature of spatial queries, pose a grand challenge to storage technologies for effective big data management. While high performance computing platforms (e.g., cloud computing) can be used to solve the computing-intensive issues in big data analysis, data has to be managed in a way that is suitable for distributed parallel processing. Recently, rasdaman (raster data manager) has emerged as a scalable and cost-effective database solution to store and retrieve massive multi-dimensional arrays, such as sensor, image, and statistics data. Within this paper, the pros and cons of using rasdaman to manage and query spatial raster data will be examined and compared with other common approaches, including file-based systems, relational databases (e.g., PostgreSQL/PostGIS), and NoSQL databases (e.g., MongoDB and Hive). Earth Observing System (EOS) data collected from NASA's Atmospheric Science Data Center (ASDC) will be used and stored in these selected database systems, and a set of spatial and non-spatial queries will be designed to benchmark their performance on retrieving large-scale, multi-dimensional arrays of EOS data. Lessons learnt from using rasdaman will be discussed as well.

  18. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability, from the system level to the application level; (2) flexible and dynamic software development and expansion; and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  19. Electrophysiological signal analysis and visualization using Cloudwave for epilepsy clinical research.

    PubMed

    Jayapandian, Catherine P; Chen, Chien-Hung; Bozorgi, Alireza; Lhatoo, Samden D; Zhang, Guo-Qiang; Sahoo, Satya S

    2013-01-01

    Epilepsy is the most common serious neurological disorder, affecting 50-60 million persons worldwide. Electrophysiological data recordings, such as the electroencephalogram (EEG), are the gold standard for diagnosis and pre-surgical evaluation in epilepsy patients. The increasing trend towards multi-center clinical studies requires signal visualization and analysis tools that support real-time interaction with signal data in a collaborative environment, which cannot be provided by traditional desktop-based standalone applications. As part of the Prevention and Risk Identification of SUDEP Mortality (PRISM) project, we have developed a Web-based electrophysiology data visualization and analysis platform called Cloudwave using highly scalable open source cloud computing infrastructure. Cloudwave is integrated with the PRISM patient cohort identification tool called MEDCIS (Multi-modality Epilepsy Data Capture and Integration System). The Epilepsy and Seizure Ontology (EpSO) underpins both Cloudwave and MEDCIS to support query composition and result retrieval. Cloudwave is being used by clinicians and research staff at the University Hospital - Case Medical Center (UH-CMC) Epilepsy Monitoring Unit (EMU) and will be progressively deployed at four EMUs in the United States and the United Kingdom as part of the PRISM project.

  20. Explorative search of distributed bio-data to answer complex biomedical questions

    PubMed Central

    2014-01-01

    Background The huge amount of biomedical-molecular data increasingly produced is providing scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract those data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment which eases search, exploration and ranking-aware combination of heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them, and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results. Thus, it provides an extremely useful automated support for exploratory integrated bio-search, which is fundamental for Life Science data-driven knowledge discovery. PMID:24564278

  1. Accurate Solution of Multi-Region Continuum Biomolecule Electrostatic Problems Using the Linearized Poisson-Boltzmann Equation with Curved Boundary Elements

    PubMed Central

    Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce

    2009-01-01

    We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005
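
    The solver strategy described above (never form the dense BEM matrix explicitly; apply a preconditioned Krylov method) can be illustrated with SciPy. In the sketch below a small random diagonally dominant matrix stands in for the FFTSVD-compressed operator, and a Jacobi preconditioner stands in for the paper's preconditioning; both are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Toy stand-in for a (compressed) BEM system.
rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n)) / n + np.diag(2.0 + rng.random(n))
b = rng.normal(size=n)

# Simple Jacobi (diagonal) preconditioner as a LinearOperator.
d = np.diag(A)
M = LinearOperator((n, n), matvec=lambda x: x / d)

x, info = gmres(A, b, M=M, atol=1e-8)
print("converged" if info == 0 else f"info={info}",
      "| residual =", np.linalg.norm(b - A @ x))
```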

  2. Effects of molecular packing in organic crystals on singlet fission with ab initio many body perturbation theory

    NASA Astrophysics Data System (ADS)

    Haber, Jonah; Refaely-Abramson, Sivan; da Jornada, Felipe H.; Louie, Steven G.; Neaton, Jeffrey B.

    Multi-exciton generation processes, in which multiple charge carriers are generated from a single photon, are mechanisms of significant interest for achieving efficiencies beyond the Shockley-Queisser limit of conventional p-n junction solar cells. One well-studied multi-exciton process is singlet fission, whereby a singlet decays into two spin-correlated triplet excitons. Here, we use a newly developed computational approach to calculate singlet-fission coupling terms and rates with an ab initio Green's function formalism based on many-body perturbation theory (MBPT) within the GW approximation and the Bethe-Salpeter equation approach. We compare results for crystalline pentacene and TIPS-pentacene and explore the effect of molecular packing on the singlet fission mechanism. This work is supported by the Department of Energy.

  3. From Interfaces to Bulk: Experimental-Computational Studies Across Time and Length Scales of Multi-Functional Ionic Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perahia, Dvora; Grest, Gary S.

    Neutron experiments coupled with computational components have resulted in unprecedented understanding of the factors that impact the behavior of ionic structured polymers. Additionally, new computational tools to study macromolecules were developed. In parallel, this DOE funding has enabled the education of the next generation of material researchers, who are able to exploit the advantages neutron tools offer for the understanding and design of advanced materials. Our research has provided unprecedented insight into one of the major factors that limits the use of ionizable polymers, combining the macroscopic view obtained from the experimental techniques with molecular insight extracted from computational studies, leading to transformative knowledge that will impact the design of nano-structured materials. With the focus on model systems, of broad interest to the scientific community and to industry, the research addressed challenges that cut across a large number of polymers, independent of the specific chemical structure or the transported species.

  4. Understanding the polypharmacological anticancer effects of Xiao Chai Hu Tang via a computational pharmacological model

    PubMed Central

    ZHENG, CHUN-SONG; WU, YIN-SHENG; BAO, HONG-JUAN; XU, XIAO-JIE; CHEN, XING-QIANG; YE, HONG-ZHI; WU, GUANG-WEN; XU, HUI-FENG; LI, XI-HAI; CHEN, JIA-SHOU; LIU, XIAN-XIANG

    2014-01-01

    Xiao Chai Hu Tang (XCHT), a traditional herbal formula, is widely administered as a cancer treatment. However, the underlying molecular mechanisms of its anticancer effects are not fully understood. In the present study, a computational pharmacological model that combined chemical space mapping, molecular docking and network analysis was employed to predict which chemical compounds in XCHT are potential inhibitors of cancer-associated targets, and to establish a compound-target (C-T) network and compound-compound (C-C) association network. The identified compounds from XCHT demonstrated diversity in chemical space. Furthermore, they occupied regions of chemical space that were the same, or close to, those occupied by drug or drug-like compounds that are associated with cancer, according to the Therapeutic Targets Database. The analysis of the molecular docking and the C-T network demonstrated that the potential inhibitors possessed the properties of promiscuous drugs and combination therapies. The C-C network was classified into four clusters and the different clusters contained various multi-compound combinations that acted on different targets. The study indicated that XCHT has a polypharmacological role in treating cancer and the potential inhibitory components of XCHT require further investigation as potential therapeutic strategies for cancer patients. PMID:24926384

  5. Consistency of clinical biomechanical measures between three different institutions: implications for multi-center biomechanical and epidemiological research.

    PubMed

    Myer, Gregory D; Wordeman, Samuel C; Sugimoto, Dai; Bates, Nathaniel A; Roewer, Benjamin D; Medina McKeon, Jennifer M; DiCesare, Christopher A; Di Stasi, Stephanie L; Barber Foss, Kim D; Thomas, Staci M; Hewett, Timothy E

    2014-05-01

    Multi-center collaborations provide a powerful alternative to overcome the inherent limitations of single-center investigations. Specifically, multi-center projects can support large-scale prospective, longitudinal studies that investigate relatively uncommon outcomes, such as anterior cruciate ligament injury. This project was conceived to assess within- and between-center reliability of an affordable, clinical nomogram utilizing two-dimensional video methods to screen for risk of knee injury. The authors hypothesized that the two-dimensional screening methods would provide good-to-excellent reliability within and between institutions for assessment of frontal and sagittal plane biomechanics. Nineteen female high school athletes participated. Two-dimensional video kinematics of the lower extremity during a drop vertical jump task were collected on all 19 study participants at each of the three facilities. Within-center and between-center reliability were assessed with intra- and inter-class correlation coefficients. Within-center reliability of the clinical nomogram variables was consistently excellent, but between-center reliability was fair-to-good. The within-center intra-class correlation coefficient for all nomogram variables combined was 0.98, while the combined between-center inter-class correlation coefficient was 0.63. Injury risk screening protocols were reliable within and repeatable between centers. These results demonstrate the feasibility of multi-site biomechanical studies and establish a framework for further dissemination of injury risk screening algorithms. Specifically, multi-center studies may allow for further validation and optimization of two-dimensional video screening tools. Level of evidence: 2b.

  6. Molecular structure of the dioctadecyldimethylammonium bromide (DODAB) bilayer.

    PubMed

    Jamróz, Dorota; Kepczynski, Mariusz; Nowakowska, Maria

    2010-10-05

    Dioctadecyldimethylammonium bromide (DODAB) is a double-chained quaternary ammonium surfactant that assembles in water into bilayer structures. This letter reports the molecular dynamics (MD) computer simulations of the DODAB bilayer at 25 °C. The simulations show that the surfactant membrane arranges spontaneously into the rippled phase (Pβ′) at that temperature. The ordering within the chain fragment closest to the hydrophilic head (carbon atoms 1-5) is relatively low. It grows significantly for the carbon atoms located in the center of the membrane (atoms 6-17). The C6-C17 chain fragments are well aligned and tilted by ca. 15° with respect to the bilayer normal.

  7. QuBiLS-MAS, open source multi-platform software for atom- and bond-based topological (2D) and chiral (2.5D) algebraic molecular descriptors computations.

    PubMed

    Valdés-Martiní, José R; Marrero-Ponce, Yovani; García-Jacas, César R; Martinez-Mayorga, Karina; Barigye, Stephen J; Vaz d'Almeida, Yasser Silveira; Pham-The, Hai; Pérez-Giménez, Facundo; Morell, Carlos A

    2017-06-07

    In previous reports, Marrero-Ponce et al. proposed algebraic formalisms for characterizing topological (2D) and chiral (2.5D) molecular features through atom- and bond-based ToMoCoMD-CARDD (acronym for Topological Molecular Computational Design-Computer Aided Rational Drug Design) molecular descriptors. These MDs codify molecular information based on bilinear, quadratic and linear algebraic forms and on the graph-theoretical electronic-density and edge-adjacency matrices, in order to consider atom- and bond-based relations, respectively. These MDs have been successfully applied in the screening of chemical compounds for therapeutic applications ranging from antimalarials and antibacterials to tyrosinase inhibitors. To compute these MDs, a computational program of the same name was initially developed. However, this in-house software barely offered the functionalities required in contemporary molecular modeling tasks, in addition to inherent limitations that made its usability impractical. Therefore, the present manuscript introduces the QuBiLS-MAS (acronym for Quadratic, Bilinear and N-Linear mapS based on graph-theoretic electronic-density Matrices and Atomic weightingS) software, designed to compute topological (0-2.5D) molecular descriptors based on bilinear, quadratic and linear algebraic forms for atom- and bond-based relations. The QuBiLS-MAS module was designed as standalone software in which extensions and generalizations of the former ToMoCoMD-CARDD 2D-algebraic indices are implemented, considering the following aspects: (a) two new matrix normalization approaches based on double-stochastic and mutual-probability formalisms; (b) topological constraints (cut-offs) to take into account particular inter-atomic relations; (c) six additional atomic properties to be used as weighting schemes in the calculation of the molecular vectors; (d) four new local fragments to consider molecular regions of interest; (e) the number of lone-pair electrons in the chemical structure, defined by diagonal coefficients in the matrix representations; and (f) several aggregation operators (invariants) applied over atom/bond-level descriptors in order to compute global indices. The software permits the parallel computation of the indices and contains a batch-processing module and data curation functionalities. The program was developed in Java v1.7 using the Chemistry Development Kit library (version 1.4.19). The QuBiLS-MAS software consists of two components: a desktop interface (GUI) and an API library allowing easy integration of the latter into chemoinformatics applications. The relevance of the novel extensions and generalizations implemented in this software is demonstrated through three studies. First, a comparative Shannon-entropy-based variability study of the proposed QuBiLS-MAS and the DRAGON indices demonstrates superior performance for the former. Second, a principal component analysis reveals that the QuBiLS-MAS approach captures chemical information orthogonal to that codified by the DRAGON descriptors. Lastly, a QSAR study of binding affinity to the corticosteroid-binding globulin using Cramer's steroid dataset is carried out. From these analyses, it is revealed that the QuBiLS-MAS approach for atom-pair relations yields similar-to-superior performance with regard to other QSAR methodologies reported in the literature. Therefore, the QuBiLS-MAS approach constitutes a useful tool for the diversity analysis of chemical compound datasets and high-throughput screening of structure-activity data.
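
    At their algebraic core, descriptors of this family evaluate bilinear or quadratic forms over graph-theoretic molecular matrices. The toy sketch below shows only that bare idea, with a hypothetical 4-atom adjacency matrix and atomic weights; it does not reproduce the QuBiLS-MAS matrices, normalizations or invariants.

```python
import numpy as np

def quadratic_form_index(M, w):
    """Toy global descriptor: the quadratic form w^T M w over a
    molecular matrix M (e.g. an adjacency matrix), with w a vector
    of atomic weights used as the weighting scheme."""
    return w @ M @ w

# Hypothetical 4-atom molecular graph (adjacency matrix).
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
w = np.array([2.55, 2.55, 3.44, 2.20])   # e.g. electronegativities
print(quadratic_form_index(M, w))
```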

  8. Development of the CELSS emulator at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Cullingford, Hatice S.

    1990-01-01

    The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator, initiated in 1988 on the Johnson Space Center (JSC) Multi-Purpose Applications Console Test Bed as the simulation framework. The run model of the simulation system now contains a CELSS model called BLSS. The CELSS simulator enables us to generate model data sets, store libraries of results for further analysis, and display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.

  9. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, methods for distributed computing receive much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can experience security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system allows the processing power of the computers on any existing network to be organized, in a short time, into a distributed computation for solving large tasks. The agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the machines on the network. The number of computers connected can be increased by joining new computers to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.
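
    One of the agents' tasks mentioned above, distributing the computational load according to the computing power of the nodes, can be sketched in a few lines. This is an illustrative simplification, not the authors' algorithm.

```python
def distribute_load(num_tasks, node_power):
    """Split num_tasks among nodes proportionally to their measured
    computing power (floats); remainders go to the fastest nodes."""
    total = sum(node_power.values())
    shares = {n: int(num_tasks * p / total) for n, p in node_power.items()}
    leftover = num_tasks - sum(shares.values())
    for n in sorted(node_power, key=node_power.get, reverse=True)[:leftover]:
        shares[n] += 1
    return shares

# Hypothetical network of three PCs with relative benchmark scores.
print(distribute_load(100, {"pc1": 3.2, "pc2": 2.4, "pc3": 1.1}))
```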

  10. Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David

    2017-11-01

    The reconstruction and tracking of swimming fish has in the past been restricted either to flumes, to small volumes, or to sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, thus allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. This work is a collaboration between the National Aquarium and the Naval Undersea Warfare Center.

  11. Development of an Agile Knowledge Engineering Framework in Support of Multi-Disciplinary Translational Research

    PubMed Central

    Borlawsky, Tara B.; Dhaval, Rakesh; Hastings, Shannon L.; Payne, Philip R. O.

    2009-01-01

    In October 2006, the National Institutes of Health launched a new national consortium, funded through Clinical and Translational Science Awards (CTSA), with the primary objective of improving the conduct and efficiency of the inherently multi-disciplinary field of translational research. To help meet this goal, the Ohio State University Center for Clinical and Translational Science has launched a knowledge management initiative that is focused on facilitating widespread semantic interoperability among administrative, basic science, clinical and research computing systems, both internally and among the translational research community at-large, through the integration of domain-specific standard terminologies and ontologies with local annotations. This manuscript describes an agile framework that builds upon prevailing knowledge engineering and semantic interoperability methods, and will be implemented as part of this initiative. PMID:21347164

  12. Development of an agile knowledge engineering framework in support of multi-disciplinary translational research.

    PubMed

    Borlawsky, Tara B; Dhaval, Rakesh; Hastings, Shannon L; Payne, Philip R O

    2009-03-01

    In October 2006, the National Institutes of Health launched a new national consortium, funded through Clinical and Translational Science Awards (CTSA), with the primary objective of improving the conduct and efficiency of the inherently multi-disciplinary field of translational research. To help meet this goal, the Ohio State University Center for Clinical and Translational Science has launched a knowledge management initiative that is focused on facilitating widespread semantic interoperability among administrative, basic science, clinical and research computing systems, both internally and among the translational research community at-large, through the integration of domain-specific standard terminologies and ontologies with local annotations. This manuscript describes an agile framework that builds upon prevailing knowledge engineering and semantic interoperability methods, and will be implemented as part of this initiative.

  13. MBE growth of vertical-cavity surface-emitting laser structure without real-time monitoring

    NASA Astrophysics Data System (ADS)

    Wu, C. Z.; Tsou, Y.; Tsai, C. M.

    1999-05-01

    Evaluation of producing a vertical-cavity surface-emitting laser (VCSEL) epitaxial structure by molecular beam epitaxy (MBE) without resorting to any real-time monitoring technique is reported. Continuous grading of AlxGa1-xAs between x=0.12 and x=0.92 was achieved simply by changing the Al and Ga cell temperatures in no more than three steps per DBR period. Highly uniform DBR and VCSEL structures were demonstrated with a multi-wafer MBE system. The run-to-run standard deviation was 0.5% for the reflectance-spectrum center wavelength and 1.4% for the VCSEL etalon wavelength.

  14. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment; therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to implement communication between the processors, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  15. A novel Gaussian-Sinc mixed basis set for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.

    2015-08-14

    A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the “localized” and “delocalized” regions. Basis sets for the two regions are combined to make a new methodology: a lattice of orthonormal sinc functions represents the “delocalized” regions, and atom-centered Gaussian functions represent the “localized” regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensionally separated methodology. Additionally, the Sinc basis is translationally invariant, which allows the Coulomb singularity to be placed anywhere, including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground-state Hartree–Fock energies for atoms up to neon, the diatomic systems H2, O2, and N2, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.

  16. Persistent Topology and Metastable State in Conformational Dynamics

    PubMed Central

    Chang, Huang-Wei; Bacallado, Sergio; Pande, Vijay S.; Carlsson, Gunnar E.

    2013-01-01

    The large amount of molecular dynamics simulation data produced by modern computational models brings both big opportunities and big challenges to researchers. Clustering algorithms play an important role in understanding biomolecular kinetics from the simulation data, especially under the Markov state model framework. However, the ruggedness of the free energy landscape in a biomolecular system makes common clustering algorithms very sensitive to perturbations of the data. Here, we introduce a data-exploratory tool which provides an overview of the clustering structure under different parameters. The proposed Multi-Persistent Clustering analysis combines insights from recent studies on the dynamics of systems with dominant metastable states with the concept of multi-dimensional persistence in computational topology. We propose to explore the clustering structure of the data based on its persistence on scale and density. The analysis provides a systematic way to discover clusters that are robust to perturbations of the data. The dominant states of the system can be chosen with confidence. For the clusters on the borderline, the user can choose to do more simulation or make a decision based on their structural characteristics. Furthermore, our multi-resolution analysis gives users information about the relative potential of the clusters and their hierarchical relationship. The effectiveness of the proposed method is illustrated on three biomolecules: alanine dipeptide, Villin headpiece, and the FiP35 WW domain. PMID:23565139

  17. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest-resolution Digital Elevation Models (DEMs) used by ATFM are 1/3 arc-second. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic, even in a pre-computed approach. One way to increase model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results, with the long-term aim of tsunami forecasts from source to high-resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs, and will make possible the future inclusion of new physics, such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  18. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method, and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  19. A new aiming guide can create the tibial tunnel at favorable position in transtibial pullout repair for the medial meniscus posterior root tear.

    PubMed

    Furumatsu, T; Kodama, Y; Fujii, M; Tanaka, T; Hino, T; Kamatsuki, Y; Yamada, K; Miyazawa, S; Ozaki, T

    2017-05-01

    Injuries to the medial meniscus (MM) posterior root lead to accelerated cartilage degeneration of the knee. Anatomic placement of the MM posterior root attachment is considered critical in transtibial pullout repair of the medial meniscus posterior root tear (MMPRT). However, tibial tunnel creation at the anatomic attachment of the MM posterior root is technically difficult using a conventional aiming device. The aim of this study was to compare two aiming guides. We hypothesized that a newly developed, specifically designed guide would create the tibial tunnel at a more adequate position than a conventional device. Twenty-six patients underwent transtibial pullout repairs. Tibial tunnel creation was performed using the Multi-use guide (8 cases) or the PRT guide, which has a narrow twisting/curving shape (18 cases). Three-dimensional computed tomography images of the tibial surface were evaluated postoperatively using Tsukada's measurement method. The expected anatomic center of the MM posterior root attachment and the tibial tunnel center were evaluated using the percentage-based posterolateral location on the tibial surface, and the percentage distance between the anatomic center and the tunnel center was calculated. The anatomic center of the MM posterior root footprint was located at a position 78.5% posterior and 39.4% lateral. Both tunnels were anteromedial, but the tibial tunnel center was located at a more favorable position in the PRT group: the percentage distance was significantly smaller in the PRT guide group (8.7%) than in the Multi-use guide group (13.1%). The PRT guide may have a great advantage in achieving a more anatomic location of the tibial tunnel in MMPRT pullout repair. Level of evidence: III. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. Collision for Li++He System. I. Potential Curves and Non-Adiabatic Coupling Matrix Elements

    NASA Astrophysics Data System (ADS)

    Yoshida, Junichi; O-Ohata, Kiyosi

    1984-02-01

    The potential curves and the non-adiabatic coupling matrix elements for the Li+ + He collision system were computed. The SCF molecular orbitals were constructed with CGTO atomic bases centered on each nucleus and on the center of mass of the two nuclei. The SCF and CI calculations were done at various internuclear distances in the range 0.1-25.0 a.u. The potential energies and the wavefunctions were calculated to good approximation over the whole range of internuclear distances. The non-adiabatic coupling matrix elements were calculated with a tentative method in which the electron translation factors (ETF) are approximately taken into account.

  1. Experimental and Computational Analysis of Unidirectional Flow Through Stirling Engine Heater Head

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Dyson, Rodger W.; Tew, Roy C.; Demko, Rikako

    2006-01-01

    A high-efficiency Stirling Radioisotope Generator (SRG) is being developed for possible use in long-duration space science missions. NASA's advanced technology goals for next-generation Stirling convertors include increasing the Carnot efficiency and the percentage of Carnot efficiency achieved. To help achieve these goals, a multi-dimensional Computational Fluid Dynamics (CFD) code is being developed to numerically model unsteady fluid flow and heat transfer phenomena of the oscillating working gas inside Stirling convertors. In the absence of transient pressure-drop data for the zero-mean oscillating multi-dimensional flows present in the Technology Demonstration Convertors on test at NASA Glenn Research Center, unidirectional flow pressure-drop test data are used for comparison against 2D and 3D computational solutions. This study focuses on tracking pressure drop and mass flow rate data for unidirectional flow through a Stirling heater head using a commercial CFD code (CFD-ACE). The commercial CFD code uses a porous-media model which depends on the permeability and the inertial coefficient present in the linear and nonlinear terms of the Darcy-Forchheimer equation. Permeability and inertial coefficient were calculated from unidirectional flow test data. CFD simulations of the unidirectional flow test were validated using the porous-media model input parameters, which increased simulation accuracy by 14 percent on average.
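
    For reference, the porous-media closure named above models the pressure gradient as dP/dx = (mu/K) v + rho C v^2, with permeability K in the linear (Darcy) term and inertial coefficient C in the nonlinear (Forchheimer) term. The sketch below evaluates this relation with placeholder coefficients; the values are not the paper's fitted parameters.

```python
mu = 1.8e-5     # dynamic viscosity of the working gas, Pa*s (assumed)
rho = 1.2       # gas density, kg/m^3 (assumed)
K = 1.0e-9      # permeability, m^2 (assumed, fit from test data)
C = 1.0e4       # inertial (Forchheimer) coefficient, 1/m (assumed)

def pressure_gradient(v):
    """Darcy-Forchheimer pressure gradient, Pa/m, for superficial
    velocity v in m/s: linear viscous term plus quadratic inertial term."""
    return (mu / K) * v + rho * C * v**2

for v in (0.5, 1.0, 2.0):
    print(f"v = {v} m/s -> dP/dx = {pressure_gradient(v):.3e} Pa/m")
```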

  2. Computer Modeling of the Earliest Cellular Structures and Functions

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and at developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, corresponding to 10^6-10^8 time steps.
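
    The iterative solution of Newton's equations mentioned above is usually done with a symplectic scheme such as velocity Verlet. A minimal sketch (illustrative only, not the production code behind these simulations):

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
    """Integrate Newton's equations step by step: half-kick the
    velocities, drift the positions, recompute forces, half-kick again."""
    f = force_fn(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = force_fn(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Example: one particle in a harmonic well, 10^4 steps.
harmonic = lambda x: -x
p, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       harmonic, mass=1.0, dt=1e-3, n_steps=10_000)
print(p, v)
```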

  3. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    PubMed

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks induced by the rapid growth of the Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game, and the condition for a Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm scales well with an increasing number of IoT sensors.
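
    The finite improvement property of a potential game guarantees that asynchronous best-response updates reach a pure-strategy Nash equilibrium in finitely many steps. A toy sketch of such decentralized dynamics, with a hypothetical congestion-style cost model that is much simpler than the paper's delay/energy formulation:

```python
import random

# Toy best-response dynamics in the spirit of the COD algorithm: each
# sensor picks local execution (0) or cloudlet offloading (1). Costs are
# assumed for illustration only.

N = 12
LOCAL_COST = 4.9                       # fixed local-execution cost (assumed)

def offload_cost(n_offloaders):
    """Cloudlet cost grows as more sensors share the AP (assumed model)."""
    return 1.0 + 0.8 * n_offloaders

def best_response_dynamics(seed=0):
    rng = random.Random(seed)
    choice = [rng.randint(0, 1) for _ in range(N)]   # 1 = offload
    improved = True
    while improved:                    # terminates: finite improvement property
        improved = False
        for i in range(N):
            others = sum(choice) - choice[i]
            cost = {0: LOCAL_COST, 1: offload_cost(others + 1)}
            best = min(cost, key=cost.get)
            if cost[best] < cost[choice[i]] - 1e-12:   # strict improvement only
                choice[i] = best
                improved = True
    return choice                      # a pure-strategy Nash equilibrium

print(best_response_dynamics())
```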

  4. Survey on Security Issues in Cloud Computing and Associated Mitigation Techniques

    NASA Astrophysics Data System (ADS)

    Bhadauria, Rohit; Sanyal, Sugata

    2012-06-01

    Cloud computing holds the potential to eliminate the requirement to set up high-cost computing infrastructure for the IT-based solutions and services that industry uses. It promises a flexible IT architecture, accessible through the Internet from lightweight portable devices. This would allow a multi-fold increase in the capacity and capabilities of existing and new software. In a cloud computing environment, all data reside on a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centers may lie in any corner of the world, beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and addressed. Nor can one discount the possibility of server breakdowns, which have been witnessed rather often in recent times. There are various issues that need to be dealt with regarding security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening cloud computing adoption and diffusion and affecting its various stakeholders.

  5. Theoretical description of quantum mechanical permeation of graphene membranes by charged hydrogen isotopes

    NASA Astrophysics Data System (ADS)

    Mazzuca, James W.; Haut, Nathaniel K.

    2018-06-01

    It has been recently shown that in the presence of an applied voltage, hydrogen and deuterium nuclei can be separated from one another using graphene membranes as a nuclear sieve, resulting in a 10-fold enhancement in the concentration of the lighter isotope. While previous studies, both experimental and theoretical, have attributed this effect mostly to differences in vibrational zero point energy (ZPE) of the various isotopes near the membrane surface, we propose that multi-dimensional quantum mechanical tunneling of nuclei through the graphene membrane influences this proton permeation process in a fundamental way. We perform ring polymer molecular dynamics calculations in which we include both ZPE and tunneling effects of various hydrogen isotopes as they permeate the graphene membrane and compute rate constants across a range of temperatures near 300 K. While capturing the experimentally observed separation factor, our calculations indicate that the transverse motion of the various isotopes across the surface of the graphene membrane is an essential part of this sieving mechanism. An understanding of the multi-dimensional quantum mechanical nature of this process could serve to guide the design of other such isotopic enrichment processes for a variety of atomic and molecular species of interest.

  6. Theoretical description of quantum mechanical permeation of graphene membranes by charged hydrogen isotopes.

    PubMed

    Mazzuca, James W; Haut, Nathaniel K

    2018-06-14

    It has been recently shown that in the presence of an applied voltage, hydrogen and deuterium nuclei can be separated from one another using graphene membranes as a nuclear sieve, resulting in a 10-fold enhancement in the concentration of the lighter isotope. While previous studies, both experimental and theoretical, have attributed this effect mostly to differences in vibrational zero point energy (ZPE) of the various isotopes near the membrane surface, we propose that multi-dimensional quantum mechanical tunneling of nuclei through the graphene membrane influences this proton permeation process in a fundamental way. We perform ring polymer molecular dynamics calculations in which we include both ZPE and tunneling effects of various hydrogen isotopes as they permeate the graphene membrane and compute rate constants across a range of temperatures near 300 K. While capturing the experimentally observed separation factor, our calculations indicate that the transverse motion of the various isotopes across the surface of the graphene membrane is an essential part of this sieving mechanism. An understanding of the multi-dimensional quantum mechanical nature of this process could serve to guide the design of other such isotopic enrichment processes for a variety of atomic and molecular species of interest.
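
    As a much cruder illustration of why tunneling near 300 K discriminates between hydrogen isotopes, the lowest-order Wigner correction to a classical rate, kappa = 1 + (1/24)(hbar*omega/(kB*T))^2, grows with the barrier frequency omega, which scales roughly as 1/sqrt(mass). This is emphatically not the ring polymer molecular dynamics treatment used in the paper, and the barrier frequency below is an assumed value:

```python
import math

# Wigner tunneling correction for H vs D -- a crude, one-dimensional
# illustration of mass-dependent tunneling, NOT the multi-dimensional
# RPMD approach of the paper. The barrier frequency is hypothetical.

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def wigner_kappa(omega_barrier, T):
    """kappa = 1 + (1/24) * (hbar*omega / (kB*T))**2"""
    u = HBAR * omega_barrier / (KB * T)
    return 1.0 + u * u / 24.0

omega_H = 2 * math.pi * 3.0e13        # ~1000 cm^-1 barrier mode (assumed)
omega_D = omega_H / math.sqrt(2.0)    # frequency scales as 1/sqrt(mass)

for T in (280.0, 300.0, 320.0):
    kH, kD = wigner_kappa(omega_H, T), wigner_kappa(omega_D, T)
    print(f"T = {T:5.1f} K   kappa_H/kappa_D = {kH / kD:5.3f}")
```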

  7. Image-based multi-scale simulation and experimental validation of thermal conductivity of lanthanum zirconate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Xingye; Hu, Bin; Wei, Changdong

    Lanthanum zirconate (La2Zr2O7) is a promising candidate material for thermal barrier coating (TBC) applications due to its low thermal conductivity and high-temperature phase stability. In this work, a novel image-based multi-scale simulation framework combining molecular dynamics (MD) and finite element (FE) calculations is proposed to study the thermal conductivity of La2Zr2O7 coatings. Since there is no experimental data on single-crystal La2Zr2O7 thermal conductivity, a reverse non-equilibrium molecular dynamics (reverse NEMD) approach is first employed to compute the temperature-dependent thermal conductivity of single-crystal La2Zr2O7. The single-crystal data is then passed to a FE model which takes into account realistic thermal barrier coating microstructures. The predicted thermal conductivities from the FE model are in good agreement with experimental validations using both the flash laser technique and pulsed thermal imaging-multilayer analysis. The framework proposed in this work provides a powerful tool for future design of advanced coating systems. (C) 2016 Elsevier Ltd. All rights reserved.
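
    In a reverse NEMD calculation the heat flux J is imposed and the resulting steady-state temperature gradient is measured, so the conductivity follows from Fourier's law, k = -J/(dT/dx). A minimal sketch of the post-processing step, with synthetic stand-in data rather than La2Zr2O7 simulation output:

```python
import numpy as np

# Post-processing sketch for a reverse NEMD run: fit the steady temperature
# profile and apply Fourier's law. The profile below is synthetic.

J = 2.5e9                                   # imposed heat flux [W/m^2] (assumed)
x = np.linspace(0.0, 10e-9, 50)             # slab coordinate [m]
T = 320.0 - 1.5e9 * x + np.random.default_rng(0).normal(0, 0.2, x.size)

dTdx = np.polyfit(x, T, 1)[0]               # linear fit of the profile
k = -J / dTdx                               # k = -J / (dT/dx)
print(f"thermal conductivity k = {k:.2f} W/(m*K)")
```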

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  9. Multiscale methods for computational RNA enzymology

    PubMed Central

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  10. Plastically bendable crystals of probenecid and its cocrystal with 4,4′-Bipyridine

    NASA Astrophysics Data System (ADS)

    Nath, Naba K.; Hazarika, Mousumi; Gupta, Poonam; Ray, Nisha R.; Paul, Amit K.; Nauha, Elisa

    2018-05-01

    Recent findings of plastically bendable molecular crystals led to the realization that design-based strategies are required for these materials to be useful in real-life applications. We coincidentally discovered plastically bendable crystals of the drug molecule probenecid. Based on the structural features of its crystals at room temperature, we hypothesized that introducing a molecular spacer between two hydrogen-bonded molecules of probenecid, by replacing the carboxylic acid homodimer with a similar dimeric hydrogen-bonding synthon, would not disturb the layered molecular packing of probenecid. As a consequence, the new multi-component crystal would retain flexibility similar to that of the original probenecid crystals. Herein we have attempted to prove this hypothesis, and we were successful in the case of the probenecid:4,4′-bipyridine cocrystal. As designed, in the crystal structure the 4,4′-bipyridine molecule acts as a spacer connecting two probenecid molecules, resulting in the retention of the slip planes that are necessary for a molecular crystal to be plastically bendable. DFT calculations were carried out to account for the hydrogen-bonding synthons between probenecid and the coformers under study.

  11. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCFs presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  12. Molecular structure and vibrational spectra of Bis(melaminium) terephthalate dihydrate: A DFT computational study

    NASA Astrophysics Data System (ADS)

    Tanak, Hasan; Marchewka, Mariusz K.; Drozd, Marek

    2013-03-01

    The experimental and theoretical vibrational spectra of Bis(melaminium) terephthalate dihydrate were studied. The Fourier transform infrared (FT-IR) spectra of the Bis(melaminium) terephthalate dihydrate and its deuterated analogue were recorded in the solid phase. The molecular geometry and vibrational frequencies of Bis(melaminium) terephthalate dihydrate in the ground state have been calculated using the density functional method (B3LYP) with the 6-31++G(d,p) basis set. The results of the optimized molecular structure are presented and compared with the experimental X-ray diffraction data. The molecule contains weak hydrogen bonds of the N–H⋯O, N–H⋯N and O–H⋯O types, and those bonds are calculated with the DFT method. In addition, the molecular electrostatic potential, frontier molecular orbitals and natural bond orbital analysis of the title compound were investigated by theoretical calculations. The lack of second harmonic generation (SHG) confirms the presence of a macroscopic center of inversion.

  13. Molecular structure and vibrational spectra of Bis(melaminium) terephthalate dihydrate: a DFT computational study.

    PubMed

    Tanak, Hasan; Marchewka, Mariusz K; Drozd, Marek

    2013-03-15

    The experimental and theoretical vibrational spectra of Bis(melaminium) terephthalate dihydrate were studied. The Fourier transform infrared (FT-IR) spectra of the Bis(melaminium) terephthalate dihydrate and its deuterated analogue were recorded in the solid phase. The molecular geometry and vibrational frequencies of Bis(melaminium) terephthalate dihydrate in the ground state have been calculated using the density functional method (B3LYP) with the 6-31++G(d,p) basis set. The results of the optimized molecular structure are presented and compared with the experimental X-ray diffraction data. The molecule contains weak hydrogen bonds of the N-H···O, N-H···N and O-H···O types, and those bonds are calculated with the DFT method. In addition, the molecular electrostatic potential, frontier molecular orbitals and natural bond orbital analysis of the title compound were investigated by theoretical calculations. The lack of second harmonic generation (SHG) confirms the presence of a macroscopic center of inversion. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Network representations of immune system complexity

    PubMed Central

    Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A.; Germain, Ronald N.; Dutta, Bhaskar

    2015-01-01

    The mammalian immune system is a dynamic multi-scale system composed of a hierarchically organized set of molecular, cellular and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single cell responses to increasingly complex networks of in vivo cellular interaction, positioning and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather non-linear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multi-scale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating ‘omics’ and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853

  15. Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.

    PubMed

    Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A

    2017-06-15

    Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes, requiring a large number of evolutionary parameters to be estimated and leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multipartite analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these multipartite parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves more than 14-fold. Our implementation is part of the BEAST code base, a widely used open source software package to perform Bayesian phylogenetic inference. guy.baele@kuleuven.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
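
    A standard way to realize an adaptive multivariate transition kernel is the adaptive Metropolis scheme, in which the random-walk proposal covariance is learned from the chain history so that correlated parameters are proposed jointly. A minimal sketch on a toy target; this illustrates the general idea, not BEAST's actual implementation:

```python
import numpy as np

# Adaptive multivariate random-walk Metropolis: the proposal covariance is
# periodically re-estimated from the samples drawn so far (toy target).

rng = np.random.default_rng(1)

def log_post(theta):                     # correlated 2D Gaussian target
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

d, n_iter = 2, 20000
theta = np.zeros(d)
samples = [theta]
prop_cov = np.eye(d) * 0.1
for i in range(n_iter):
    if i > 500 and i % 100 == 0:         # adapt the kernel from chain history
        prop_cov = (np.cov(np.array(samples).T) * 2.38**2 / d
                    + 1e-6 * np.eye(d))
    prop = rng.multivariate_normal(theta, prop_cov)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                     # Metropolis accept
    samples.append(theta)

print("empirical covariance:\n", np.cov(np.array(samples[1000:]).T))
```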

  16. Systems Biology Methods for Alzheimer's Disease Research Toward Molecular Signatures, Subtypes, and Stages and Precision Medicine: Application in Cohort Studies and Trials.

    PubMed

    Castrillo, Juan I; Lista, Simone; Hampel, Harald; Ritchie, Craig W

    2018-01-01

    Alzheimer's disease (AD) is a complex multifactorial disease, involving a combination of genomic, interactome, and environmental factors, with essential participation of (a) intrinsic genomic susceptibility and (b) a constant dynamic interplay between impaired pathways and central homeostatic networks of nerve cells. The proper investigation of the complexity of AD requires new holistic systems-level approaches, at both the experimental and computational level. Systems biology methods offer the potential to unveil new fundamental insights, basic mechanisms, and networks and their interplay. These may lead to the characterization of mechanism-based molecular signatures, and AD hallmarks at the earliest molecular and cellular levels (and beyond), for characterization of AD subtypes and stages, toward targeted interventions according to the evolving precision medicine paradigm. In this work, an update on advanced systems biology methods and strategies for holistic studies of multifactorial diseases, particularly AD, is presented. This includes next-generation genomics, neuroimaging and multi-omics methods, experimental and computational approaches, relevant disease models, and the latest genome editing and single-cell technologies. Their progressive incorporation into basic research, cohort studies, and trials is beginning to provide novel insights into AD essential mechanisms, molecular signatures, and markers toward mechanism-based classification and staging, and tailored interventions. Selected methods which can be applied in cohort studies and trials, with the European Prevention of Alzheimer's Dementia (EPAD) project as a reference example, are presented and discussed.

  17. Future Directions in Medical Physics: Models, Technology, and Translation to Medicine

    NASA Astrophysics Data System (ADS)

    Siewerdsen, Jeffrey

    The application of physics in medicine has been integral to major advances in diagnostic and therapeutic medicine. Two primary areas represent the mainstay of medical physics research in the last century: in radiation therapy, physicists have propelled advances in conformal radiation treatment and high-precision image guidance; and in diagnostic imaging, physicists have advanced an arsenal of multi-modality imaging that includes CT, MRI, ultrasound, and PET as indispensable tools for noninvasive screening, diagnosis, and assessment of treatment response. In addition to their role in building such technologically rich fields of medicine, physicists have also become integral to daily clinical practice in these areas. The future suggests new opportunities for multi-disciplinary research bridging physics, biology, engineering, and computer science, and collaboration in medical physics carries a strong capacity for identification of significant clinical needs, access to clinical data, and translation of technologies to clinical studies. In radiation therapy, for example, the extraction of knowledge from large datasets on treatment delivery, image-based phenotypes, genomic profile, and treatment outcome will require innovation in computational modeling and connection with medical physics for the curation of large datasets. Similarly in imaging physics, the demand for new imaging technology capable of measuring physical and biological processes over orders of magnitude in scale (from molecules to whole organ systems) and exploiting new contrast mechanisms for greater sensitivity to molecular agents and subtle functional/morphological change will benefit from multi-disciplinary collaboration in physics, biology, and engineering. Also in surgery and interventional radiology, where needs for increased precision and patient safety meet constraints in cost and workflow, development of new technologies for imaging, image registration, and robotic assistance can leverage collaboration in physics, biomedical engineering, and computer science. In each area, there is major opportunity for multi-disciplinary collaboration with medical physics to accelerate the translation of such technologies to clinical use. Research supported by the National Institutes of Health, Siemens Healthcare, and Carestream Health.

  18. Computational Design of Materials: Planetary Entry to Electric Aircraft and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA's projects and missions push the bounds of what is possible. To support the agency's work, materials development must stay on the cutting edge in order to keep pace. Today, researchers at NASA Ames Research Center perform multiscale modeling to aid the development of new materials and provide insight into existing ones. Multiscale modeling enables researchers to determine micro- and macroscale properties by connecting computational methods ranging from the atomic level (density functional theory, molecular dynamics) to the macroscale (finite element method). The output of one level is passed on as input to the next level, creating a powerful predictive model.

  19. On the error in the nucleus-centered multipolar expansion of molecular electron density and its topology: A direct-space computational study.

    PubMed

    Michael, J Robert; Koritsanszky, Tibor

    2017-05-28

    The convergence of nucleus-centered multipolar expansion of the quantum-chemical electron density (QC-ED), gradient, and Laplacian is investigated in terms of numerical radial functions derived by projecting stockholder atoms onto real spherical harmonics at each center. The partial sums of this exact one-center expansion are compared with the corresponding Hansen-Coppens pseudoatom (HC-PA) formalism [Hansen, N. K. and Coppens, P., "Testing aspherical atom refinements on small-molecule data sets," Acta Crystallogr., Sect. A 34, 909-921 (1978)] commonly utilized in experimental electron density studies. It is found that the latter model, due to its inadequate radial part, lacks pointwise convergence and fails to reproduce the local topology of the target QC-ED even at a high-order expansion. The significance of the quantitative agreement often found between HC-PA-based (quadrupolar-level) experimental and extended-basis QC-EDs can thus be challenged.

  20. On the error in the nucleus-centered multipolar expansion of molecular electron density and its topology: A direct-space computational study

    NASA Astrophysics Data System (ADS)

    Michael, J. Robert; Koritsanszky, Tibor

    2017-05-01

    The convergence of nucleus-centered multipolar expansion of the quantum-chemical electron density (QC-ED), gradient, and Laplacian is investigated in terms of numerical radial functions derived by projecting stockholder atoms onto real spherical harmonics at each center. The partial sums of this exact one-center expansion are compared with the corresponding Hansen-Coppens pseudoatom (HC-PA) formalism [Hansen, N. K. and Coppens, P., "Testing aspherical atom refinements on small-molecule data sets," Acta Crystallogr., Sect. A 34, 909-921 (1978)] commonly utilized in experimental electron density studies. It is found that the latter model, due to its inadequate radial part, lacks pointwise convergence and fails to reproduce the local topology of the target QC-ED even at a high-order expansion. The significance of the quantitative agreement often found between HC-PA-based (quadrupolar-level) experimental and extended-basis QC-EDs can thus be challenged.
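
    The projection step underlying such an expansion can be illustrated numerically: a function on the sphere is projected onto real spherical harmonics by quadrature, and the partial sums are compared with the target. A small sketch with an arbitrary smooth test function (the paper projects stockholder-atom densities, which this does not attempt to reproduce):

```python
import numpy as np
from scipy.special import sph_harm

# Project a function on the unit sphere onto real spherical harmonics and
# examine pointwise convergence of the partial sums (toy test function).

def real_sph_harm(l, m, az, pol):
    """Real spherical harmonics built from scipy's complex Y_l^m."""
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, az, pol).real
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(-m, l, az, pol).imag
    return sph_harm(0, l, az, pol).real

# Quadrature grid: Gauss-Legendre in cos(polar), uniform in azimuth.
n = 32
x, w = np.polynomial.legendre.leggauss(n)
pol = np.arccos(x)[:, None]
az = np.linspace(0.0, 2 * np.pi, 2 * n, endpoint=False)[None, :]
weights = w[:, None] * (2 * np.pi / (2 * n))

f = np.exp(np.sin(pol) * np.cos(az))          # arbitrary smooth test function

for l_max in (2, 4, 8):
    recon = np.zeros_like(f)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            Y = real_sph_harm(l, m, az, pol)
            coeff = np.sum(weights * f * Y)   # <f, Y_lm> over the sphere
            recon += coeff * Y
    print(f"l_max = {l_max}:  max pointwise error = "
          f"{np.max(np.abs(f - recon)):.2e}")
```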

  1. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  2. HSTRESS: A computer program to calculate the height of a hydraulic fracture in a multi-layered stress medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A computer code for calculating hydraulic fracture height and width in a stressed-layer medium has been modified for easy use on a personal computer. HSTRESS allows for up to 51 layers having different thicknesses, stresses and fracture toughnesses. The code can calculate fracture height versus pressure or pressure versus fracture height, depending on the design model in which the data will be used. At any pressure/height, a width profile is calculated and an equivalent width factor and flow resistance factor are determined. This program is written in FORTRAN. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 14 refs., 21 figs.

  3. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
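
    The replica-exchange machinery that TEE-REX builds on swaps configurations between neighboring replicas with the Metropolis criterion p = min(1, exp[(beta_i - beta_j)(E_i - E_j)]). A minimal sketch with placeholder energies and temperatures:

```python
import math
import random

# Replica-exchange swap test: accept a configuration exchange between
# replicas i and j with probability min(1, exp[(b_i - b_j)(E_i - E_j)]).
# Energies and temperatures below are placeholders.

KB = 0.0083144621   # Boltzmann constant, kJ/(mol*K)

def swap_accepted(E_i, E_j, T_i, T_j, rng=random):
    beta_i, beta_j = 1.0 / (KB * T_i), 1.0 / (KB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    if delta >= 0.0:                   # always accept favorable swaps
        return True
    return rng.random() < math.exp(delta)

print(swap_accepted(E_i=-500.0, E_j=-480.0, T_i=300.0, T_j=340.0))
```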

  4. Combining Machine Learning Systems and Multiple Docking Simulation Packages to Improve Docking Prediction Reliability for Network Pharmacology

    PubMed Central

    Hsin, Kun-Yi; Ghosh, Samik; Kitano, Hiroaki

    2013-01-01

    Increased availability of bioinformatics resources is creating opportunities for the application of network pharmacology to predict drug effects and toxicity resulting from multi-target interactions. Here we present a high-precision computational prediction approach that combines two elaborately built machine learning systems and multiple molecular docking tools to assess binding potentials of a test compound against proteins involved in a complex molecular network. One of the two machine learning systems is a re-scoring function to evaluate binding modes generated by docking tools. The second is a binding mode selection function to identify the most predictive binding mode. Results from a series of benchmark validations and a case study show that this approach surpasses the prediction reliability of other techniques and that it also identifies either primary or off-targets of kinase inhibitors. Integrating this approach with molecular network maps makes it possible to address drug safety issues by comprehensively investigating network-dependent effects of a drug or drug candidate. PMID:24391846

  5. MIDG-Emerging grid technologies for multi-site preclinical molecular imaging research communities.

    PubMed

    Lee, Jasper; Documet, Jorge; Liu, Brent; Park, Ryan; Tank, Archana; Huang, H K

    2011-03-01

    Molecular imaging is the visualization and identification of specific molecules in anatomy for insight into metabolic pathways, tissue consistency, and the tracing of solute transport mechanisms. This paper presents the Molecular Imaging Data Grid (MIDG), which utilizes emerging grid technologies in preclinical molecular imaging to facilitate data sharing and discovery between preclinical molecular imaging facilities and their collaborating investigator institutions, expediting translational sciences research. Grid-enabled archiving, management, and distribution of animal-model imaging datasets help preclinical investigators to monitor, access and share their imaging data remotely, and allow preclinical imaging facilities to share published imaging datasets as resources for new investigators. The system architecture of the Molecular Imaging Data Grid is described in a four-layer diagram. A data model for preclinical molecular imaging datasets is also presented, based on imaging modalities currently used in a molecular imaging center. The MIDG system components and connectivity are presented, and finally the workflow steps for grid-based archiving, management, and retrieval of preclinical molecular imaging data are described. Initial performance tests of the Molecular Imaging Data Grid system have been conducted at the USC IPILab using dedicated VMware servers. System connectivity, evaluated datasets, and preliminary results are presented. The results show the system's feasibility and limitations and indicate directions for future research. Translational and interdisciplinary research in medicine is increasingly interested in cellular and molecular biology activity at the preclinical level, utilizing molecular imaging methods on animal models. The task of integrated archiving, management, and distribution of these preclinical molecular imaging datasets at preclinical molecular imaging facilities is challenging due to disparate imaging systems and multiple off-site investigators. A Molecular Imaging Data Grid design, implementation, and initial evaluation are presented to demonstrate a secure, novel data grid solution for sharing preclinical molecular imaging data across the wide area network (WAN).

  6. Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing

    DTIC Science & Technology

    2016-07-15

    Report AFRL-AFOSR-JP-TR-2016-0068. Only a fragment of the abstract is recoverable from the scanned report form: the project applies multi-scale computational electromagnetics to microwave remote sensing, with extension of modelling capability and computational flexibility to study ...

  7. Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing

    DTIC Science & Technology

    2016-07-15

    Report AFRL-AFOSR-JP-TR-2016-0068. Only a fragment of the abstract is recoverable from the scanned report form: the project applies multi-scale computational electromagnetics to microwave remote sensing, with extension of modelling capability and computational flexibility to study ...

  8. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
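
    The center-selection idea can be sketched directly: previously evaluated points are non-dominated-sorted on two objectives, the expensive function value (minimize) and the distance to the nearest other evaluated point (maximize), and the first front supplies candidate centers. A toy illustration with made-up data, not the SOP code itself:

```python
import numpy as np

# Pareto (first-front) center selection on two objectives:
# minimize f(x) and maximize distance to the nearest evaluated point.

rng = np.random.default_rng(3)
X = rng.uniform(-5, 5, size=(40, 2))          # previously evaluated points
f = np.sum(X**2, axis=1)                      # their expensive values (toy)

d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
min_dist = d.min(axis=1)                      # crowding measure

def first_front(obj1, obj2):
    """Indices not dominated when both objectives are minimized."""
    idx = []
    for i in range(len(obj1)):
        dominated = np.any((obj1 <= obj1[i]) & (obj2 <= obj2[i]) &
                           ((obj1 < obj1[i]) | (obj2 < obj2[i])))
        if not dominated:
            idx.append(i)
    return idx

centers = first_front(f, -min_dist)           # maximize distance = minimize -d
print("Pareto centers:", centers)
```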

  9. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2005-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  10. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2004-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  11. Telemetry Technology

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In 1990, Avtec Systems, Inc. developed its first telemetry boards for Goddard Space Flight Center. Avtec products now include PC/AT, PCI and VME-based high speed I/O boards and turn-key systems. The most recent and most successful technology transfer from NASA to Avtec is the Programmable Telemetry Processor (PTP), a personal computer-based, multi-channel telemetry front-end processing system originally developed to support the NASA communication (NASCOM) network. The PTP performs data acquisition, real-time network transfer, and store and forward operations. There are over 100 PTP systems located in NASA facilities and throughout the world.

  12. Computational identification of potential multi-drug combinations for reduction of microglial inflammation in Alzheimer disease

    PubMed Central

    Anastasio, Thomas J.

    2015-01-01

    Like other neurodegenerative diseases, Alzheimer Disease (AD) has a prominent inflammatory component mediated by brain microglia. Reducing microglial inflammation could potentially halt or at least slow the neurodegenerative process. A major challenge in the development of treatments targeting brain inflammation is the sheer complexity of the molecular mechanisms that determine whether microglia become inflammatory or take on a more neuroprotective phenotype. The process is highly multifactorial, raising the possibility that a multi-target/multi-drug strategy could be more effective than conventional monotherapy. This study takes a computational approach to finding combinations of approved drugs that are potentially more effective than single drugs in reducing microglial inflammation in AD. This novel approach exploits the distinct advantages of two different computer programming languages, one imperative and the other declarative. Existing programs written in both languages implement the same model of microglial behavior, and the input/output relationships of both programs agree with each other and with data on microglia over an extensive test battery. Here the imperative program is used efficiently to screen the model for the most efficacious combinations of 10 drugs, while the declarative program is used to analyze in detail the mechanisms of action of the most efficacious combinations. Of the 1024 possible drug combinations, the simulated screen identifies only 7 that are able to move simulated microglia at least 50% of the way from a neurotoxic to a neuroprotective phenotype. Subsequent analysis shows that of the 7 most efficacious combinations, 2 stand out as superior both in strength and reliability. The model offers many experimentally testable and therapeutically relevant predictions concerning effective drug combinations and their mechanisms of action. PMID:26097457
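
    The search structure is an exhaustive screen over all 2^10 = 1024 on/off drug combinations. A toy sketch of that enumeration with a made-up saturating scoring function standing in for the paper's simulated-microglia model (the resulting hit count depends entirely on the assumed effects):

```python
import math
from itertools import product

# Exhaustive screen over all 2**10 = 1024 drug on/off combinations.
# EFFECTS and the dose-response curve are hypothetical placeholders.

EFFECTS = [0.25, 0.15, 0.05, 0.04, 0.03, 0.03, 0.02, 0.02, 0.01, 0.01]

def phenotype_shift(combo):
    """Assumed fraction of the neurotoxic-to-neuroprotective shift."""
    total = sum(e for e, on in zip(EFFECTS, combo) if on)
    return 1.0 - math.exp(-2.0 * total)      # saturating response (assumed)

hits = [c for c in product((0, 1), repeat=10) if phenotype_shift(c) >= 0.5]
print(f"{len(hits)} of 1024 combinations reach a 50% phenotype shift")
```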

  13. Systems biology impact on antiepileptic drug discovery.

    PubMed

    Margineanu, Doru Georg

    2012-02-01

    Systems biology (SB), a recent trend in bioscience research to consider the complex interactions in biological systems from a holistic perspective, sees a disease as a disturbed network of interactions rather than an alteration of single molecular component(s). SB-based network pharmacology replaces the prevailing focus on specific drug-receptor interactions, and the corollary of rational design of "magic bullet" drugs, with the search for multi-target drugs that would act on biological networks as "magic shotguns". Epilepsy being a multi-factorial, polygenic and dynamic pathology, the SB approach appears particularly fitting and promising for antiepileptic drug (AED) discovery. In fact, long before the advent of SB, AED discovery already involved some SB-like elements. A reported SB project aiming to find new drug targets in epilepsy relies on a relational database that integrates clinical information, recordings from deep electrodes and 3D brain imagery with histology and molecular biology data on the modified expression of specific genes in the brain regions displaying spontaneous epileptic activity. Since hitting a single target does not treat complex diseases, a proper pharmacological promiscuity might impart on an AED the merit of being multi-potent. However, multi-target drug discovery entails the complicated task of optimizing multiple activities of compounds, while having to balance drug-like properties and to control unwanted effects. Specific design tools for this new approach in drug discovery are only beginning to emerge, but computational methods making reliable in silico predictions of poly-pharmacology have appeared, and their progress might be quite rapid. The current move away from reductionism toward network pharmacology allows us to expect that a proper integration of the intrinsic complexity of epileptic pathology into AED discovery might result in literally anti-epileptic drugs. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Multi-target QSPR modeling for simultaneous prediction of multiple gas-phase kinetic rate constants of diverse chemicals

    NASA Astrophysics Data System (ADS)

    Basant, Nikita; Gupta, Shikha

    2018-03-01

    The reactions of molecular ozone (O3) and of hydroxyl (•OH) and nitrate (NO3) radicals are among the major pathways for removal of volatile organic compounds (VOCs) in the atmospheric environment. The gas-phase kinetic rate constants (kO3, kOH, kNO3) are thus important in assessing the ultimate fate and exposure risk of atmospheric VOCs. Experimental rate-constant data are not available for many emerging VOCs, and the computational methods reported so far address single-target modeling only. In this study, we have developed a multi-target (mt) QSPR model for simultaneous prediction of multiple kinetic rate constants (kO3, kOH, kNO3) of diverse organic chemicals, considering an experimental dataset of VOCs for which values of all three rate constants are available. The mt-QSPR model identified and used five descriptors related to the molecular size, degree of saturation and electron density in a molecule, which were mechanistically interpretable. These descriptors successfully predicted all three rate constants simultaneously. The model yielded high correlations (R2 = 0.874-0.924) between the experimental and simultaneously predicted endpoint rate constant (kO3, kOH, kNO3) values in the test arrays for all three systems. The model also passed all the stringent statistical validation tests for external predictivity. The proposed multi-target QSPR model can be used to predict the reactivity of new VOCs simultaneously for their exposure risk assessment.
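
    The structure of a multi-target model, one set of descriptors predicting several endpoints jointly, can be sketched with any natively multi-output regressor. Data below are random placeholders, not the paper's VOC dataset or its five descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Simultaneous prediction of three correlated endpoints (standing in for
# log kO3, log kOH, log kNO3) from five molecular descriptors (synthetic).

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 5))                   # 5 descriptors per compound
W = rng.normal(size=(5, 3))
Y = X @ W + 0.1 * rng.normal(size=(120, 3))     # 3 correlated endpoints

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, Y_tr)                           # natively multi-output

r2 = r2_score(Y_te, model.predict(X_te), multioutput="raw_values")
print("R^2 per endpoint:", r2)
```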

  15. Computer Based Porosity Design by Multi Phase Topology Optimization

    NASA Astrophysics Data System (ADS)

    Burblies, Andreas; Busse, Matthias

    2008-02-01

    A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of the optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage of a continuous change of mass through growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Porosity structures designed in this way can be manufactured by new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D-Printing and Selective Laser Sintering methods in order to produce very stiff, lightweight components with graded porosities calculated by MPTO.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is the ability to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
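
    The basic mechanism can be illustrated in a few lines: the finest-scale wavelet detail coefficients spike where a signal jumps between bistable branches. The Lipschitz-exponent machinery of the paper is considerably more discriminating, especially under noise; this sketch only shows the underlying idea on synthetic data:

```python
import numpy as np
import pywt

# Locate an abrupt state shift from the single-level DWT detail band.
# Synthetic bistable-looking signal with Gaussian noise.

rng = np.random.default_rng(0)
t = np.arange(4096)
signal = np.where(t < 2500, 0.2, 0.8) + 0.05 * rng.normal(size=t.size)

cA, cD = pywt.dwt(signal, "db4")        # approximation / detail coefficients
i = int(np.argmax(np.abs(cD)))          # largest detail spike marks the jump
print(f"state shift detected near sample {2 * i} (true shift at 2500)")
```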

  17. Efficient Integrative Multi-SNP Association Analysis via Deterministic Approximation of Posteriors.

    PubMed

    Wen, Xiaoquan; Lee, Yeji; Luca, Francesca; Pique-Regi, Roger

    2016-06-02

    With the increasing availability of functional genomic data, incorporating genomic annotations into genetic association analysis has become a standard procedure. However, the existing methods often lack rigor and/or computational efficiency and consequently do not maximize the utility of functional annotations. In this paper, we propose a rigorous inference procedure to perform integrative association analysis incorporating genomic annotations for both traditional GWASs and emerging molecular QTL mapping studies. In particular, we propose an algorithm, named deterministic approximation of posteriors (DAP), which enables highly efficient and accurate joint enrichment analysis and identification of multiple causal variants. We use a series of simulation studies to highlight the power and computational efficiency of our proposed approach and further demonstrate it by analyzing the cross-population eQTL data from the GEUVADIS project and the multi-tissue eQTL data from the GTEx project. In particular, we find that genetic variants predicted to disrupt transcription factor binding sites are enriched in cis-eQTLs across all tissues. Moreover, the enrichment estimates obtained across the tissues are correlated with the cell types for which the annotations are derived. Copyright © 2016 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  18. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    NASA Astrophysics Data System (ADS)

    Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.

    2016-09-01

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming models. Hence, despite its importance, no efficient GPGPU implementation had been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to a highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force-computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  19. [The specialty clinical centers within the structure of the regional multi-specialty hospital].

    PubMed

    Fadeev, M G

    2008-01-01

    An analysis is presented of the functioning of the regional referral clinical center of hand surgery, the eye injury center, the pediatric burns center and the neurosurgical center, situated on the basis of large multi-field hospitals of the City of Ekaterinburg. Common conditions of their activity are identified, such as the availability of experienced personnel and the support of medical academy chairs. The special referral clinical centers, organized prior to perestroika and the subsequent reforms, continue to function successfully, providing high-tech medical care to the patients of the megapolis and to the inhabitants of Sverdlovskaya Oblast. The effectiveness and future promise of special referral clinical centers embedded in the structure of municipal multi-field hospitals under ongoing health reforms are demonstrated.

  20. Molecular and electronic structures of M2O7 (M = Mn, Tc, Re)

    DOE PAGES

    Lawler, Keith V.; Childs, Bradley C.; Mast, Daniel S.; ...

    2017-02-21

    The molecular and electronic structures of the Group 7b heptoxides were investigated by computational methods, as both isolated molecules and in the solid state. The metal-oxygen-metal bending angle of the single molecule increases with increasing atomic number, with Re2O7 preferring a linear structure. Natural bond orbital and localized orbital bonding analyses indicate that there is a three-center covalent bond between the metal atoms and the bridging oxygen, and the increasing ionic character of the bonds favors larger bond angles. The calculations accurately reproduce the experimental crystal structures within a few percent. Analysis of the band structures and density of states shows similar bonding for all of the solid-state heptoxides, including the presence of the three-center covalent bond. DFT+U simulations show that PBE-D3 underpredicts the band gap by ~0.2 eV due to an under-correlation of the metal d conducting states. Homologue and compression studies show that Re2O7 adopts a polymeric structure because the Re-oxide tetrahedra are easily distorted by packing stresses to form additional three-center covalent bonds.

  1. Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models

    ERIC Educational Resources Information Center

    Dickes, Amanda Catherine; Sengupta, Pratim

    2013-01-01

    In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these…
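    The record is truncated above, but the flavor of such a model is easy to sketch. Below is a toy individual-based predator-prey step in Python; every rule and rate is invented for illustration and is not taken from the study or its MABM.

```python
import random

def step(prey, predators):
    """One tick of a toy agent-based predator-prey model.
    prey/predators are lists of per-agent energy values."""
    # each prey agent reproduces with a small probability
    prey = prey + [10] * sum(random.random() < 0.05 for _ in prey)
    survivors = []
    for energy in predators:
        energy -= 1                                      # metabolic cost per tick
        if prey and random.random() < min(1.0, 0.02 * len(prey)):
            prey.pop()                                   # a hunt succeeds
            energy += 4
        if energy > 12:                                  # reproduce when well fed
            survivors += [energy - 6, 6]
        elif energy > 0:
            survivors.append(energy)
    return prey, survivors

prey, predators = [10] * 100, [8] * 10
for t in range(50):
    prey, predators = step(prey, predators)
    # population counts oscillate in loose predator-prey fashion
```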

  2. Final report for Conference Support Grant "From Computational Biophysics to Systems Biology - CBSB12"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansmann, Ulrich H.E.

    2012-07-02

    This report summarizes the outcome of the international workshop From Computational Biophysics to Systems Biology (CBSB12), which was held June 3-5, 2012, at the University of Tennessee Conference Center in Knoxville, TN, and supported by DOE through the Conference Support Grant 120174. The purpose of CBSB12 was to provide a forum for interaction between a data-mining-oriented systems biology community and a simulation- and first-principles-oriented computational biophysics/biochemistry community. CBSB12 was the sixth in a series of workshops of the same name organized in recent years, and the second held in the USA. As in previous years, it gave researchers from physics, biology, and computer science an opportunity to acquaint each other with current trends in computational biophysics and systems biology, to explore venues of cooperation, and to establish together a detailed understanding of cells at a molecular level. The conference grant of $10,000 was used to cover registration fees and provide travel fellowships to selected students and postdoctoral scientists. By educating graduate students and providing a forum for young scientists to perform research into the working of cells at a molecular level, the workshop adds to DOE's mission of paving the way to exploit the abilities of living systems to capture, store and utilize energy.

  3. The birth of quantum networks: merging remote entanglement with local multi-qubit control

    NASA Astrophysics Data System (ADS)

    Hanson, Ronald

    The realization of a highly connected network of qubit registers is a central challenge for quantum information processing and long-distance quantum communication. Diamond spins associated with NV centers are promising building blocks for such a network: they combine a coherent spin-photon interface that has already enabled creation of spin-spin entanglement over 1 km with a local register of robust and well-controlled nuclear spin qubits for information processing and error correction. We are now entering a new research stage in which we can exploit these features simultaneously and build multi-qubit networks. I will present our latest results towards the first such experiment: entanglement distillation between remote quantum network nodes. Finally, I will discuss the challenges and opportunities ahead on the road to large-scale networks of qubit registers for quantum computation and communication.

  4. Reacting Multi-Species Gas Capability for USM3D Flow Solver

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Schuster, David M.

    2012-01-01

    The USM3D Navier-Stokes flow solver contributed heavily to the NASA Constellation Project (CxP) as a highly productive computational tool for generating the aerodynamic databases for the Ares I and V launch vehicles and Orion launch abort vehicle (LAV). USM3D is currently limited to ideal-gas flows, which are not adequate for modeling the chemistry or temperature effects of hot-gas jet flows. This task was initiated to create an efficient implementation of multi-species gas and equilibrium chemistry into the USM3D code to improve its predictive capabilities for hot jet impingement effects. The goal of this NASA Engineering and Safety Center (NESC) assessment was to implement and validate a simulation capability to handle real-gas effects in the USM3D code. This document contains the outcome of the NESC assessment.
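    For context, the basic multi-species extension of the ideal-gas law replaces the single gas constant with a mass-fraction-weighted mixture value (a standard relation from gas dynamics, not quoted from the report):

$$ p = \rho T \sum_i Y_i R_i, \qquad R_i = \frac{R_u}{W_i}, $$

    where $Y_i$ is the mass fraction and $W_i$ the molecular weight of species $i$, and $R_u$ is the universal gas constant; an equilibrium-chemistry model then determines the $Y_i$ from the local pressure and temperature.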

  5. Microgravity

    NASA Image and Video Library

    2001-06-05

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101830, and TBD).

  6. Microgravity

    NASA Image and Video Library

    2001-06-05

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. Key elements are labeled in other images (0101754, 0101829, 0101830).

  7. Microgravity

    NASA Image and Video Library

    2001-06-05

    This computer-generated image depicts the Materials Science Research Rack-1 (MSRR-1) being developed by NASA's Marshall Space Flight Center and the European Space Agency (ESA) for placement in the Destiny laboratory module aboard the International Space Station. The rack is part of the planned Materials Science Research Facility (MSRF) and is expected to include two furnace module inserts, a Quench Module Insert (being developed by NASA's Marshall Space Flight Center) to study directional solidification in rapidly cooled alloys and a Diffusion Module Insert (being developed by the European Space Agency) to study crystal growth, and a transparent furnace (being developed by NASA's Space Product Development program). Multi-user equipment in the rack is being developed under the auspices of NASA's Office of Biological and Physical Research (OBPR) and ESA. A larger image is available without labels (No. 0101755).

  8. [Measurement of scatter radiation on MDCT equipment using an OSL dosimeter].

    PubMed

    Tomita, Hironobu; Morozumi, Kunihiko

    2004-11-01

    The recent introduction of multi-detector row computed tomography (MDCT) has made it possible to scan the entire abdomen within approximately 10 sec in procedures such as interventional radiology computed tomography (IVRCT), which are associated with operator exposure. Therefore, anxious patients and patients who are not able to remain still can be examined with an assistant. In the present study, radiation exposure to the assistant was estimated, and the distribution of scattered radiation near the gantry was measured using an optically stimulated luminescence (OSL) dosimeter. Simultaneous measurements were obtained using a direction storage (DIS) dosimeter for reference. The maximum value of 1.47 mSv per examination was obtained at the point closest to the gantry's center (50 cm from the center at a height of 150 cm above the floor). In addition, scattered radiation decreased as the measurement point was moved further from the gantry's center, falling below the limit of detection (0.1 mSv or less) at 200 cm and at the sides of the gantry. OSL dosimeters are also employed as personal dosimeters, permitting reliable values to be obtained easily. They were found to be an effective tool for the measurement of scattered radiation, as in the present study, helping to provide better understanding of the distribution of scattered radiation within the CT room.
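    As a rough plausibility check (our own arithmetic, not the paper's), a point-source inverse-square falloff from the 50 cm reading predicts a dose at 200 cm of about

$$ 1.47\ \mathrm{mSv} \times \left(\frac{50}{200}\right)^{2} \approx 0.09\ \mathrm{mSv}, $$

    consistent with the reported values falling below the 0.1 mSv detection limit at that distance; the actual scattering geometry will of course modify the exact dependence.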

  9. Protein Simulation Data in the Relational Model.

    PubMed

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
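    The dimensional ("star schema") approach the authors describe can be sketched compactly. The snippet below uses SQLite as a stand-in for SQL Server, and all table and column names are our illustrative assumptions, not the published warehouse design: small dimension tables describe simulations and atoms, while a large fact table holds per-frame coordinates keyed to them.

```python
# A minimal dimensional-schema sketch for MD trajectory data (illustrative names).
import sqlite3

con = sqlite3.connect("simulations.db")
con.executescript("""
CREATE TABLE dim_simulation (
    sim_id    INTEGER PRIMARY KEY,
    protein   TEXT,      -- e.g. PDB identifier
    temp_k    REAL       -- simulation temperature
);
CREATE TABLE dim_atom (
    atom_id   INTEGER PRIMARY KEY,
    residue   TEXT,
    element   TEXT
);
CREATE TABLE fact_coordinate (
    sim_id    INTEGER REFERENCES dim_simulation(sim_id),
    atom_id   INTEGER REFERENCES dim_atom(atom_id),
    frame     INTEGER,
    x REAL, y REAL, z REAL
);
-- index chosen for the common "all atoms of one frame" query pattern
CREATE INDEX idx_fact ON fact_coordinate(sim_id, frame);
""")
con.close()
```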

  10. Protein Simulation Data in the Relational Model

    PubMed Central

    Simms, Andrew M.; Daggett, Valerie

    2011-01-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost—significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server. PMID:23204646

  11. TARGET Research Goals

    Cancer.gov

    TARGET researchers use various sequencing and array-based methods to examine the genomes, transcriptomes, and, for some diseases, epigenomes of select childhood cancers. This “multi-omic” approach generates a comprehensive profile of molecular alterations for each cancer type. Alterations are changes in DNA or RNA, such as rearrangements in chromosome structure or variations in gene expression, respectively. Through computational analyses and assays to validate biological function, TARGET researchers predict which alterations disrupt the function of a gene or pathway and promote cancer growth, progression, and/or survival. Researchers identify candidate therapeutic targets and/or prognostic markers from the cancer-associated alterations.

  12. Physics Computing '92: Proceedings of the 4th International Conference

    NASA Astrophysics Data System (ADS)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations 
* Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants

  13. The Spider Center Wide File System; From Concept to Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shipman, Galen M; Dillow, David A; Oral, H Sarp

    2009-01-01

    The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper details the overall architecture of the Spider system, the challenges in deploying and initially testing a file system of this scale, and novel solutions to these challenges that offer key insights into future file system design.

  14. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.; Lee, Chi-Miag (Technical Monitor)

    2001-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.

  15. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  16. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  17. Tension Structure

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The fabric structure pictured is the Campus Center of La Verne College, La Verne, California. Unlike the facilities shown on the preceding pages, it is not air-supported. It is a "tension structure," its multi-coned fabric membrane supported by a network of cables attached to steel columns which function like circus tent poles. The spider-web in the accompanying photo is a computer graph of the tension pattern. The designers, Geiger-Berger Associates PC, of New York City, conducted lengthy computer analysis to determine the best placement of columns and cables. The firm also served as structural engineering consultant on the Pontiac Silverdome and a number of other large fabric structures. Built by Birdair Structures, Inc., Buffalo, New York, the La Verne Campus Center was the first permanent facility in the United States enclosed by the space-spinoff fabric made of Owens-Corning Beta fiber glass coated with Du Pont Teflon TFE. The flexible design permits rearrangement of the interior to accommodate athletic events, student activities, theatrical productions and other recreational programs. Use of fabric covering reduced building cost 30 percent below conventional construction.

  18. Eddy Current Influences on the Dynamic Behaviour of Magnetic Suspension Systems

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Bloodgood, Dale V.

    1998-01-01

    This report will summarize some results from a multi-year research effort at NASA Langley Research Center aimed at the development of an improved capability for practical modelling of eddy current effects in magnetic suspension systems. Particular attention is paid to large-gap systems, although generic results applicable to both large-gap and small-gap systems are presented. It is shown that eddy currents can significantly affect the dynamic behavior of magnetic suspension systems, but that these effects can be amenable to modelling and measurement. Theoretical frameworks are presented, together with comparisons of computed and experimental data particularly related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center, and the Annular Suspension and Pointing System at Old Dominion University. In both cases, practical computations are capable of providing reasonable estimates of important performance-related parameters. The most difficult case is seen to be that of eddy currents in highly permeable material, due to the low skin depths. Problems associated with specification of material properties and areas for future research are discussed.
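    The skin-depth difficulty mentioned above follows from the standard expression (a textbook relation, not specific to this report)

$$ \delta = \sqrt{\frac{2}{\mu \sigma \omega}}, $$

    where $\mu$ is the permeability, $\sigma$ the conductivity, and $\omega$ the angular frequency; in highly permeable material (large $\mu$) the eddy currents are confined to a thin surface layer, which makes them expensive to resolve numerically.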

  19. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

    The finite element method has proven to be an invaluable tool for analysis and design of complex, high performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format so that SAPNEW is more readily usable by researchers at NASA Lewis Research Center.

  20. Size-guided multi-seed heuristic method for geometry optimization of clusters: Application to benzene clusters.

    PubMed

    Takeuchi, Hiroshi

    2018-05-08

    Since searching for the global minimum on the potential energy surface of a cluster is very difficult, many geometry optimization methods have been proposed, in which initial geometries are randomly generated and subsequently improved with different algorithms. In this study, a size-guided multi-seed heuristic method is developed and applied to benzene clusters. It produces initial configurations of the cluster with n molecules from the lowest-energy configurations of the cluster with n - 1 molecules (seeds). The initial geometries are further optimized with the geometrical perturbations previously used for molecular clusters. These steps are repeated until the cluster reaches a predefined size. The method locates putative global minima of benzene clusters with up to 65 molecules. The performance of the method is discussed in terms of computational cost, rates of locating the global minima, and energies of initial geometries. © 2018 Wiley Periodicals, Inc.
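    The size-guided, seed-based loop described above can be sketched as follows, with a Lennard-Jones energy standing in for the benzene-benzene potential; the trial counts, perturbation scale, and number of retained seeds are arbitrary illustrative choices, not the paper's settings.

```python
# Sketch of a "grow from seeds" global-optimization loop (LJ stand-in potential).
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat):
    x = flat.reshape(-1, 3)
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    r = d[np.triu_indices(len(x), 1)]          # unique pair distances
    return np.sum(4 * (r**-12 - r**-6))

def grow(seeds, n_trials=20, n_keep=3):
    """Build size n+1 candidates by adding one particle to size-n seeds."""
    best = []
    for s in seeds:
        for _ in range(n_trials):
            anchor = s[np.random.randint(len(s))]
            cand = np.vstack([s, anchor + np.random.normal(scale=1.2, size=3)])
            res = minimize(lj_energy, cand.ravel(), method="L-BFGS-B")
            best.append((res.fun, res.x.reshape(-1, 3)))
    best.sort(key=lambda t: t[0])
    return [g for _, g in best[:n_keep]]       # lowest-energy minima become new seeds

seeds = [np.random.rand(2, 3) * 2]             # start from a size-2 cluster
for n in range(2, 8):                          # grow until the predefined size
    seeds = grow(seeds)
```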

  1. Polar order in nanostructured organic materials

    NASA Astrophysics Data System (ADS)

    Sayar, M.; Olvera de la Cruz, M.; Stupp, S. I.

    2003-02-01

    Achiral multi-block liquid crystals are not expected to form polar domains. Recently, however, films of nanoaggregates formed by multi-block rodcoil molecules were identified as the first example of achiral single-component materials with macroscopic polar properties. By solving an Ising-like model with dipolar and asymmetric short-range interactions, we show here that polar domains are stable in films composed of aggregates as opposed to isolated molecules. Unlike classical molecular systems, these nanoaggregates have large intralayer spacings (a approx 8 nm), leading to a reduction in the repulsive dipolar interactions which oppose polar order within layers. In finite-thickness films of nanostructures, this effect enables the formation of polar domains. We compute exactly the energies of the possible structures consistent with the experiments as a function of film thickness at zero temperature (T). We also provide Monte Carlo simulations at non-zero T for a disordered hexagonal lattice that resembles the smectic-like packing in these nanofilms.
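    A minimal Metropolis Monte Carlo sketch of such an Ising-like competition, short-range "ferroelectric" coupling versus a depolarizing dipolar term, is given below; for brevity the dipolar interaction is treated at mean-field level, and the lattice and couplings are illustrative, not the paper's parameters.

```python
# Toy 2D Ising model: nearest-neighbor coupling J favors polar order,
# a mean-field dipolar term D penalizes net polarization.
import numpy as np

rng = np.random.default_rng(0)
L, J, D, T = 16, 1.0, 0.2, 1.0
s = rng.choice([-1, 1], size=(L, L))

def delta_e(s, i, j):
    nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    d_short = 2 * J * s[i, j] * nn                    # cost of flipping vs neighbors
    d_dip = 4 * D * (1 - s[i, j] * s.sum()) / (L * L)  # mean-field dipolar penalty
    return d_short + d_dip

for _ in range(100_000):                               # Metropolis updates
    i, j = rng.integers(L, size=2)
    dE = delta_e(s, i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i, j] *= -1
print("net polarization per site:", s.mean())
```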

  2. Quantum Simulation of Helium Hydride Cation in a Solid-State Spin Register.

    PubMed

    Wang, Ya; Dolde, Florian; Biamonte, Jacob; Babbush, Ryan; Bergholm, Ville; Yang, Sen; Jakobi, Ingmar; Neumann, Philipp; Aspuru-Guzik, Alán; Whitfield, James D; Wrachtrup, Jörg

    2015-08-25

    Ab initio computation of molecular properties is one of the most promising applications of quantum computing. While this problem is widely believed to be intractable for classical computers, efficient quantum algorithms exist which have the potential to vastly accelerate research throughput in fields ranging from material science to drug discovery. Using a solid-state quantum register realized in a nitrogen-vacancy (NV) defect in diamond, we compute the bond dissociation curve of the minimal basis helium hydride cation, HeH(+). Moreover, we report an energy uncertainty (given our model basis) of the order of 10(-14) hartree, which is 10 orders of magnitude below the desired chemical precision. As NV centers in diamond provide a robust and straightforward platform for quantum information processing, our work provides an important step toward a fully scalable solid-state implementation of a quantum chemistry simulator.
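    For orientation, a classical toy counterpart of what the spin register computes is shown below: the ground-state energy at each bond length, obtained by diagonalizing a small model Hamiltonian. The 2x2 matrix elements are invented for illustration and are not the true minimal-basis HeH+ integrals.

```python
# Toy bond-dissociation curve from a generic 2x2 avoided-crossing Hamiltonian.
import numpy as np

def ground_state_energy(R):
    h11 = -2.0 + 1.0 / R           # "ionic" diagonal term (toy)
    h22 = -1.5 * np.exp(-R)        # "covalent" diagonal term (toy)
    h12 = 0.3 * np.exp(-0.5 * R)   # off-diagonal coupling (toy)
    H = np.array([[h11, h12], [h12, h22]])
    return np.linalg.eigvalsh(H)[0]  # lowest eigenvalue = ground state

curve = [(R, ground_state_energy(R)) for R in np.linspace(0.5, 5.0, 46)]
```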

  3. The potential of multi-port optical memories in digital computing

    NASA Technical Reports Server (NTRS)

    Alford, C. O.; Gaylord, T. K.

    1975-01-01

    A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.

  4. Primary Immunodeficiencies: “New” Disease in an Old Country

    PubMed Central

    Lee, Pamela P W; Lau, Yu-Lung

    2009-01-01

    Primary immunodeficiency disorders (PIDs) are rare inborn errors of the immune system. Patients with PIDs are unique models that exemplify the functional and phenotypic consequences of various immune defects underlying infections, autoimmunity, lymphoproliferation, allergy and cancer. Over 150 PID syndromes have been characterized in the past 60 years, with an ever-growing list of new entities being discovered. Because of their rarity, multi-center collaboration for pooled data analysis and molecular studies is important to gain meaningful insights into the phenotypic and genetic diversities of PIDs. In this article, we summarize our research findings on PIDs in the Chinese population over the past 20 years. Close collaboration among various immunology centers, cross-referrals and systematic data analysis constitute the foundation for research on PIDs. Future directions include establishment of a national PID registry, raising awareness of PIDs and securing sufficient resources for patient care and scientific research. PMID:20003815

  5. MiDas: Automatic Extraction of a Common Domain of Discourse in Sleep Medicine for Multi-center Data Integration

    PubMed Central

    Sahoo, Satya S.; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S.; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper level ontology that models a common set of terms at multiple-levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine, from Physio-MIMI’s Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry. PMID:22195180

  6. MiDas: automatic extraction of a common domain of discourse in sleep medicine for multi-center data integration.

    PubMed

    Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper level ontology that models a common set of terms at multiple-levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine, from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry.
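    The core extraction idea can be illustrated with a hypothetical sketch (this is not the published MiDas algorithm): given the is-a edges of a domain ontology, keep the terms that lie above every leaf term of interest, which yields a small upper "domain of discourse". Ontology terms and edges below are invented examples.

```python
# Hypothetical illustration of upper-domain extraction from is-a edges.
import networkx as nx

g = nx.DiGraph()  # edge direction: child -> parent (is-a)
g.add_edges_from([
    ("apnea_event", "sleep_event"), ("arousal", "sleep_event"),
    ("sleep_event", "clinical_finding"), ("clinical_finding", "entity"),
    ("sensor", "device"), ("device", "entity"),
])

def upper_domain(graph, leaves):
    """Terms reachable (as ancestors) from every leaf of interest."""
    common = None
    for leaf in leaves:
        anc = nx.descendants(graph, leaf)  # nodes reachable from leaf = its ancestors
        common = anc if common is None else common & anc
    return common

print(upper_domain(g, ["apnea_event", "arousal"]))
# -> {'sleep_event', 'clinical_finding', 'entity'}
```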

  7. Ammonia Oxidation by Abstraction of Three Hydrogen Atoms from a Mo–NH 3 Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Heiden, Zachariah M.; Wiedner, Eric S.

    We report ammonia oxidation by homolytic cleavage of all three H atoms from a Mo-15NH3 complex using the 2,4,6-tri-tert-butylphenoxyl radical to afford a Mo-alkylimido (Mo=15NR) complex (R = 2,4,6-tri-t-butylcyclohexa-2,5-dien-1-one). Reductive cleavage of Mo=15NR generates a terminal Mo≡N nitride, and a [Mo-15NH]+ complex is formed by protonation. Computational analysis describes the energetic profile for the stepwise removal of three H atoms from the Mo-15NH3 complex and the formation of Mo=15NR. Acknowledgment: This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR and mass spectrometry experiments were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. The authors thank Dr. Eric D. Walter and Dr. Rosalie Chu for assistance in performing EPR and mass spectrometry analysis, respectively. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.
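    The energetics of such stepwise H-atom removals are commonly analyzed with a thermochemical cycle from the general proton-coupled electron transfer literature (stated here for context, not taken from this paper):

$$ \mathrm{BDFE(X{-}H)} = 1.37\,\mathrm{p}K_a + 23.06\,E^{\circ} + C_{\mathrm{G}}, $$

    in kcal/mol, where $E^{\circ}$ (in volts) is the relevant reduction potential, $1.37 = 2.303RT$ at 298 K, $23.06$ converts volts to kcal/mol, and $C_{\mathrm{G}}$ is a solvent-dependent constant; each successive N-H bond of the Mo-NH3 complex has its own effective BDFE along the profile the authors compute.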

  8. Integrative Pericyclic Cascade: An Atom Economic, Multi C-C Bond-Forming Strategy for the Construction of Molecular Complexity.

    PubMed

    Tejedor, David; Delgado-Hernández, Samuel; Peyrac, Jesús; González-Platas, Javier; García-Tellado, Fernando

    2017-07-26

    An all-pericyclic manifold is developed for the construction of topologically diverse, structurally complex and natural product-like polycyclic chemotypes. The manifold uses readily accessible tertiary propargyl vinyl ethers as substrates and imidazole as a catalyst to form up to two new rings, three new C-C bonds, six stereogenic centers and one transannular oxo-bridge. The manifold is efficient, scalable and instrumentally simple to perform and entails a propargyl Claisen rearrangement-[1,3]H shift, an oxa-6π-electrocyclization, and an intramolecular Diels-Alder reaction. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Understanding Slat Noise Sources

    NASA Technical Reports Server (NTRS)

Khorrami, Mehdi R.

    2003-01-01

    Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.

  10. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond

    2015-01-01

    The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity-building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time products that benefit public health, safety, and economic interests, and may have an even greater impact during disaster events by providing local predictions of weather-related hazards and of the impacts that local weather events may have during the recovery phase.
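    A minimal IaaS launch of such a preconfigured modeling environment might look like the sketch below, which assumes standard AWS credentials and an AMI that already contains a WRF-EMS installation; the AMI id, instance type, and start-up script path are placeholders, not details from the presentation.

```python
# Sketch of on-demand deployment of a forecast node on EC2 (placeholder values).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-XXXXXXXX",        # placeholder: prebuilt image with WRF-EMS
    InstanceType="c5.18xlarge",    # compute-optimized node (our assumption)
    MinCount=1,
    MaxCount=1,
    # hypothetical start-up script that kicks off the forecast on boot
    UserData="#!/bin/bash\n/opt/wrfems/run_forecast.sh\n",
)
print("launched:", resp["Instances"][0]["InstanceId"])
```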

  11. Multi-scale theory-assisted nano-engineering of plasmonic-organic hybrid electro-optic device performance

    NASA Astrophysics Data System (ADS)

    Elder, Delwin L.; Johnson, Lewis E.; Tillack, Andreas F.; Robinson, Bruce H.; Haffner, Christian; Heni, Wolfgang; Hoessbacher, Claudia; Fedoryshyn, Yuriy; Salamin, Yannick; Baeuerle, Benedikt; Josten, Arne; Ayata, Masafumi; Koch, Ueli; Leuthold, Juerg; Dalton, Larry R.

    2018-02-01

    Multi-scale (correlated quantum and statistical mechanics) modeling methods have been advanced and employed to guide the improvement of organic electro-optic (OEO) materials, including by analyzing electric field poling induced electro-optic activity in nanoscopic plasmonic-organic hybrid (POH) waveguide devices. The analysis of in-device electro-optic activity emphasizes the importance of considering both the details of intermolecular interactions within organic electro-optic materials and interactions at interfaces between OEO materials and device architectures. Dramatic improvements in electro-optic device performance, including voltage-length performance, bandwidth, energy efficiency, and optical losses, have been realized. These improvements are critical to applications in telecommunications, computing, sensor technology, and metrology. Multi-scale modeling methods illustrate the complexity of improving the electro-optic activity of organic materials, including the necessity of considering the trade-off between improving poling-induced acentric order through chromophore modification and the reduction of chromophore number density associated with such modification. Computational simulations also emphasize the importance of developing chromophore modifications that serve multiple purposes including matrix hardening for enhanced thermal and photochemical stability, control of matrix dimensionality, influence on material viscoelasticity, improvement of chromophore molecular hyperpolarizability, control of material dielectric permittivity and index of refraction properties, and control of material conductance. Consideration of new device architectures is critical to the implementation of chip-scale integration of electronics and photonics and to achieving the high bandwidths required for applications such as next generation (e.g., 5G) telecommunications.
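    The order/density trade-off mentioned above is commonly summarized by a simplified relation from the general OEO literature (not quoted from this paper):

$$ r_{33} \propto N \, \beta \, \langle \cos^{3}\theta \rangle, $$

    where $N$ is the chromophore number density, $\beta$ the molecular first hyperpolarizability, and $\langle\cos^{3}\theta\rangle$ the poling-induced acentric order parameter; bulky modifications that raise the order parameter can simultaneously lower $N$, leaving the electro-optic coefficient unchanged or worse.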

  12. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    PubMed

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
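    The reciprocal best alignment heuristic that Proteinortho extends can be sketched in a few lines: gene a in genome A and gene b in genome B are called orthologs when each is the other's best-scoring hit. The scores below are a toy stand-in for BLAST bit scores; the tool's extended multi-genome version adds considerably more machinery.

```python
# Minimal reciprocal-best-hit sketch (toy scores, not Proteinortho itself).
def best_hits(scores):
    """scores: dict (query, subject) -> bit score; returns query -> best subject."""
    best = {}
    for (q, s), bits in scores.items():
        if q not in best or bits > best[q][1]:
            best[q] = (s, bits)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best(a_vs_b, b_vs_a):
    fwd, rev = best_hits(a_vs_b), best_hits(b_vs_a)
    # keep pairs where the best hit holds in both directions
    return {(a, b) for a, b in fwd.items() if rev.get(b) == a}

a_vs_b = {("a1", "b1"): 310.0, ("a1", "b2"): 55.0, ("a2", "b2"): 120.0}
b_vs_a = {("b1", "a1"): 305.0, ("b2", "a2"): 118.0}
print(reciprocal_best(a_vs_b, b_vs_a))   # {('a1', 'b1'), ('a2', 'b2')}
```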

  13. Understanding DNA under oxidative stress and sensitization: the role of molecular modeling

    PubMed Central

    Dumont, Elise; Monari, Antonio

    2015-01-01

    DNA is constantly exposed to damaging threats arising from oxidative stress, i.e., from the presence of free radicals and reactive oxygen species. Sensitization by exogenous and endogenous compounds that strongly enhance the frequency of light-induced lesions also plays an important role. The experimental determination of DNA lesions, though difficult, is well established and allows even extremely rare DNA lesions to be elucidated. In parallel, molecular modeling has become fundamental to a clear understanding of the fine mechanisms of DNA defect induction. Indeed, it offers unprecedented access to atomistic or even electronic resolution, and ab initio molecular dynamics can also describe the time evolution of the molecular system and its reactivity. Yet the modeling of DNA (photo-)reactions necessitates elaborate multi-scale methodologies to tackle damage-inducing reactivity that takes place in a complex environment. The double-stranded DNA environment is characterized by very high flexibility and by a strongly inhomogeneous electrostatic embedding. Additionally, one aims at capturing more subtle effects, such as the sequence selectivity that is of critical importance for DNA damage. The structure and dynamics of DNA/sensitizer complexes, as well as the photo-induced electron- and energy-transfer phenomena taking place upon sensitization, should be carefully modeled. Finally, the factors inducing different repair ratios for different lesions should also be rationalized. In this review we critically analyze the different computational strategies used to model DNA lesions and sketch a clear picture of the complex interplay between reactivity and structural factors. The use of proper multi-scale modeling leads to in-depth comprehension of DNA lesion mechanisms and to the rational design of new chemotherapeutic agents. PMID:26236706

  14. Submolecular Gates Self-Assemble for Hot-Electron Transfer in Proteins.

    PubMed

    Filip-Granit, Neta; Goldberg, Eran; Samish, Ilan; Ashur, Idan; van der Boom, Milko E; Cohen, Hagai; Scherz, Avigdor

    2017-07-27

    Redox reactions play key roles in fundamental biological processes. The related spatial organization of donors and acceptors is assumed to undergo evolutionary optimization facilitating charge mobilization within the relevant biological context. Experimental information from submolecular functional sites is needed to understand the organization strategies and driving forces involved in the self-development of structure-function relationships. Here we exploit chemically resolved electrical measurements (CREM) to probe the atom-specific electrostatic potentials (ESPs) in artificial arrays of bacteriochlorophyll (BChl) derivatives that provide model systems for photoexcited (hot) electron donation and withdrawal. On the basis of computations we show that native BChl's in the photosynthetic reaction center (RC) self-assemble at their ground-state as aligned gates for functional charge transfer. The combined computational and experimental results further reveal how site-specific polarizability perpendicular to the molecular plane enhances the hot-electron transport. Maximal transport efficiency is predicted for a specific, ∼5 Å, distance above the center of the metalized BChl, which is in remarkably close agreement with the distance and mutual orientation of corresponding native cofactors. These findings provide new metrics and guidelines for analysis of biological redox centers and for designing charge mobilizing machines such as artificial photosynthesis.

  15. Molecular robots with sensors and intelligence.

    PubMed

    Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi

    2014-06-17

    CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not been realized yet. This Account introduces an ongoing research project that focuses on the development of molecular robotics funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5 year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize on sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA-based computations. They also introduce novel computational models behind various kinds of molecular computers necessary for designing such computers. The amoeba robot team aims at constructing amoeba-like robots. The team is trying to incorporate motor proteins, including kinesin and microtubules (MTs), for use as actuators implemented in a liposomal compartment as a robot body. They are also developing a methodology to link DNA-based computation and molecular motor control. The slime robot team focuses on the development of slime-like robots. The team is evaluating various gels, including DNA gel and BZ gel, for use as actuators, as well as the body material to disperse various molecular devices in it. They also try to control the gel actuators by DNA signals coming from molecular computers.

  16. Metal binding mediated conformational change of XPA protein: a potential cytotoxic mechanism of nickel in the nucleotide excision repair

    PubMed Central

    Hu, Jianping; Hu, Ziheng; Zhang, Yan; Gou, Xiaojun; Mu, Ying; Wang, Lirong; Xie, Xiang-Qun

    2017-01-01

    Nucleotide excision repair (NER) is a pivotal life process for repairing DNA nucleotide mismatches caused by chemicals, metal ions, radiation, and other factors. As the initiation step of NER, the xeroderma pigmentosum complementation group A protein (XPA) recognizes damaged DNA molecules and recruits the replication protein A (RPA), another important player in the NER process. The stability of the Zn2+-chelated Zn-finger domain of the XPA center core portion (i.e., XPA98–210) is the foundation of its biological functionality, while displacement of the Zn2+ by toxic metal ions (such as Ni2+, a known human carcinogen and allergen) may impair the effectiveness of NER and hence elevate the chance of carcinogenesis. In this study, we first calculated the force field parameters for the bonded model of the metal center of the XPA98–210 system, showing that the calculated results, including charges, bonds, angles, etc., are congruent with previously reported results measured by spectrometry experiments and quantum chemistry computation. Comparative molecular dynamics simulations using these parameters then revealed the changes in the conformation and motion mode of the XPA98–210 Zn-finger after the substitution of Zn2+ by Ni2+. The results showed that Ni2+ dramatically disrupts the relative positions of the four Cys residues in the Zn-finger structure, forcing them to collapse from a tetrahedron into an almost planar arrangement. Finally, we derived the binding mode of XPA98–210 with its ligands RPA70N and DNA based on molecular docking and structural alignment. We found that the XPA98–210 Zn-finger domain primarily binds to a V-shaped cleft in RPA70N, while the cationic band in its C-terminal subdomain participates in the recognition of damaged DNA. In addition, this article sheds light on the multi-component interaction pattern among XPA, DNA, and other NER-related proteins (i.e., RPA70N, RPA70A, RPA70B, RPA70C, RPA32, and RPA14) based on previously reported structural biology information. We thus derived a putative cytotoxic mechanism associated with the nickel ion, in which Ni2+ disrupts the conformation of the XPA Zn-finger, directly weakening its interaction with RPA70N and thus lowering the effectiveness of the NER process. In sum, this work not only provides theoretical insight into the multi-protein interactions involved in the NER process and the potential cytotoxic mechanism associated with Ni2+ binding in XPA, but may also facilitate rational anti-cancer drug design based on the NER mechanism. PMID:27307058
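
    A minimal numerical sketch of the kind of bonded model described above: a metal center held by harmonic metal-sulfur bond and S-M-S angle terms, evaluated for an ideal tetrahedral Cys4 cage and for a flattened, near-planar one. All parameter values and geometries here are illustrative stand-ins, not the fitted XPA98–210 parameters.

        import numpy as np

        # Hypothetical bonded-model parameters for an M(2+)-Cys4 center
        # (illustrative only; real values come from quantum-chemistry fits).
        K_BOND, R0 = 100.0, 2.35                     # kcal/mol/A^2, A  (M-S bond)
        K_ANGLE, THETA0 = 35.0, np.deg2rad(109.5)    # kcal/mol/rad^2, tetrahedral S-M-S

        def bonded_energy(metal, sulfurs):
            """Harmonic bond + angle energy of a four-coordinate metal center."""
            vecs = sulfurs - metal
            r = np.linalg.norm(vecs, axis=1)
            e = np.sum(K_BOND * (r - R0) ** 2)       # M-S stretches
            for i in range(4):
                for j in range(i + 1, 4):
                    cos_t = vecs[i] @ vecs[j] / (r[i] * r[j])
                    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
                    e += K_ANGLE * (theta - THETA0) ** 2   # S-M-S bends
            return e

        # Ideal tetrahedron vs. a flattened, near-planar Cys4 arrangement
        tet = 2.35 * np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
        flat = 2.35 * np.array([[1, 0, 0.1], [0, 1, -0.1], [-1, 0, 0.1], [0, -1, -0.1]])
        print(bonded_energy(np.zeros(3), tet), bonded_energy(np.zeros(3), flat))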

  17. Multi-Instrument Tools and Services to Access NASA Earth Science Data from the GSFC Earth Sciences Data and Information Services Center

    NASA Technical Reports Server (NTRS)

    Kempler, Steve; Leptoukh, Greg; Lynnes, Chris

    2010-01-01

    The purpose of this presentation is to describe multi-instrument tools and services that facilitate access and usability of NASA Earth science data at Goddard Space Flight Center (GSFC). NASA's Earth observing system includes 14 satellites. Topics include EOSDIS facilities and system architecture, an overview of the GSFC Earth Sciences Data and Information Services Center (GES DISC) mission, Mirador data search, Giovanni, multi-instrument data exploration, Google Earth™, data merging, and applications.

  18. Multilevel Parallelization of AutoDock 4.2.

    PubMed

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. The performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales nearly linearly on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization it can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
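
    A minimal sketch (in Python with mpi4py, not the actual mpAD4 source) of the two-level pattern the paper describes: MPI ranks work through a list of docking jobs, and each process caches the receptor grid maps so that consecutive dockings against the same receptor skip the I/O. The file names and the reader/docking stubs are hypothetical.

        from mpi4py import MPI   # assumes an MPI-enabled environment

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        ligands = [f"ligand_{i:04d}.pdbqt" for i in range(1000)]  # hypothetical inputs
        grid_cache = {}   # receptor -> grid maps, reused from docking to docking

        def read_maps(fld_file):
            return {"source": fld_file}          # stub for the real grid-map reader

        def load_grid_maps(receptor):
            if receptor not in grid_cache:       # touch disk only once per process
                grid_cache[receptor] = read_maps(receptor)
            return grid_cache[receptor]

        def dock(ligand, maps):
            # stand-in for one (internally multithreaded, in the real code) docking run
            print(f"rank {rank}: docking {ligand} against {maps['source']}")

        for ligand in ligands[rank::size]:       # static round-robin job distribution
            dock(ligand, load_grid_maps("receptor.maps.fld"))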

  19. Chemical Structure-Biological Activity Models for Pharmacophores’ 3D-Interactions

    PubMed Central

    Putz, Mihai V.; Duda-Seiman, Corina; Duda-Seiman, Daniel; Putz, Ana-Maria; Alexandrescu, Iulia; Mernea, Maria; Avram, Speranta

    2016-01-01

    Within medicinal chemistry today, pharmacodynamics seeks qualitative (for understanding) and quantitative (for predicting) mechanisms/models by which a given chemical structure or series of congeners acts on biological sites, either by focused interaction/therapy or by diffuse/hazardous influence. To this end, the present review exposes three fertile directions for approaching biological activity through chemical structural causes: the special computing trace of the algebraic structure-activity relationship (SPECTRAL-SAR), offering the full analytical counterpart of multi-variate computational regression, and the minimal topological difference (MTD) as the revived precursor of comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). All of these methods and algorithms are presented, discussed, and exemplified on relevant medicinal chemistry systems, each involving 3D pharmacophore interactions: proton pump inhibitors belonging to the 4-indolyl,2-guanidinothiazole class of derivatives blocking acid secretion from parietal cells in the stomach; the antiviral activity of 1-[(2-hydroxyethoxy)-methyl]-6-(phenylthio)thymine congeners (HEPT ligands) against the Human Immunodeficiency Virus type 1 (HIV-1); and new pharmacophores for treating severe genetic disorders (such as depression and psychosis). PMID:27399692
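
    Since all of the reviewed approaches build on multi-variate regression of activity against structural descriptors, the following is a minimal ordinary-least-squares sketch of that common baseline. The descriptors and activities are synthetic stand-ins, not data from the review, and this is not the SPECTRAL-SAR algorithm itself.

        import numpy as np

        # Generic multi-variate QSAR regression: fit activity y against a few
        # molecular descriptors (e.g., logP, polarizability, E_HOMO) by OLS.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 3))                 # 20 congeners x 3 descriptors
        w_true = np.array([0.8, -0.5, 0.3])
        y = X @ w_true + 0.05 * rng.normal(size=20)  # "observed" activities

        A = np.column_stack([np.ones(len(X)), X])    # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ coef
        r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"fitted coefficients: {coef}, r^2 = {r2:.3f}")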

  20. Computer Simulation of Fracture in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2006-01-01

    Aerogels are of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While the gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. In this work, we investigate the strength and fracture behavior of silica aerogels using a molecular statics-based computer simulation technique. The gel structure is simulated via a Diffusion Limited Cluster Aggregation (DLCA) algorithm, which produces fractal structures representing experimentally observed aggregates of so-called secondary particles, themselves composed of amorphous silica primary particles an order of magnitude smaller. We have performed multi-length-scale simulations of fracture in silica aerogels, in which the interaction between two secondary particles is assumed to be described by a Morse pair potential parameterized such that the potential range is much smaller than the secondary particle size. These Morse parameters are obtained by atomistic simulation of models of the experimentally observed amorphous silica "bridges," with the fracture behavior of these bridges modeled via molecular statics using a Morse/Coulomb potential for silica. We consider the energetics of the fracture, and compare qualitative features of low- and high-density gel fracture.
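
    A minimal sketch of the Morse pair potential named above, V(r) = D(1 - exp(-a(r - r0)))^2 - D, with the stiffness chosen so the potential is short-ranged relative to the particle size. D, a, and r0 are illustrative values, not the fitted bridge parameters.

        import numpy as np

        # Morse pair potential and the corresponding pair force F = -dV/dr.
        D, a, r0 = 1.0, 8.0, 1.0   # well depth, stiffness (short-ranged when a*r0 >> 1), equilibrium separation

        def morse_energy(r):
            return D * (1.0 - np.exp(-a * (r - r0))) ** 2 - D

        def morse_force(r):
            x = np.exp(-a * (r - r0))
            return -2.0 * D * a * x * (1.0 - x)

        for ri in np.linspace(0.9, 1.5, 7):
            print(f"r = {ri:.2f}  V = {morse_energy(ri):+.4f}  F = {morse_force(ri):+.4f}")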

  1. Analyzing the requirements for a robust security criteria and management of multi-level security in the clouds

    NASA Astrophysics Data System (ADS)

    Farroha, Bassam S.; Farroha, Deborah L.

    2011-06-01

    The new corporate approach to efficient processing and storage is migrating from in-house service-center services to the newly coined approach of cloud computing. This approach advocates thin clients and the provision of services by the service provider over time-shared resources. The concept is not new; however, the implementation approach presents a strategic shift in the way organizations provision and manage their IT resources. The requirements on some of the data sets targeted to run on the cloud vary depending on the data type, originator, user, and confidentiality level. Additionally, systems that fuse such data would have to deal with classifying the product and clearing the computing resources before a new application is allowed to execute. This indicates that we could end up with a multi-level security system that needs to follow specific rules and can send output only to appropriately protected networks and systems, so as to avoid data spills or contaminated resources. The paper discusses these requirements and their potential impact on cloud architecture. Additionally, the paper discusses the unexpected advantages of the cloud framework, which provides a sophisticated environment for information sharing and data mining.

  2. Estimating the Earth's geometry, rotation and gravity field using a multi-satellite SLR solution

    NASA Astrophysics Data System (ADS)

    Stefka, V.; Blossfeld, M.; Mueller, H.; Gerstl, M.; Panafidina, N.

    2012-12-01

    Satellite Laser Ranging (SLR) is the only technique that can determine station coordinates, Earth Orientation Parameters (EOPs), and Stokes coefficients of the Earth's gravity field in one common adjustment. These parameters form the so-called "three pillars" (Plag & Pearlman, 2009) of the Global Geodetic Observing System (GGOS). In its function as an official analysis center of the International Laser Ranging Service (ILRS), DGFI develops and maintains software to process SLR observations, the "DGFI Orbit and Geodetic parameter estimation Software" (DOGS). The software is used to analyze SLR observations and to compute multi-satellite solutions. To benefit from different orbit characteristics (e.g., inclination and altitude), a solution using ten different spherical satellites (ETALON1/2, LAGEOS1/2, STELLA, STARLETTE, AJISAI, LARETS, LARES, BLITS) covering 12 years of observations is computed. The satellites are relatively weighted using variance component estimation (VCE). The obtained weights are analyzed with respect to the potential of each satellite to monitor changes in the Earth's geometry, rotation, and gravity field. The estimated parameters (station coordinates and EOPs) are validated against official time series of the IERS. The Stokes coefficients are compared to recent gravity field solutions.
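
    A minimal sketch of iterative variance component estimation as used for such relative weighting: two synthetic observation groups ("satellites") with different noise levels are combined in one least-squares adjustment, and each group's variance component is re-estimated from its weighted residuals and redundancy until convergence. The data are synthetic; the DOGS software implements the full SLR case.

        import numpy as np

        rng = np.random.default_rng(1)
        x_true = np.array([1.0, -2.0, 0.5])
        groups = []
        for sigma in (0.02, 0.10):                   # two "satellites", different noise
            A = rng.normal(size=(50, 3))
            l = A @ x_true + sigma * rng.normal(size=50)
            groups.append((A, l))

        s2 = [1.0, 1.0]                              # initial variance components
        for _ in range(10):
            N = sum(A.T @ A / s for (A, l), s in zip(groups, s2))
            b = sum(A.T @ l / s for (A, l), s in zip(groups, s2))
            x = np.linalg.solve(N, b)                # combined adjustment
            Ninv = np.linalg.inv(N)
            for k, ((A, l), s) in enumerate(zip(groups, s2)):
                v = A @ x - l                        # group residuals
                r = len(l) - np.trace(Ninv @ (A.T @ A)) / s   # group redundancy
                s2[k] = float(v @ v / r)             # updated variance component
        print("estimated sigmas:", np.sqrt(s2))      # converges toward (0.02, 0.10)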

  3. Estimating the Earth's gravity field using a multi-satellite SLR solution

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Stefka, Vojtech; Müller, Horst; Gerstl, Michael

    2013-04-01

    Satellite Laser Ranging (SLR) is the only technique that can determine station coordinates, Earth Orientation Parameters (EOPs), and Stokes coefficients of the Earth's gravity field in one common adjustment. These parameters form the so-called "three pillars" (Plag & Pearlman, 2009) of the Global Geodetic Observing System (GGOS). In its function as an official analysis center of the International Laser Ranging Service (ILRS), DGFI develops and maintains software to process SLR observations, the "DGFI Orbit and Geodetic parameter estimation Software" (DOGS). The software is used to analyze SLR observations and to compute multi-satellite solutions. To benefit from different orbit characteristics (e.g., inclination and altitude), a solution using ten different spherical satellites (ETALON1/2, LAGEOS1/2, STELLA, STARLETTE, AJISAI, LARETS, LARES, BLITS) covering 12 years of observations is computed. The satellites are relatively weighted using variance component estimation (VCE). The obtained weights are analyzed with respect to the potential of each satellite to monitor changes in the Earth's geometry, rotation, and gravity field. The estimated parameters (station coordinates and EOPs) are validated against official time series of the IERS. The obtained Stokes coefficients are compared to recent gravity field solutions and discussed in detail.

  4. Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres

    NASA Astrophysics Data System (ADS)

    Judge, Philip G.

    2017-12-01

    We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted, but, pressed by the need for speed and stability, a "second-order escape probability" scheme is implemented within the framework of the RH92 method, in which the frequency and angle integrals are carried out analytically. This minimizes the computational work required, at some expense in numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method appears adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between an evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the present method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
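
    A one-zone caricature of the escape-probability idea invoked above: if a mean escape probability P_esc replaces the frequency and angle integrals, the two-level NLTE source function takes the closed form S = eps B / (eps + (1 - eps) P_esc), with no formal-solution quadrature needed. The tau-dependence of P_esc below is an assumed simple form, not the paper's second-order scheme.

        import numpy as np

        # Two-level atom: S = (1 - eps) J + eps B, with J ~ (1 - P_esc) S,
        # which solves to S = eps B / (eps + (1 - eps) P_esc).
        def source_function(eps, tau, B=1.0):
            p_esc = 1.0 / (1.0 + tau)   # simple assumed escape probability
            return eps * B / (eps + (1.0 - eps) * p_esc)

        # S -> eps*B at small tau (photons escape) and S -> B at large tau.
        for tau in (0.1, 1.0, 100.0, 1e4):
            print(f"tau = {tau:8.1f}  S/B = {source_function(1e-4, tau):.4e}")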

  5. Swarm satellite mission scheduling & planning using Hybrid Dynamic Mutation Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Zixuan; Guo, Jian; Gill, Eberhard

    2017-08-01

    Space missions have traditionally been controlled by operators from a mission control center. Given the increasing number of satellites in some space missions, generating a command list for multiple satellites can be time-consuming and inefficient. Developing multi-satellite, onboard mission scheduling and planning techniques is therefore a key research field for future space mission operations. In this paper, an improved Genetic Algorithm (GA) using a new mutation strategy is proposed as a mission scheduling algorithm. This new mutation strategy, called Hybrid Dynamic Mutation (HDM), combines the advantages of the dynamic and the adaptive mutation strategies, overcoming weaknesses such as early convergence and long computing times, and thereby makes the standard GA more efficient and accurate in dealing with complex missions. HDM-GA shows excellent performance in solving both unconstrained and constrained test functions. Experiments using HDM-GA to simulate a multi-satellite mission scheduling problem demonstrate that both the computation-time and success-rate mission requirements can be met. The results of a comparative test between HDM-GA and three other mutation strategies also show that HDM has outstanding performance in terms of speed and reliability.
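
    The abstract does not spell out the HDM rule, so the following is only a plausible sketch of a hybrid of the two ingredients it names: a generation-dependent ("dynamic") base mutation rate modulated by a fitness-dependent ("adaptive") factor that mutates above-average individuals less. All functional forms and parameter values are assumptions for illustration.

        import random

        def hybrid_mutation_rate(gen, max_gen, fit, fit_avg, fit_max,
                                 p_max=0.25, p_min=0.01):
            # dynamic part: base rate decays over generations
            p_dynamic = p_max - (p_max - p_min) * gen / max_gen
            # adaptive part: scale down for above-average individuals
            if fit >= fit_avg and fit_max > fit_avg:
                return p_dynamic * (fit_max - fit) / (fit_max - fit_avg)
            return p_dynamic

        def mutate(bits, p):
            return [b ^ (random.random() < p) for b in bits]   # bit-flip mutation

        individual = [1, 0, 1, 1, 0, 0, 1, 0]
        p = hybrid_mutation_rate(gen=10, max_gen=100, fit=0.6,
                                 fit_avg=0.5, fit_max=0.9)
        print(p, mutate(individual, p))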

  6. QCM Thermo-Gravimetric Analysis (QTGA) Comparisons

    NASA Technical Reports Server (NTRS)

    Rosecrans, Glenn; Meadows, George

    2004-01-01

    The ASTM E-1559 apparatus has been used for years at NASA/Goddard Space Flight Center (GSFC) to determine in situ outgassing rate information, as well as pertinent in situ TML and multiple VCM values. The apparatus also affords the opportunity to experimentally compute the evaporation rates of molecular species that are re-emitted as the Quartz Crystal Microbalances (QCMs) are gradually warmed at a controlled temperature. Typically, the molecular mass that accumulates onto the test QCMs is a compilation of species that outgas from the sample, according to their respective activation energies and the desorption processes the sample undergoes at the various tested temperatures. It has been speculated that if there is too much molecular buildup of condensed water vapor (ice) on the QCM crystal, a significantly higher temperature is needed to break these "ice" bonds. ASTM E-1559 data plots are used to demonstrate the thermogravimetric effects of water and other miscible molecular species at various water/ice thicknesses and different evaporation rates.

  7. Applying Molecular Bonding Concepts to the Solid State

    NASA Astrophysics Data System (ADS)

    Dunnington, Benjamin D.

    In this thesis, we describe the extension and application of Natural Bond Orbital (NBO) analysis to periodic systems. This enables the translation of rigorous quantum mechanical calculations of solid systems into the localized lone pairs and two-center bonds of Lewis structures. Such localized bonding descriptions form the basic language of chemistry, and applying these ideas to solids allows complex phenomena in bulk systems to be understood using readily accessible concepts from molecular science. In addition to the algorithmic adjustments needed to account for periodic boundary conditions in the NBO procedure, we also discuss methodology to interface the ubiquitous plane-wave basis sets of the solid state with the atom-centered basis functions needed as input for NBO analysis. We describe one method using projection of the plane-wave eigenstates, and a second, projection-free method that involves the direct calculation of matrix elements of the plane-wave Hamiltonian in an atom-centered basis. Because many localized, post-computational analysis techniques rely on an atom-centered description of the orbitals, these interfaces will have applicability beyond our NBO development. An ideal area for the application of such molecular descriptions of periodic systems is heterogeneous catalysis, where reactants from a gas/liquid phase react on a solid catalyst surface. Previous studies of these systems have originated from the delocalized perspective of the bulk catalyst. NBO provides an explicit description of the perturbative effect of the catalyst on the covalent bonds of the reactant, which is correlated with the catalytic activity of the material. Such a shift to an adsorbate-focused description of surface reactivity will enable an understanding of catalysis across a variety of materials.
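
    A minimal sketch of the projection step mentioned above: each eigenstate |psi_n> is expanded in a non-orthogonal atom-centered basis {phi_mu} by least squares, c_n = S^{-1} p_n, where S_{mu,nu} = <phi_mu|phi_nu> and p_{mu,n} = <phi_mu|psi_n>. The matrices below are random stand-ins for the actual overlap integrals.

        import numpy as np

        rng = np.random.default_rng(2)
        n_ao, n_bands = 6, 4
        X = rng.normal(size=(n_ao, 8))
        S = X @ X.T / 8 + np.eye(n_ao) * 0.1   # symmetric positive-definite AO overlap
        P = rng.normal(size=(n_ao, n_bands))   # <phi_mu | psi_n> stand-ins

        C = np.linalg.solve(S, P)              # AO coefficients of each band
        # "Spilling" check: p_n^T S^{-1} p_n ~ 1 when the AO basis spans psi_n
        completeness = np.diag(C.T @ S @ C)
        print(completeness)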

  8. MultiDK: A Multiple Descriptor Multiple Kernel Approach for Molecular Discovery and Its Application to Organic Flow Battery Electrolytes.

    PubMed

    Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán

    2017-04-24

    We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and the accuracy of molecular property prediction, and we apply it to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type, as opposed to single-type, descriptors, we obtain more relevant features for machine learning. Following the principle of the "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel for a set of binary descriptors and a linear kernel for a set of non-binary descriptors. Using MultiDK, we achieve an average performance of r² = 0.92 on a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
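
    A minimal sketch of the combined-kernel construction described above: a Tanimoto kernel on binary fingerprints plus a linear kernel on non-binary descriptors, summed and used in kernel ridge regression. The data, the equal weighting of the two kernels, and the ridge regressor are illustrative assumptions, not the paper's exact pipeline.

        import numpy as np

        def tanimoto(A, B):
            # Tanimoto similarity for binary fingerprint matrices
            inter = A @ B.T
            norm = A.sum(1)[:, None] + B.sum(1)[None, :] - inter
            return inter / np.maximum(norm, 1)

        rng = np.random.default_rng(3)
        Xb = (rng.random((30, 64)) < 0.2).astype(float)   # binary fingerprints
        Xc = rng.normal(size=(30, 5))                     # non-binary descriptors
        y = Xb[:, 0] + 0.5 * Xc[:, 0] + 0.1 * rng.normal(size=30)  # toy property

        K = tanimoto(Xb, Xb) + Xc @ Xc.T                  # multiple-kernel sum
        alpha = np.linalg.solve(K + 1e-2 * np.eye(30), y) # kernel ridge regression

        def predict(xb, xc):
            return (tanimoto(xb, Xb) + xc @ Xc.T) @ alpha

        print(predict(Xb[:3], Xc[:3]), y[:3])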

  9. SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran

    NASA Astrophysics Data System (ADS)

    Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.

    2008-03-01

    We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more.
    Program summary
    Title of program: SMMP
    Catalogue identifier: ADOJ_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    Programming language used: FORTRAN, Python
    No. of lines in distributed program, including test data, etc.: 52 105
    No. of bytes in distributed program, including test data, etc.: 599 150
    Distribution format: tar.gz
    Computer: Platform independent
    Operating system: OS independent
    RAM: 2 Mbytes
    Classification: 3
    Does the new version supersede the previous version?: Yes
    Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins.
    Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical as well as generalized ensembles.
    Reasons for new version: API changes and increased functionality.
    Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings.
    Restrictions: The consumed CPU time increases with the size of the protein molecule.
    Running time: Depends on the size of the simulated molecule.

  10. MULTI: a shared memory approach to cooperative molecular modeling.

    PubMed

    Darden, T; Johnson, P; Smith, H

    1991-03-01

    A general-purpose molecular modeling system, MULTI, based on the UNIX shared memory and semaphore facilities for interprocess communication, is described. In addition to the normal querying or monitoring of geometric data, MULTI also provides processes for manipulating conformations and for displaying peptide or nucleic acid ribbons, Connolly surfaces, close nonbonded contacts, crystal-symmetry-related images, least-squares superpositions, and so forth. This paper outlines the basic techniques used in MULTI to ensure cooperation among these specialized processes, and then describes how they can work together to provide a flexible modeling environment.
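
    MULTI itself is built on UNIX System V shared memory and semaphores; the following shows the same cooperation pattern with Python's multiprocessing primitives: one process writes coordinates into a shared segment, another waits on a semaphore and then reads them. This illustrates the IPC pattern only, not MULTI's code.

        import numpy as np
        from multiprocessing import Process, Semaphore, shared_memory

        def writer(shm_name, sem):
            shm = shared_memory.SharedMemory(name=shm_name)
            coords = np.ndarray((4, 3), dtype=np.float64, buffer=shm.buf)
            coords[:] = np.arange(12).reshape(4, 3)   # "manipulate a conformation"
            shm.close()
            sem.release()                             # signal: new conformation ready

        def monitor(shm_name, sem):
            sem.acquire()                             # wait for the writer
            shm = shared_memory.SharedMemory(name=shm_name)
            coords = np.ndarray((4, 3), dtype=np.float64, buffer=shm.buf)
            print("bond 0-1 length:", np.linalg.norm(coords[1] - coords[0]))
            shm.close()

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True, size=4 * 3 * 8)
            sem = Semaphore(0)
            procs = [Process(target=writer, args=(shm.name, sem)),
                     Process(target=monitor, args=(shm.name, sem))]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            shm.close()
            shm.unlink()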

  11. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for the human-centered design, development, and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. The project will improve human-centered analysis, design, and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods, and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  12. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon be unable to satisfy the per-core job memory needs of the current single-core processing model in High Energy Physics. In addition, an ever-increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry, and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs control over a larger quantum of resources, since multi-core-aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of data/workflow management (e.g., I/O caching and local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to standard single-core processing workflows.
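
    A minimal sketch of the memory-sharing idea behind such multi-core jobs: load the large read-only data (geometry, conditions) once in a parent process, then fork workers that reference those pages copy-on-write instead of duplicating them per core. Sizes and names are illustrative, not the CMS framework.

        import multiprocessing as mp

        # Stand-in for ~50 MB of read-only conditions/geometry data
        conditions = bytes(50 * 1024 * 1024)

        def process_events(worker_id, n_events):
            # Reads of `conditions` touch the parent's pages; no per-core copy.
            checksum = sum(conditions[:1024])
            print(f"worker {worker_id}: processed {n_events} events (checksum {checksum})")

        if __name__ == "__main__":
            mp.set_start_method("fork")   # POSIX; "spawn" would lose the sharing
            workers = [mp.Process(target=process_events, args=(i, 1000))
                       for i in range(4)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()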

  13. MinOmics, an Integrative and Immersive Tool for Multi-Omics Analysis.

    PubMed

    Maes, Alexandre; Martinez, Xavier; Druart, Karen; Laurent, Benoist; Guégan, Sean; Marchand, Christophe H; Lemaire, Stéphane D; Baaden, Marc

    2018-06-21

    Proteomic and transcriptomic technologies have produced massive biological datasets whose interpretation requires sophisticated computational strategies; efficient and intuitive real-time analysis remains challenging. We use proteomic data on 1417 proteins of the green microalga Chlamydomonas reinhardtii to investigate the physicochemical parameters governing the selectivity of three cysteine-based redox post-translational modifications (PTMs): glutathionylation (SSG), nitrosylation (SNO), and disulphide bonds (SS) reduced by thioredoxins. We aim to understand the underlying molecular mechanisms and structural determinants through integration of redox proteome data from the gene to the structural level. Our interactive visual analytics approach, on an 8.3 m² display wall of 25 MPixel resolution, features stereoscopic three-dimensional (3D) representation performed by UnityMol WebGL; virtual reality headsets complement the range of usage configurations for fully immersive tasks. Our experiments confirm that fast access to a rich cross-linked database is necessary for immersive analysis of structural data. We emphasize the possibility of displaying complex data structures and relationships in 3D, intrinsic to molecular structure visualization but less common in omics-network analysis. Our setup is powered by MinOmics, an integrated analysis pipeline and visualization framework dedicated to multi-omics analysis. MinOmics integrates data from various sources into a materialized physical repository. We evaluate its performance, a design criterion for the framework.

  14. Multi-species ion transport in ICF relevant conditions

    NASA Astrophysics Data System (ADS)

    Vold, Erik; Kagan, Grigory; Simakov, Andrei; Molvig, Kim; Yin, Lin; Albright, Brian

    2017-10-01

    Classical transport theory based on Chapman-Enskog methods provides self-consistent approximations for the kinetic fluxes of mass, heat, and momentum for each ion species in a multi-ion plasma characterized by a small Knudsen number. A numerical method for solving the classic forms of multi-ion transport, self-consistently including heat and species mass fluxes relative to the center of mass, is given in [Kagan-Baalrud, arXiv '16], and similar transport coefficients result from recent derivations [Simakov-Molvig, PoP, '16]. We have implemented a combination of these methods in a standalone test code and in xRage, an adaptive-mesh radiation hydrodynamics code, at LANL. Transport mixing is examined between a DT fuel and a CH capsule shell under ICF conditions. The four ion species develop individual self-similar density profiles under the assumption of P-T equilibrium in 1D, and show interesting early-time transient pressure and center-of-mass velocity behavior when P-T equilibrium is not enforced. Some 2D results are explored to better understand transport mix in combination with convective flow driven by macroscopic fluid instabilities at the fuel-capsule interface. Early transient and some 2D behaviors of the fluid transport are compared to kinetic code results. Work performed under the auspices of the U.S. DOE by LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Advanced Simulation and Computing (ASC) Program.

  15. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
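
    A toy of the core SPH kernel summation whose memory-access pattern such studies tune per architecture: the density at particle i is the kernel-weighted sum over neighbors, rho_i = sum_j m W(|x_i - x_j|, h). The brute-force O(N^2) pairing and the 1-D cubic spline kernel below are illustrative; production codes use neighbor lists.

        import numpy as np

        def w_cubic(q):
            # standard cubic spline kernel shape (1-D normalization applied below)
            return np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2, 0.25 * (2 - q)**3, 0.0))

        h, m = 0.1, 1.0                            # smoothing length, particle mass
        x = np.linspace(0.0, 1.0, 51)              # particle positions (1-D)
        q = np.abs(x[:, None] - x[None, :]) / h    # pairwise scaled distances
        rho = (2.0 / (3.0 * h)) * m * w_cubic(q).sum(axis=1)
        print(rho[:5])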

  16. Structural, dynamic and photophysical properties of a fluorescent dye incorporated in an amorphous hydrophobic polymer bundle.

    PubMed

    De Mitri, N; Prampolini, G; Monti, S; Barone, V

    2014-08-21

    The properties of a low-molecular-weight organic dye, namely 4-naphthyloxy-1-methoxy-2,2,6,6-tetramethylpiperidine, covalently bound to an apolar polyolefin were investigated by means of a multi-level approach combining classical molecular dynamics simulations, based on purposely parameterized force fields, with quantum mechanical calculations based on density functional theory (DFT) and its time-dependent extension (TD-DFT). The structure and dynamics of the dye in its embedding medium were analyzed and discussed, taking into account the entangling effect of the surrounding polymer and comparing the results to those obtained in a different environment, i.e., toluene solution. Finally, the influence of the long-lived cages found in the polymeric embedding on the photophysical properties was investigated, in terms of the dye's slow and fast internal dynamics, by comparing computed IR and UV spectra with their experimental counterparts.

  17. Perspective: Advanced particle imaging

    DOE PAGES

    Chandler, David W.; Houston, Paul L.; Parker, David H.

    2017-05-26

    Since the first ion imaging experiment, which demonstrated the capability of collecting an image of the photofragments from a unimolecular dissociation event and analyzing that image to obtain the three-dimensional velocity distribution of the fragments, the efficacy and breadth of application of the ion imaging technique have continued to improve and grow. With the addition of velocity mapping, ion/electron centroiding, and slice imaging techniques, the versatility and velocity resolution have become unmatched. Recent improvements in molecular beam, laser, sensor, and computer technology are allowing even more advanced particle imaging experiments, and eventually we can expect multi-mass imaging with covariance and full coincidence capability on a single-shot basis at repetition rates in the kilohertz range. This progress should further enable "complete" experiments, the holy grail of molecular dynamics, in which all quantum numbers of the reactants and products of a bimolecular scattering event are fully determined and even under our control.

  18. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepburn, I.; De Schutter, E.

    Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space in either a discrete-time or discrete-space framework, which has led in recent years to the development of parallel methods that can take advantage of the power of modern supercomputers. We systematically test components of stochastic reaction-diffusion operator splitting suggested in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy at minimal performance cost. We test a range of models in small-scale MPI simulations, from simple diffusion models to realistic biological models, and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
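
    A deterministic 1-D toy of reaction-diffusion operator (Strang) splitting: advance diffusion a half step, reaction a full step, then diffusion another half step. The paper's method is stochastic (SSA-based) on tetrahedral meshes with MPI partitioning; this only illustrates the splitting itself.

        import numpy as np

        D, dx, dt, k = 1.0, 0.1, 0.002, 5.0   # diffusivity, spacing, step, decay rate

        def diffuse(u, dt):
            # explicit Euler diffusion step (stable here: D*dt/dx^2 = 0.2 < 0.5)
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
            return u + D * dt * lap

        def react(u, dt):
            return u * np.exp(-k * dt)        # exact solution of du/dt = -k u

        u = np.zeros(100)
        u[50] = 1.0                           # point source
        for _ in range(500):                  # Strang splitting: D/2, R, D/2
            u = diffuse(u, dt / 2)
            u = react(u, dt)
            u = diffuse(u, dt / 2)
        print("mass remaining:", u.sum())     # ~ exp(-k * 1.0)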

  19. Temperature- and composition-dependent hydrogen diffusivity in palladium from statistically-averaged molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xiaowang; Heo, Tae Wook; Wood, Brandon C.

    Solid-state hydrogen storage materials undergo complex phase transformations whose kinetics is often limited by hydrogen diffusion. Among metal hydrides, palladium hydride undergoes a diffusional phase transformation upon hydrogen uptake, during which the hydrogen diffusivity varies with hydrogen composition and temperature. Here we perform robust statistically-averaged molecular dynamics simulations to obtain a well-converged analytical expression for hydrogen diffusivity in bulk palladium that is valid throughout all stages of the reaction. Our studies confirm significant dependence of the diffusivity on composition and temperature that elucidates key trends in the available experimental measurements. Whereas at low hydrogen compositions a single process dominates, at high hydrogen compositions diffusion is found to exhibit behavior consistent with multiple hopping barriers. Further analysis, supported by nudged elastic band computations, suggests that the multi-barrier diffusion can be interpreted as two distinct mechanisms corresponding to hydrogen-rich and hydrogen-poor local environments.
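
    A minimal sketch of the two analysis steps such studies rest on: extracting D from the slope of the mean-squared displacement (MSD(t) ≈ 6Dt in 3-D) and fitting an Arrhenius law D = D0 exp(-Ea/(kB T)) across temperatures. All numbers below are synthetic placeholders, not the paper's results.

        import numpy as np

        kB = 8.617e-5                                   # Boltzmann constant, eV/K

        # (1) diffusivity from the MSD slope
        t = np.linspace(0.0, 100.0, 200)                # time, ps
        msd = 6 * 4.0e-3 * t + 0.02 * np.random.default_rng(4).normal(size=t.size)
        D = np.polyfit(t, msd, 1)[0] / 6.0              # A^2/ps, from MSD ~ 6 D t
        print(f"D = {D:.4g} A^2/ps")

        # (2) Arrhenius fit across temperatures: ln D vs 1/T has slope -Ea/kB
        T = np.array([400.0, 500.0, 600.0, 700.0])      # K
        Ds = 1.0e-2 * np.exp(-0.23 / (kB * T))          # synthetic diffusivities
        slope, intercept = np.polyfit(1.0 / T, np.log(Ds), 1)
        print(f"Ea = {-slope * kB:.3f} eV, D0 = {np.exp(intercept):.3g} A^2/ps")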

  20. Temperature- and composition-dependent hydrogen diffusivity in palladium from statistically-averaged molecular dynamics

    DOE PAGES

    Zhou, Xiaowang; Heo, Tae Wook; Wood, Brandon C.; ...

    2018-03-09

    Solid-state hydrogen storage materials undergo complex phase transformations whose kinetics is often limited by hydrogen diffusion. Among metal hydrides, palladium hydride undergoes a diffusional phase transformation upon hydrogen uptake, during which the hydrogen diffusivity varies with hydrogen composition and temperature. Here we perform robust statistically-averaged molecular dynamics simulations to obtain a well-converged analytical expression for hydrogen diffusivity in bulk palladium that is valid throughout all stages of the reaction. Our studies confirm significant dependence of the diffusivity on composition and temperature that elucidates key trends in the available experimental measurements. Whereas at low hydrogen compositions a single process dominates, at high hydrogen compositions diffusion is found to exhibit behavior consistent with multiple hopping barriers. Further analysis, supported by nudged elastic band computations, suggests that the multi-barrier diffusion can be interpreted as two distinct mechanisms corresponding to hydrogen-rich and hydrogen-poor local environments.
