Sample records for parallel computational nanotechnology

  1. Three-Dimensional Nanobiocomputing Architectures With Neuronal Hypercells

    DTIC Science & Technology

    2007-06-01

    ...Neumann architectures, and CMOS fabrication. Novel solutions of massively parallel distributed computing and processing (pipelined due to systolic... and processing platforms utilizing molecular hardware within an enabling organization and architecture. The design technology is based on utilizing a... Microsystems and Nanotechnologies investigated a novel 3D3 (Hardware Software Nanotechnology) technology to design super-high performance computing...

  2. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before it was manufactured.
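
    As an aside on what the scalable classical kernels mentioned above look like in practice, the sketch below (our illustration, not from the talk) distributes a Lennard-Jones pair-energy sum over worker processes; the potential parameters, atom count, and process count are arbitrary assumptions.

```python
# Minimal sketch (illustrative, not from the talk): a parallel
# Lennard-Jones energy sum, the kind of classical molecular-mechanics
# kernel that must scale on parallel systems.
import numpy as np
from multiprocessing import Pool

EPS, SIGMA = 1.0, 1.0  # assumed reduced units

def lj_row_energy(args):
    """Sum the pair energies (i, j > i) for one row i."""
    i, coords = args
    d = coords[i + 1:] - coords[i]   # vectors to later atoms
    r2 = (d * d).sum(axis=1)         # squared distances
    s6 = (SIGMA ** 2 / r2) ** 3
    return float(np.sum(4.0 * EPS * (s6 ** 2 - s6)))

def total_energy(coords, nproc=4):
    """Distribute the O(N^2) pair sum over worker processes."""
    tasks = [(i, coords) for i in range(len(coords) - 1)]
    with Pool(nproc) as pool:
        return sum(pool.map(lj_row_energy, tasks))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    atoms = rng.uniform(0.0, 10.0, size=(200, 3))  # toy configuration
    print("E_LJ =", total_energy(atoms))
```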

  3. Highly-Parallel, Highly-Compact Computing Structures Implemented in Nanotechnology

    NASA Technical Reports Server (NTRS)

    Crawley, D. G.; Duff, M. J. B.; Fountain, T. J.; Moffat, C. D.; Tomlinson, C. D.

    1995-01-01

    In this paper, we describe work in which we are evaluating how the evolving properties of nano-electronic devices could best be utilized in highly parallel computing structures. Because of their combination of high performance, low power, and extreme compactness, such structures would have obvious applications in spaceborne environments, both for general mission control and for on-board data analysis. However, the anticipated properties of nano-devices mean that the optimum architecture for such systems is by no means certain. Candidates include single instruction multiple datastream (SIMD) arrays, neural networks, and multiple instruction multiple datastream (MIMD) assemblies.
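
    Of the candidate architectures above, the SIMD array is the easiest to caricature in software: one instruction is broadcast and every processing element applies it to its own local datum. A minimal sketch (our illustration, assuming a toy 8x8 toroidal mesh and a neighbour-averaging instruction):

```python
# Hedged sketch (our illustration, not from the paper): a software
# caricature of a SIMD processor array. One "instruction" is broadcast
# to every processing element (array cell), which applies it to its own
# local datum; np.roll gives a toroidal (wrap-around) mesh.
import numpy as np

def simd_step(grid: np.ndarray) -> np.ndarray:
    """Every PE replaces its value with the mean of its four neighbours,
    a typical smoothing instruction on a SIMD mesh."""
    up = np.roll(grid, 1, axis=0)
    down = np.roll(grid, -1, axis=0)
    left = np.roll(grid, 1, axis=1)
    right = np.roll(grid, -1, axis=1)
    return (up + down + left + right) / 4.0  # same op, all PEs at once

pe_array = np.random.default_rng(1).random((8, 8))  # toy 8x8 PE mesh
for _ in range(3):  # three broadcast instructions
    pe_array = simd_step(pe_array)
print(pe_array.round(3))
```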

  4. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponentially sized masks, with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.
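
    The complexity trade described in this abstract (exponential one-time preprocessing, fast per-instance queries) has a simple software analogue. The sketch below is our illustration, with subset-sum standing in for the NP-hard problem: the exponentially large "mask" is built once, after which each instance is answered by a single lookup.

```python
# Illustrative analogue (assumed details, not the paper's optical setup):
# subset-sum with an exponential-size precomputed "mask" and fast queries.
from itertools import combinations

def build_mask(weights):
    """Exponential preprocessing: record every achievable subset sum.
    This table plays the role of the precomputed optical masks."""
    achievable = {0}
    for k in range(1, len(weights) + 1):
        for combo in combinations(weights, k):
            achievable.add(sum(combo))
    return achievable

def query(mask, target):
    """Per-instance step: a single membership lookup."""
    return target in mask

weights = [3, 7, 12, 25, 31]
mask = build_mask(weights)  # 2^n work, done once
print(query(mask, 38))      # True: 7 + 31
print(query(mask, 4))       # False
```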

  5. High throughput optical lithography by scanning a massive array of bowtie aperture antennas at near-field

    PubMed Central

    Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.

    2015-01-01

    Optical lithography, the enabling process for defining features, has been widely used in the semiconductor industry and many other nanotechnology applications. Advances in nanotechnology require the development of high-throughput optical lithography capabilities to overcome the optical diffraction limit and meet ever-decreasing device dimensions. We report our recent experimental advancements to scale up diffraction-unlimited optical lithography on a massive scale using the near-field nanolithography capabilities of bowtie apertures. A record number of near-field optical elements, an array of 1,024 bowtie antenna apertures, is simultaneously employed to generate a large number of patterns by carefully controlling their working distances over the entire array using an optical gap metrology system. Our experimental results reiterate the ability of massively parallel near-field devices to achieve high-throughput optical nanolithography, which can be promising for many important nanotechnology applications such as computation, data storage, communication, and energy. PMID:26525906

  6. Scaling Properties of Algorithms in Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    At the present time, several technologies are pressing the limits of microminiature manufacturing. In semiconductor technology, for example, the Intel Pentium Pro (which is used in the Department of Energy's ASCI 'red' parallel supercomputer system) and the DEC Alpha 21164 (which is used in the CRAY T3E) are both fabricated using 0.35 micron process technology. Recently, Texas Instruments (TI) announced the availability of 0.25 micron technology chips by the end of 1996 and plans to have 0.18 micron devices in production within two years. However, some significant challenges lie down the road. These include the skyrocketing cost of manufacturing plants, the 0.1 micron foreseeable limit of the photolithography process, quantum effects, data communication bandwidth limitations, heat dissipation, and others. Some related microminiature technologies include micro-electromechanical systems (MEMS), opto-electronic devices, quantum computing, biological computing, and others. All of these technologies require the fabrication of devices whose sizes are approaching the nanometer level. As such, they are often collectively referred to as 'nanotechnology'. Clearly, nanotechnology in this general sense is destined to be a very important technology of the 21st century. The ultimate dream in this arena is 'molecular nanotechnology', in other words the fabrication of devices and materials with most or all atoms and molecules in a pre-programmed position, possibly placed there by 'nano-robots'. This futuristic capability will probably not be achieved for at least two decades. However, it appears that somewhat less ambitious variations of molecular nanotechnology, such as devices and materials based on 'buckyballs' and 'nanotubes', may be realized significantly sooner, possibly within ten years or so. Even at the present time, semiconductor devices are approaching the regime where quantum chemical effects must be considered in design.

  7. Computational Nanotechnology of Molecular Materials, Electronics, and Actuators with Carbon Nanotubes and Fullerenes

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Menon, Madhu; Cho, Kyeongjae; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The role of computational nanotechnology in developing the next generation of multifunctional materials, molecular-scale electronic and computing devices, sensors, actuators, and machines is described through a brief review of enabling computational techniques and a few recent examples derived from computer simulations of carbon-nanotube-based molecular nanotechnology.

  8. Perceptions of risk from nanotechnologies and trust in stakeholders: a cross sectional study of public, academic, government and business attitudes.

    PubMed

    Capon, Adam; Gillespie, James; Rolfe, Margaret; Smith, Wayne

    2015-04-26

    Policy makers and regulators are constantly required to make decisions despite the existence of substantial uncertainty regarding the outcomes of their proposed decisions. Understanding stakeholder views is an essential part of addressing this uncertainty, as it provides insight into the possible social reactions and tolerance of unpredictable risks. In the field of nanotechnology, large uncertainties exist regarding the real and perceived risks this technology may have on society. Better evidence is needed to confront this issue. We undertook a computer-assisted telephone interviewing (CATI) survey of the Australian public and a parallel survey of those involved in nanotechnology from the academic, business and government sectors. Analysis included comparisons of proportions and logistic regression techniques. We explored perceptions of nanotechnology risks both to health and in a range of products. We examined views on four trust actors. The general public's perception of risk was significantly higher than that expressed by other stakeholders. The public bestows less trust in certain trust actors than do academics or government officers, giving its greatest trust to scientists. Higher levels of public trust were generally associated with lower perceptions of risk. Nanotechnology applications in food and cosmetics/sunscreens were considered riskier irrespective of stakeholder, while familiarity with nanotechnology was associated with a reduced risk perception. Policy makers should consider the disparities in risk and trust perceptions between the public and influential stakeholders, placing greater emphasis on risk communication and the uncertainties of risk assessment in these areas of higher concern. Scientists, being the most trusted group, are well placed to communicate the risks of nanotechnologies to the public.

  9. PREFACE: Proceedings of the International Conference on Nanoscience and Nanotechnology (Melbourne, 25-29 February 2008)

    NASA Astrophysics Data System (ADS)

    Ford, Mike; Russo, Salvy; Gale, Julian

    2009-04-01

    The International Conference on Nanoscience and Nanotechnology is held biennially in Australia, supported by the Australian Research Council and the Australian Nanotechnology Network. The purpose of the conference is to provide a forum for discussion of all aspects of nanoscience and nanotechnology, to give young Australian researchers a chance to meet and engage with leading global scientists in the field, and to set up the exchange mechanisms and collaborations that will enable the field to continue to develop and flourish. The second conference in this series, co-chaired by Professor Paul Mulvaney and Professor Abid Khan, attracted over eight hundred participants from across academia, industry, government and schools, with 8 plenary talks, 32 invited talks and more than 420 oral and poster papers spread across 6 parallel symposia. These symposia presented the status of international research from nanoelectronics to nanobiotechnology, a stream dedicated to commercialization issues and showcasing Australian success stories, and a final symposium discussing regulatory, environmental and health issues, and the next stage of the nanotechnology roadmap. The development of efficient algorithms and the availability of computing power have seen calculation play a crucial role in the progress of nanoscience and nanotechnology, providing a window onto processes occurring at the molecular level that are not easily accessed by experiment alone. Consequently, a symposium was dedicated to nanocomputation, containing contributions ranging from first-principles atomistic simulations of nanostructures to classical models of nanotube motion. The papers in this special issue are contributions to this symposium, co-chaired by Salvy Russo, Julian Gale and Mike Ford.

  10. Computational Nanotechnology Molecular Electronics, Materials and Machines

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes, and fullerenes at the NASA Ames Research Center. Topics covered include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  11. Toward integration of in vivo molecular computing devices: successes and challenges

    PubMed Central

    Hayat, Sikander; Hinze, Thomas

    2008-01-01

    The computing power unleashed by biomolecule-based massively parallel computational units has been the focus of many interdisciplinary studies that couple state-of-the-art ideas from mathematical logic, theoretical computer science, bioengineering, and nanotechnology to fulfill some computational task. The output can influence, for instance, release of a drug at a specific target, gene expression, or cell population, or be a purely mathematical entity. Analysis of the results of several studies has led to the emergence of a general set of rules concerning the implementation and optimization of in vivo computational units. Taking two recent studies on in vivo computing as examples, we discuss the impact of mathematical modeling and simulation in the field of synthetic biology and on in vivo computing. The impact of the emergence of gene regulatory networks and the potential of proteins acting as “circuit wires” on the problem of interconnecting molecular computing device subunits is also highlighted. PMID:19404433

  12. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer-aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high-density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of the properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  13. RNA nanotechnology for computer design and in vivo computation

    PubMed Central

    Qiu, Meikang; Khisamutdinov, Emil; Zhao, Zhengyi; Pan, Cheryl; Choi, Jeong-Woo; Leontis, Neocles B.; Guo, Peixuan

    2013-01-01

    Molecular-scale computing has been explored since 1989 owing to the foreseeable limitation of Moore's law for silicon-based computation devices. With the potential of massive parallelism, low energy consumption and capability of working in vivo, molecular-scale computing promises a new computational paradigm. Inspired by the concepts from the electronic computer, DNA computing has realized basic Boolean functions and has progressed into multi-layered circuits. Recently, RNA nanotechnology has emerged as an alternative approach. Owing to the newly discovered thermodynamic stability of a special RNA motif (Shu et al. 2011 Nat. Nanotechnol. 6, 658–667 (doi:10.1038/nnano.2011.105)), RNA nanoparticles are emerging as another promising medium for nanodevice and nanomedicine as well as molecular-scale computing. Like DNA, RNA sequences can be designed to form desired secondary structures in a straightforward manner, but RNA is structurally more versatile and more thermodynamically stable owing to its non-canonical base-pairing, tertiary interactions and base-stacking property. A 90-nucleotide RNA can exhibit 4⁹⁰ nanostructures, and its loops and tertiary architecture can serve as a mounting dovetail that eliminates the need for external linking dowels. Its enzymatic and fluorogenic activity creates diversity in computational design. Varieties of small RNA can work cooperatively, synergistically or antagonistically to carry out computational logic circuits. The riboswitch and enzymatic ribozyme activities and its special in vivo attributes offer a great potential for in vivo computation. Unique features in transcription, termination, self-assembly, self-processing and acid resistance enable in vivo production of RNA nanoparticles that harbour various regulators for intracellular manipulation. With all these advantages, RNA computation is promising, but it is still in its infancy. Many challenges still exist. Collaborations between RNA nanotechnologists and computer scientists are necessary to advance this nascent technology. PMID:24000362

  14. RNA nanotechnology for computer design and in vivo computation.

    PubMed

    Qiu, Meikang; Khisamutdinov, Emil; Zhao, Zhengyi; Pan, Cheryl; Choi, Jeong-Woo; Leontis, Neocles B; Guo, Peixuan

    2013-10-13

    Molecular-scale computing has been explored since 1989 owing to the foreseeable limitation of Moore's law for silicon-based computation devices. With the potential of massive parallelism, low energy consumption and capability of working in vivo, molecular-scale computing promises a new computational paradigm. Inspired by the concepts from the electronic computer, DNA computing has realized basic Boolean functions and has progressed into multi-layered circuits. Recently, RNA nanotechnology has emerged as an alternative approach. Owing to the newly discovered thermodynamic stability of a special RNA motif (Shu et al. 2011 Nat. Nanotechnol. 6, 658-667 (doi:10.1038/nnano.2011.105)), RNA nanoparticles are emerging as another promising medium for nanodevice and nanomedicine as well as molecular-scale computing. Like DNA, RNA sequences can be designed to form desired secondary structures in a straightforward manner, but RNA is structurally more versatile and more thermodynamically stable owing to its non-canonical base-pairing, tertiary interactions and base-stacking property. A 90-nucleotide RNA can exhibit 4⁹⁰ nanostructures, and its loops and tertiary architecture can serve as a mounting dovetail that eliminates the need for external linking dowels. Its enzymatic and fluorogenic activity creates diversity in computational design. Varieties of small RNA can work cooperatively, synergistically or antagonistically to carry out computational logic circuits. The riboswitch and enzymatic ribozyme activities and its special in vivo attributes offer a great potential for in vivo computation. Unique features in transcription, termination, self-assembly, self-processing and acid resistance enable in vivo production of RNA nanoparticles that harbour various regulators for intracellular manipulation. With all these advantages, RNA computation is promising, but it is still in its infancy. Many challenges still exist. Collaborations between RNA nanotechnologists and computer scientists are necessary to advance this nascent technology.
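
    The 4⁹⁰ figure quoted in both records is plain combinatorics: four bases per position, 90 independent positions. A one-line magnitude check:

```python
# Sanity check of the combinatorics cited above: four bases per
# position and 90 positions give 4**90 possible sequences.
n_sequences = 4 ** 90
print(f"4^90 = {n_sequences:.3e}")  # about 1.5e54 distinct sequences
```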

  15. National Strategic Computing Initiative Strategic Plan

    DTIC Science & Technology

    2016-07-01

    A.6 National Nanotechnology Initiative... Initiative: https://www.nitrd.gov/nitrdgroups/index.php?title=Big_Data_(BD_SSG); National Nanotechnology Initiative: http://www.nano.gov; Precision... computing. While not limited to neuromorphic technologies, the National Nanotechnology Initiative's first Grand Challenge seeks to achieve brain...

  16. China and the United States--Global partners, competitors and collaborators in nanotechnology development.

    PubMed

    Gao, Yu; Jin, Biyu; Shen, Weiyu; Sinko, Patrick J; Xie, Xiaodong; Zhang, Huijuan; Jia, Lee

    2016-01-01

    The USA and China are two leading countries engaged in nanotechnology research and development, competing for the fruits of this innovative area in a parallel and compatible manner. Understanding the status and developmental prospects of nanotechnology in the USA and China is important for policy-makers to decide nanotechnology priorities and funding, and to explore new ways for global cooperation on key issues. Here we present the nanoscience and nanomedicine research, and the related productivity as measured by publications, patent applications, governmental funding, policies and regulations, institutional translational research, and industrial and enterprise growth in nanotechnology-related fields across China and the USA. The comparison reveals some marked asymmetries in nanotechnology development between China and the USA, which may be helpful for future directions to strengthen nanotechnology collaboration for both countries, and for the world as a whole.

  17. Multiscale Multilevel Approach to Solution of Nanotechnology Problems

    NASA Astrophysics Data System (ADS)

    Polyakov, Sergey; Podryga, Viktoriia

    2018-02-01

    The paper is devoted to a multiscale, multilevel approach for the solution of nanotechnology problems on supercomputer systems. The approach combines continuum mechanics models with Newtonian dynamics for individual particles, spanning three scale levels: macroscopic, mesoscopic and microscopic. For gas-metal technical systems the following models are used. The quasihydrodynamic system of equations is used as the mathematical model at the macrolevel for gas and solid states. The system of Newton equations is used as the mathematical model at the meso- and microlevels; it is written for nanoparticles of the medium and larger particles moving in the medium. The numerical implementation of the approach is based on the method of splitting into physical processes. The quasihydrodynamic equations are solved by the finite volume method on grids of different types. The Newton equations of motion are solved by Verlet integration in each cell of the grid, independently or in groups of connected cells. In the framework of the general methodology, four classes of algorithms and methods for their parallelization are provided. The parallelization uses the principles of geometric parallelism and efficient partitioning of the computational domain. A special dynamic algorithm is used for load balancing across the solvers. The developed approach was tested on the example of nitrogen outflow from a high-pressure balloon into a vacuum chamber through a micronozzle and a microchannel. The obtained results confirm the high efficiency of the developed methodology.
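
    The microscopic level named above, Newtonian dynamics advanced by Verlet integration within each grid cell, reduces to a standard update rule. Below is a generic velocity Verlet sketch (illustrative units and force model, not the authors' code):

```python
# Generic velocity Verlet step (assumed units and force model; this
# illustrates the per-cell particle update the paper describes, it is
# not the authors' code).
import numpy as np

def velocity_verlet(pos, vel, force, mass, dt, steps):
    """Per step: x += v*dt + a*dt^2/2, then v += (a_old + a_new)*dt/2."""
    acc = force(pos) / mass
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * acc * dt ** 2
        new_acc = force(pos) / mass
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return pos, vel

def spring(x):
    """Toy force: harmonic springs pulling particles to the origin."""
    return -1.0 * x

x0 = np.array([[1.0, 0.0], [0.0, 2.0]])
v0 = np.zeros_like(x0)
x, v = velocity_verlet(x0, v0, spring, mass=1.0, dt=0.01, steps=1000)
print(x)
print(v)
```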

  18. DNA nanotechnology: a future perspective

    PubMed Central

    2013-01-01

    In addition to its genetic function, DNA is one of the most distinct and smart self-assembling nanomaterials. DNA nanotechnology exploits the predictable self-assembly of DNA oligonucleotides to design and assemble innovative and highly discrete nanostructures. Highly ordered DNA motifs are capable of providing an ultra-fine framework for the next generation of nanofabrications. The majority of these applications are based upon the complementarity of DNA base pairing: adenine with thymine, and guanine with cytosine. DNA provides an intelligent route for the creation of nanoarchitectures with programmable and predictable patterns. DNA strands twist along one helix for a number of bases before switching to the other helix by passing through a crossover junction. The association of two crossovers keeps the helices parallel and holds them tightly together, allowing the assembly of bigger structures. Because of the DNA molecule's unique and novel characteristics, it can easily be applied in a vast variety of multidisciplinary research areas like biomedicine, computer science, nano/optoelectronics, and bionanotechnology. PMID:23497147

  19. Convergence of nanotechnology with radiation therapy—insights and implications for clinical translation

    PubMed Central

    Chatterjee, Dev Kumar; Wolfe, Tatiana; Lee, Jihyoun; Brown, Aaron P; Singh, Pankaj Kumar; Bhattarai, Shanta Raj; Diagaradjane, Parmeswaran; Krishnan, Sunil

    2014-01-01

    Improvements in accuracy and efficacy in treating tumors with radiation therapy (RT) over the years have been fueled by parallel technological and conceptual advances in imaging and image-guidance techniques, radiation treatment machines, computational methods, and the understanding of the biology of tumor response to RT. Recent advances in our understanding of the hallmarks of cancer and the emergence of strategies to combat these traits of cancer have resulted in an expanding repertoire of targeted therapeutics, many of which can be exploited for enhancing the efficacy of RT. Complementing this advent of new treatment options is the evolution of our knowledge of the interaction between nanoscale materials and human tissues (nanomedicine). As with the changes in RT paradigms when the field has encountered newer and maturing disciplines, the incorporation of nanotechnology innovations into radiation oncology has the potential to refine or redefine its principles and revolutionize its practice. This review provides a summary of the principles, applications, challenges and outlook for the use of metallic nanoparticles in RT. PMID:25279336

  20. Nanotechnology: Principles and Applications

    NASA Astrophysics Data System (ADS)

    Logothetidis, S.

    Nanotechnology is one of the leading scientific fields today since it combines knowledge from the fields of Physics, Chemistry, Biology, Medicine, Informatics, and Engineering. It is an emerging technological field with great potential to lead to breakthroughs that can be applied in real life. Novel nano- and biomaterials and nanodevices are fabricated and controlled by nanotechnology tools and techniques, which investigate and tune the properties, responses, and functions of living and non-living matter at sizes below 100 nm. The application and use of nanomaterials in electronic and mechanical devices, in optical and magnetic components, quantum computing, tissue engineering, and other biotechnologies, with smallest feature widths well below 100 nm, are the economically most important parts of nanotechnology nowadays and presumably in the near future. The number of nanoproducts is rapidly growing, since more and more nanoengineered materials are reaching the global market. The continuous revolution in nanotechnology will result in the fabrication of nanomaterials with properties and functionalities which are going to bring positive changes to the lives of our citizens, be it in health, environment, electronics or any other field. In the energy generation challenge, where the conventional fuel resources cannot remain the dominant energy source, and taking into account the increasing consumption demand and the CO2 emissions, alternative renewable energy sources based on new technologies have to be promoted. Innovative solar cell technologies that utilize nanostructured materials and composite systems, such as organic photovoltaics, offer great technological potential due to attractive properties such as the potential for large-scale, low-cost roll-to-roll manufacturing processes. The advances in nanomaterials necessitate parallel progress of the nanometrology tools and techniques to characterize and manipulate nanostructures. Revolutionary new approaches in nanometrology will be required in the near future, and the existing ones will have to be improved in terms of better resolution and sensitivity for elements and molecular species. Finally, the development of specific guidance for the safety evaluation of nanotechnology products is strongly recommended.

  1. Nanotechnology risk perceptions and communication: emerging technologies, emerging challenges.

    PubMed

    Pidgeon, Nick; Harthorn, Barbara; Satterfield, Terre

    2011-11-01

    Nanotechnology involves the fabrication, manipulation, and control of materials at the atomic level and may also bring novel uncertainties and risks. Potential parallels with other controversial technologies mean there is a need to develop a comprehensive understanding of the processes of public perception of nanotechnology uncertainties, risks, and benefits, alongside related communication issues. Study of perceptions at so early a stage in the development trajectory of a technology is probably unique in the risk perception and communication field. As such, it also brings new methodological and conceptual challenges. These include: dealing with the inherent diversity of the nanotechnology field itself; the unfamiliar and intangible nature of the concept, with few analogies to anchor mental models or risk perceptions; and the ethical and value questions underlying many nanotechnology debates. Utilizing the lens of social amplification of risk, and drawing upon the various contributions to this special issue of Risk Analysis on Nanotechnology Risk Perceptions and Communication, we suggest that nanotechnology may at present be an attenuated hazard. The generic idea of "upstream public engagement" for emerging technologies such as nanotechnology is also discussed, alongside its importance for future work with emerging technologies in the risk communication field.

  2. Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.

    PubMed

    Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori

    2014-06-14

    Recent developments in the fragment molecular orbital (FMO) method, covering theoretical formulation, implementation, and application to nano and biomolecular systems, are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There has been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry and nanotechnology, in which electron correlation effects play vital roles. With the aid of advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including descriptions of dynamical behaviors in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.
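
    At the heart of the FMO scheme is a many-body expansion over fragments, E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J) at the two-body (FMO2) level, where each fragment and fragment-pair energy is an independent, and hence parallelizable, quantum-chemistry run. A schematic of the assembly step with placeholder energies:

```python
# Schematic FMO2 energy assembly: E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J).
# The fragment energies below are placeholders for real ab initio runs,
# which in practice execute independently (and therefore in parallel).
from itertools import combinations

def fmo2_total(monomer_E, dimer_E):
    """Combine fragment (monomer) and fragment-pair (dimer) energies."""
    total = sum(monomer_E.values())
    for (i, j) in combinations(sorted(monomer_E), 2):
        total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
    return total

# Toy numbers (hartree-like placeholders), three fragments:
E_mono = {1: -76.0, 2: -75.8, 3: -76.1}
E_dim = {(1, 2): -151.85, (1, 3): -152.12, (2, 3): -151.93}
print("E_FMO2 =", fmo2_total(E_mono, E_dim))
```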

  3. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
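
    The readout such schemes rely on, the degree of synchronization across a weakly coupled network, can be illustrated with a generic Kuramoto model (our stand-in, not the authors' architecture): input similarity is encoded in the oscillators' natural frequencies, and the order parameter reports whether the network locks.

```python
# Minimal Kuramoto sketch (generic stand-in, not the authors' design):
# weakly coupled phase oscillators whose synchronization level serves
# as a pattern-similarity readout.
import numpy as np

def order_parameter(theta):
    """|<e^{i theta}>|: 1 = fully synchronized, near 0 = incoherent."""
    return abs(np.exp(1j * theta).mean())

def run_network(freqs, coupling=0.5, dt=0.01, steps=5000):
    n = len(freqs)
    theta = np.zeros(n)
    for _ in range(steps):
        # Mean-field Kuramoto update with weak all-to-all coupling.
        pairwise = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (freqs + (coupling / n) * pairwise)
    return order_parameter(theta)

stored = np.array([1.0, 1.1, 0.9, 1.05])               # reference frequencies
close_input = stored + 0.02                             # similar pattern
far_input = stored + np.array([0.0, 0.6, -0.5, 0.4])    # dissimilar pattern
print("similar   ->", run_network(close_input))  # high: network locks
print("dissimilar->", run_network(far_input))    # lower: fails to lock
```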

  4. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network.

    PubMed

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-21

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  5. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    PubMed Central

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-01-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing. PMID:28322262

  6. Nano-technology and nano-toxicology.

    PubMed

    Maynard, Robert L

    2012-01-01

    Rapid developments in nano-technology are likely to confer significant benefits on mankind. But, as with perhaps all new technologies, these benefits are likely to be accompanied by risks, perhaps by new risks. Nano-toxicology is developing in parallel with nano-technology and seeks to define the hazards and risks associated with nano-materials: only when risks have been identified can they be controlled. This article discusses the reasons for concern about the potential effects on health of exposure to nano-materials and relates these to the evidence of the effects on health of the ambient aerosol. A number of hypotheses are proposed and the dangers of adopting unsubstantiated hypotheses are stressed. Nano-toxicology presents many challenges and will need substantial financial support if it is to develop at a rate sufficient to cope with developments in nano-technology.

  7. Nano-technology and nano-toxicology

    PubMed Central

    Maynard, Robert L.

    2012-01-01

    Rapid developments in nano-technology are likely to confer significant benefits on mankind. But, as with perhaps all new technologies, these benefits are likely to be accompanied by risks, perhaps by new risks. Nano-toxicology is developing in parallel with nano-technology and seeks to define the hazards and risks associated with nano-materials: only when risks have been identified can they be controlled. This article discusses the reasons for concern about the potential effects on health of exposure to nano-materials and relates these to the evidence of the effects on health of the ambient aerosol. A number of hypotheses are proposed and the dangers of adopting unsubstantiated hypotheses are stressed. Nano-toxicology presents many challenges and will need substantial financial support if it is to develop at a rate sufficient to cope with developments in nano-technology. PMID:22662021

  8. Frontiers in Neuromorphics Workshop

    DTIC Science & Technology

    2017-04-14

    Policy: Nanotechnology-Inspired Grand Challenge for Future Computing. Our goal is to bring together scientific disciplines and... Dr. Helen Li – Pittsburgh University. Title: Embrace the BRAIN Century: Challenges in Nanotechnology Enabled Neuromorphic Computing Design

  9. Analysis of the frontier technology of agricultural IoT and its predication research

    NASA Astrophysics Data System (ADS)

    Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Shen, Chen; Kong, Fantao

    2017-09-01

    Agricultural IoT (Internet of Things) is developing rapidly. Nanotechnology, biotechnology and optoelectronic technology have been successfully integrated into agricultural sensor technology. Big data, cloud computing and artificial intelligence technology have also been successfully used in the IoT. This paper carries out research on the integration of agricultural sensor technology with nanotechnology, biotechnology and optoelectronic technology, and on the application of big data, cloud computing and artificial intelligence technology in the agricultural IoT. The advantages and development of integrating nanotechnology, biotechnology and optoelectronic technology with agricultural sensor technology are discussed. The application of big data, cloud computing and artificial intelligence technology in the IoT and their development trends are analysed.

  10. Atherosclerosis and Nanotechnology: Diagnostic and Therapeutic Applications

    PubMed Central

    Kratz, Jeremy D.; Chaddha, Ashish; Bhattacharjee, Somnath

    2016-01-01

    Over the past several decades, tremendous advances have been made in the understanding, diagnosis, and treatment of coronary artery disease (CAD). However, with shifting demographics and evolving risk factors, we now face new challenges that must be met in order to further advance our management of patients with CAD. In parallel with advances in our mechanistic appreciation of CAD and atherosclerosis, nanotechnology approaches have greatly expanded, offering the potential for significant improvements in our diagnostic and therapeutic management of CAD. To realize this potential we must recognize new frontiers, including the knowledge gaps between our understanding of atherosclerosis and the translation of targeted molecular tools. This review highlights nanotechnology applications for imaging and therapeutic advancements in CAD. PMID:26809711

  11. Atherosclerosis and Nanotechnology: Diagnostic and Therapeutic Applications.

    PubMed

    Kratz, Jeremy D; Chaddha, Ashish; Bhattacharjee, Somnath; Goonewardena, Sascha N

    2016-02-01

    Over the past several decades, tremendous advances have been made in the understanding, diagnosis, and treatment of coronary artery disease (CAD). However, with shifting demographics and evolving risk factors, we now face new challenges that must be met in order to further advance our management of patients with CAD. In parallel with advances in our mechanistic appreciation of CAD and atherosclerosis, nanotechnology approaches have greatly expanded, offering the potential for significant improvements in our diagnostic and therapeutic management of CAD. To realize this potential we must recognize new frontiers, including the knowledge gaps between our understanding of atherosclerosis and the translation of targeted molecular tools. This review highlights nanotechnology applications for imaging and therapeutic advancements in CAD.

  12. Nano-Electronics and Bio-Electronics

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A viewgraph presentation on nano-electronics and bio-electronics. Topics discussed include: the NASA Ames nanotechnology program, potential carbon nanotube (CNT) applications, CNT synthesis, computational nanotechnology, and protein nanotubes.

  13. Richard Feynman and computation

    NASA Astrophysics Data System (ADS)

    Hey, Tony

    1999-04-01

    The enormous contribution of Richard Feynman to modern physics is well known, both to teaching through his famous Feynman Lectures on Physics, and to research with his Feynman diagram approach to quantum field theory and his path integral formulation of quantum mechanics. Less well known, perhaps, is his long-standing interest in the physics of computation, which is the subject of this paper. Feynman lectured on computation at Caltech for most of the last decade of his life, first with John Hopfield and Carver Mead, and then with Gerry Sussman. The story of how these lectures came to be written up as the Feynman Lectures on Computation is briefly recounted. Feynman also discussed the fundamentals of computation with other legendary figures of the computer science and physics community such as Ed Fredkin, Rolf Landauer, Carver Mead, Marvin Minsky and John Wheeler. He was also instrumental in stimulating developments in both nanotechnology and quantum computing. During the 1980s Feynman revisited long-standing interests both in parallel computing with Geoffrey Fox and Danny Hillis, and in reversible computation and quantum computing with Charles Bennett, Norman Margolus, Tom Toffoli and Wojciech Zurek. This paper records Feynman's links with the computational community and includes some reminiscences about his involvement with the fundamentals of computing.

  14. Machine Phase Fullerene Nanotechnology: 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    NASA has used exotic materials for spacecraft and experimental aircraft to good effect for many decades. In spite of many advances, transportation to space still costs about $10,000 per pound. Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. These studies and others suggest enormous potential for aerospace systems. Unfortunately, methods to realize diamondoid nanotechnology are at best highly speculative. Recent computational efforts at NASA Ames Research Center, and computation and experiment elsewhere, suggest that a nanotechnology of machine-phase functionalized fullerenes may be synthetically relatively accessible and of great aerospace interest. Machine-phase materials are (hypothetical) materials consisting entirely or in large part of microscopic machines. In a sense, most living matter fits this definition. To begin investigating fullerene nanotechnology, we used molecular dynamics to study the properties of carbon-nanotube-based gears and gear/shaft configurations. Experiments on C60 and quantum calculations suggest that benzyne may react with carbon nanotubes to form gear teeth. Han has computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Results suggest that rotation can be converted to rotating or linear motion, and linear motion may be converted into rotation. Preliminary results suggest that these mechanical systems can be cooled by a helium atmosphere. Furthermore, Deepak has successfully simulated powering fullerene gears with helical electric fields generated by a laser, once a positive and a negative charge have been added to form a dipole. Even with mechanical motion, cooling, and power, creating a viable nanotechnology requires support structures, computer control, a system architecture, a variety of components, and some approach to manufacture. Additional information is contained within the original extended abstract.

  15. Engineering Near-Field Transport of Energy using Nanostructured Materials

    DTIC Science & Technology

    2015-12-12

    The transport of heat at the nanometer scale is becoming increasingly important for a wide range of nanotechnology applications. Recent computational studies on near-field radiative heat transfer (NFRHT) suggest that radiative energy transport between suitably chosen...

  16. Blue Horizons IV: Deterrence in the Age of Surprise

    DTIC Science & Technology

    2014-01-01

    ...technologies. It posits that the result of rapid advances in nanotechnology, biotechnology, directed energy, space, computers and communications... nanotechnology, and biotechnology. Each of these poses the risk of catastrophic attack to the United States, its citizens, and its infrastructure. Deterring...

  17. Nanotechnology and dentistry

    PubMed Central

    Ozak, Sule Tugba; Ozkan, Pelin

    2013-01-01

    Nanotechnology deals with the physical, chemical, and biological properties of structures and their components at nanoscale dimensions. Nanotechnology is based on the concept of creating functional structures by controlling atoms and molecules on a one-by-one basis. The use of this technology will allow many developments in the health sciences as well as in materials science, biotechnology, electronic and computer technology, aviation, and space exploration. With developments in materials science and biotechnology, nanotechnology is especially anticipated to provide advances in dentistry and innovations in oral health-related diagnostic and therapeutic methods. PMID:23408486

  18. Molecular Nanotechnology and Space Settlement

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Atomically precise manipulation of matter is becoming increasingly common in laboratories around the world. As this control moves into aerospace systems, huge improvements in computers, high-strength materials, and other systems are expected. For example, studies suggest that it may be possible to build: 10^18 MIPS computers, 10^15 bytes/sq cm write-once memory, $153-412/kg-of-cargo single-stage-to-orbit launch vehicles, and active materials which sense their environment and react intelligently. All of NASA's enterprises should benefit significantly from molecular nanotechnology. Although the time may be measured in decades and the precise path to molecular nanotechnology is unclear, all paths (diamondoid, fullerene, self-assembly, biomolecular, etc.) will require very substantial computation. This talk will discuss fullerene nanotechnology and early work on hypothetical active materials consisting of large numbers of identical machines. The speaker will also discuss aerospace applications, particularly missions leading to widespread space settlement (e.g., small near-Earth-object retrieval). It is interesting to note that control of the tiny (individual atoms and molecules) may lead to colonization of the huge: first the solar system, then the galaxy.

  19. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Kutler, Paul (Technical Monitor)

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are required for critical applications: high-performance, low-power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generations of micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with the goal of developing an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of the nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and research objectives of the IT Modeling and Simulation Group, including the applications of nanoelectronics-based devices relevant to NASA missions.

  20. Nanotechnology: A Vast Field for the Creative Mind

    NASA Technical Reports Server (NTRS)

    Benavides, Jeannette

    2003-01-01

    Nanotechnology is a rapidly developing field worldwide. Nanotechnology is the development of smart systems for many different applications by building from the molecular level up. Current research, sponsored by The National Nanotechnology Alliance in the US, will be described. Future needs for manpower in different disciplines will be discussed. Nanotechnology is a field of research that could allow developing countries to establish a technological infrastructure. The nature of nanotechnology requires professionals in many areas, such as engineers, chemists, physicists, mathematicians, computer scientists, materials scientists, etc. One of the materials that provides unique properties for nanotechnology is carbon nanotubes. At Goddard we have developed a process to produce nanotubes at lower cost and without metal catalysts, which will be of great importance for the development of new materials for space applications and others outside NASA. Nanotechnology in general is a very broad and exciting field that will provide the technologies of tomorrow, including biomedical applications for the betterment of mankind. There is room in this area for many researchers all over the world. The key is collaboration, nationally and internationally.

  1. Nanomedicine: The Medicine of Tomorrow

    NASA Astrophysics Data System (ADS)

    Logothetidis, S.

    Nowadays nanotechnology has become a technological field with great potential since it can be applied in almost every aspect of modern life. One of the sectors where nanotechnology is expected to play a vital role is the field of medical science. The interaction of nanotechnology with medicine gave birth to a completely new scientific field called nanomedicine. Nanomedicine is a field that aims to use nanotechnology tools and principles in order to improve human health in every possible way. Nanotechnology provides monitoring tools and technology platforms that can be used for detection, diagnostics, bioanalysis, and imaging. New nanoscale drug-delivery systems are constantly being designed with different morphological and chemical characteristics and unique specificity against tumours, offering a less harmful alternative to chemo- and radiotherapies. Furthermore, nanotechnology has led to great breakthroughs in the field of tissue engineering, making the replacement of damaged tissues and organs a much more feasible procedure. The thorough analysis of bio and non-bio interactions achieved by versatile nanotools is essential for the design and development of high-performance medical implants. The continuous revolution in nanotechnology will result in the fabrication of nanostructures with properties and functionalities that can benefit a patient's physiology faster and more effectively than conventional medical procedures and protocols. The number of nanoscale therapeutic products is rapidly growing, since more and more nanomedical designs are reaching the global market. However, the nanotoxic impact that these designs can have on human health is an area that still requires more investigation. The development of specific guidance documents at a European level for the safety evaluation of nanotechnology products in medicine is strongly recommended, and the need for further research in nanotoxicology is identified. Ethical and moral concerns also need to be addressed in parallel with the new developments.

  2. Computers, Nanotechnology and Mind

    NASA Astrophysics Data System (ADS)

    Ekdahl, Bertil

    2008-10-01

    In 1958, two years after the Dartmouth conference where the term artificial intelligence was coined, Herbert Simon and Allen Newell asserted the existence of "machines that think, that learn and create." They further prophesied that the machines' capacity would increase and be on par with the human mind. Now, 50 years later, computers perform many more tasks than one could imagine in the 1950s, but virtually no computer can do more than could the first digital computer, developed by John von Neumann in the 1940s. Computers still follow algorithms; they do not create them. However, the development of nanotechnology seems to have given rise to new hopes. With nanotechnology, two things are supposed to happen. Firstly, due to the small scale, it will be possible to construct huge computer memories, which are supposed to be the precondition for building an artificial brain; secondly, nanotechnology will make it possible to scan the brain, which in turn will make reverse engineering possible: the mind will be decoded by studying the brain. The consequence of such a belief is that the brain is no more than a calculator, i.e., all that the mind can do is in principle the result of arithmetical operations. Computers are equivalent to formal systems, which in turn were an answer to an idea by Hilbert that proofs should contain ideal statements to which operations cannot be applied in a contentual way. The advocates of artificial intelligence would place content in a machine that is developed not only to be free of content but also cannot contain content. In this paper I argue that the hope for artificial intelligence is in vain.

  3. NASA Applications of Molecular Nanotechnology

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Han, Jie; Jaffe, Richard; Levit, Creon; Merkle, Ralph; Srivastava, Deepak

    1998-01-01

    Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.

  4. EDITORIAL: Quantum phenomena in Nanotechnology Quantum phenomena in Nanotechnology

    NASA Astrophysics Data System (ADS)

    Loss, Daniel

    2009-10-01

    Twenty years ago the Institute of Physics launched the journal Nanotechnology from its publishing house based in the home town of Paul Dirac, a legendary figure in the development of quantum mechanics at the turn of the last century. At the beginning of the 20th century, the adoption of quantum mechanical descriptions of events transformed the existing deterministic world view. But in many ways it also revolutionised the progress of research itself. For the first time since the 17th century, when Francis Bacon established inductive reasoning as the means of advancing science from fact to axiom to law, theory was progressing ahead of experiments instead of providing explanations for observations that had already been made. Dirac's postulation of antimatter through purely theoretical investigation before its observation is the archetypal example of theory leading the way for experiment. The progress of nanotechnology and the development of tools and techniques that enabled the investigation of systems at the nanoscale brought with them many fascinating observations of phenomena that could only be explained through quantum mechanics, first theoretically deduced decades previously. At the nanoscale, quantum confinement effects dominate the electrical and optical properties of systems. They also open new opportunities for manipulating the response of systems. For example, a better understanding of these systems has enabled the rapid development of quantum dots with precisely determined properties, which can be exploited in a range of applications from medical imaging and photovoltaic solar cells to quantum computation, a radically new information technology currently being developed in many labs worldwide. As the first ever academic journal in nanotechnology, Nanotechnology has been the forum for papers detailing progress of the science through extremely exciting times. In the early years of the journal, the investigation of electron spin led to the formulation of quantum cellular automata, a new paradigm for computing as reported by Craig S Lent and colleagues (Lent C S, Tougaw P D, Porod W and Bernstein G H 1993 Nanotechnology 4 49-57). The increasingly sophisticated manipulation of spin has been an enduring theme of research throughout this decade, providing a number of interesting developments such as spin pumping (Cota E, Aguado R, Creffield C E and Platero G 2003 Nanotechnology 14 152-6). The idea of spin qubits, proposed by D Loss and D P DiVincenzo (1998 Phys. Rev. A 57 120), developed into an established option for advancing research in quantum computing and continues to drive fruitful avenues of research, such as the integrated superconductive magnetic nanosensor recently devised by researchers in Italy (Granata C, Esposito E, Vettoliere A, Petti L and Russo M 2008 Nanotechnology 19 275501). The device has a spin sensitivity of 100 Bohr magnetons per √Hz and has large potential for applications in the measurement of nanoscale magnetization and quantum computing. The advance of science and technology at the nanoscale is inextricably enmeshed with advances in our understanding of quantum effects. As Nanotechnology celebrates its 20th volume, research into fundamental quantum phenomena continues to be an active field of research, providing fertile pasture for developing nanotechnologies.

  5. Bio-inspired nano tools for neuroscience.

    PubMed

    Das, Suradip; Carnicer-Lombarte, Alejandro; Fawcett, James W; Bora, Utpal

    2016-07-01

    Research and treatment in the nervous system is challenged by many physiological barriers posing a major hurdle for neurologists. The CNS is protected by a formidable blood brain barrier (BBB) which limits surgical, therapeutic and diagnostic interventions. The hostile environment created by reactive astrocytes in the CNS, along with the limited regeneration capacity of the PNS, makes functional recovery after tissue damage difficult and inefficient. Nanomaterials have the unique ability to interface with neural tissue at the nanoscale and are capable of influencing the function of a single neuron. The ability of nanoparticles to cross the BBB through surface modifications has been exploited in various neuro-imaging techniques and for targeted drug delivery. The tunable topography of nanofibers provides accurate spatio-temporal guidance to regenerating axons. This review is an attempt to comprehend the progress in understanding the obstacles posed by the complex physiology of the nervous system and the innovations in design and fabrication of advanced nanomaterials drawing inspiration from natural phenomena. We also discuss the development of nanomaterials for use in neuro-diagnostics and neuro-therapy, and the fabrication of advanced nano-devices for use in opto-electronic and ultrasensitive electrophysiological applications. The energy-efficient and parallel computing ability of the human brain has inspired the design of advanced nanotechnology-based computational systems. However, extensive use of nanomaterials in neuroscience also raises serious toxicity issues as well as ethical concerns regarding nano implants in the brain. In conclusion we summarize these challenges and provide an insight into the huge potential of nanotechnology platforms in neuroscience.

  6. Prospects and applications of nanobiotechnology: a medical perspective.

    PubMed

    Fakruddin, Md; Hossain, Zakir; Afroz, Hafsa

    2012-07-20

    Nanobiotechnology is the application of nanotechnology in biological fields. Nanotechnology is a multidisciplinary field that currently recruits the approaches, technologies and facilities available in conventional as well as advanced avenues of engineering, physics, chemistry and biology. A comprehensive review of the literature on the principles, limitations, challenges, improvements and applications of nanotechnology in medical science was performed. Nanobiotechnology has a multitude of potentials for advancing medical science and thereby improving health care practices around the world. Many novel nanoparticles and nanodevices are expected to be used, with an enormous positive impact on human health. While true clinical applications of nanotechnology are still practically nonexistent, a significant number of promising medical projects are in an advanced experimental stage. Implementing nanotechnology in medicine and physiology means that mechanisms and devices are so technically designed that they can interact with sub-cellular (i.e. molecular) levels of the body with a high degree of specificity. Thus maximal therapeutic efficacy can be achieved with minimal side effects by means of targeted cell- or tissue-specific clinical intervention. More detailed research and careful clinical trials are still required to introduce diverse components of nanobiotechnology into routine clinical applications successfully. Ethical and moral concerns also need to be addressed in parallel with the new developments.

  7. Prospects and applications of nanobiotechnology: a medical perspective

    PubMed Central

    2012-01-01

    Background Nanobiotechnology is the application of nanotechnology in biological fields. Nanotechnology is a multidisciplinary field that currently recruits the approaches, technologies and facilities available in conventional as well as advanced avenues of engineering, physics, chemistry and biology. Method A comprehensive review of the literature on the principles, limitations, challenges, improvements and applications of nanotechnology in medical science was performed. Results Nanobiotechnology has a multitude of potentials for advancing medical science and thereby improving health care practices around the world. Many novel nanoparticles and nanodevices are expected to be used, with an enormous positive impact on human health. While true clinical applications of nanotechnology are still practically nonexistent, a significant number of promising medical projects are in an advanced experimental stage. Implementing nanotechnology in medicine and physiology means that mechanisms and devices are so technically designed that they can interact with sub-cellular (i.e. molecular) levels of the body with a high degree of specificity. Thus maximal therapeutic efficacy can be achieved with minimal side effects by means of targeted cell- or tissue-specific clinical intervention. Conclusion More detailed research and careful clinical trials are still required to introduce diverse components of nanobiotechnology into routine clinical applications successfully. Ethical and moral concerns also need to be addressed in parallel with the new developments. PMID:22817658

  8. Molecular Nanotechnology and Designs of Future

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Reviewing the status of current approaches and future projections, as already published in scientific journals and books, the talk will summarize the direction in which computational and experimental molecular nanotechnologies are progressing. Examples of nanotechnological approaches to the concepts of design and simulation of atomically precise materials in a variety of interdisciplinary areas will be presented. The concepts of hypothetical molecular machines and assemblers, as explained in Drexler's and Merkle's already published work, and Han et al.'s WWW-distributed molecular gears will be explained.

  9. An Undergraduate Course in Modeling and Simulation of Multiphysics Systems

    ERIC Educational Resources Information Center

    Ortiz-Rodriguez, Estanislao; Vazquez-Arenas, Jorge; Ricardez-Sandoval, Luis A.

    2010-01-01

    This article gives an overview of a course on modeling and simulation offered in the Nanotechnology Engineering undergraduate program at the University of Waterloo. The motivation for having this course in the undergraduate nanotechnology curriculum, the course structure, and its learning objectives are discussed. Further, one of the computational laboratories…

  10. Designing the Very Small: Micro and Nanotechnology. Resources in Technology.

    ERIC Educational Resources Information Center

    Jacobs, James A.

    1996-01-01

    This learning activity is designed to increase knowledge of materials science, engineering, and technology design, and the manufacture of the very small devices used in watches, computers, and calculators. It looks at possible innovations to come from micro- and nanotechnology. Includes a student quiz. (Author/JOW)

  11. HEALTH AND ENVIRONMENTAL IMPACT OF NANOTECHNOLOGY: TOXICOLOGICAL ASSESSMENT OF MANUFACTURED NANOPARTICLES

    EPA Science Inventory

    The microtechnology of the second half of the 20th century produced a technical revolution that led to the production of computers and the Internet and has taken us into a new emerging era of nanotechnology. This issue of Toxicological Sciences includes two articles, "Pulmonar...

  12. Design-Oriented Introduction of Nanotechnology into the Electrical and Computer Engineering Curriculum

    ERIC Educational Resources Information Center

    Kim, Donghwi; Kamoua, Ridha; Pacelli, Andrea

    2006-01-01

    Nanoelectronics has the potential, and is indeed expected, to revolutionize information technology by the use of the impressive characteristics of nano-devices such as carbon nanotube transistors, molecular diodes and transistors, etc. A great effort is being put into creating an introductory course in nano-technology. However, practically all…

  13. Nanotechnology Review: Molecular Electronics to Molecular Motors

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Reviewing the status of current approaches and future projections, as already published in scientific journals and books, the talk will summarize the direction in which computational and experimental nanotechnologies are progressing. Examples of nanotechnological approaches to the concepts of design and simulation of carbon nanotube based molecular electronic and mechanical devices will be presented. The concepts of nanotube based gears and motors will be discussed. This is a non-technical review talk covering long-term, precompetitive basic research in already published material that has been presented to many US scientific meeting audiences.

  14. Artificial intelligence in nanotechnology.

    PubMed

    Sacha, G M; Varona, P

    2013-11-15

    During the last decade there has been increasing use of artificial intelligence tools in nanotechnology research. In this paper we review some of these efforts in the context of interpreting scanning probe microscopy, the study of biological nanosystems, the classification of material properties at the nanoscale, theoretical approaches and simulations in nanoscience, and generally in the design of nanodevices. Current trends and future perspectives in the development of nanocomputing hardware that can boost artificial-intelligence-based applications are also discussed. Convergence between artificial intelligence and nanotechnology can shape the path for many technological developments in the field of information sciences that will rely on new computer architectures and data representations, hybrid technologies that use biological entities and nanotechnological devices, bioengineering, neuroscience and a large variety of related disciplines.

  15. Artificial intelligence in nanotechnology

    NASA Astrophysics Data System (ADS)

    Sacha, G. M.; Varona, P.

    2013-11-01

    During the last decade there has been increasing use of artificial intelligence tools in nanotechnology research. In this paper we review some of these efforts in the context of interpreting scanning probe microscopy, the study of biological nanosystems, the classification of material properties at the nanoscale, theoretical approaches and simulations in nanoscience, and generally in the design of nanodevices. Current trends and future perspectives in the development of nanocomputing hardware that can boost artificial-intelligence-based applications are also discussed. Convergence between artificial intelligence and nanotechnology can shape the path for many technological developments in the field of information sciences that will rely on new computer architectures and data representations, hybrid technologies that use biological entities and nanotechnological devices, bioengineering, neuroscience and a large variety of related disciplines.

  16. The effect of nanotechnology on education

    NASA Astrophysics Data System (ADS)

    Viriyavejakul, Chantana

    2008-04-01

    The research objectives were 1) to study the situation and readiness of Thai education for the integration of nanotechnology and 2) to propose plans, strategies and guidelines for educational reform to adapt nanotechnology to the system. Data were collected by four methods: 1) documentary study, 2) observation, 3) informal interviews, and 4) group discussion. The findings revealed the following. 1. William Wresch's theory (1997) was used in this research to study the situation and readiness of Thai education for the integration of nanotechnology: 1) getting connected to nanotechnology through search engine websites, libraries, magazines, books, and discussions with experts; 2) curriculum integration: nanotechnology should be integrated into many branches of engineering, such as industrial, computer, civil, chemical, electrical, and mechanical engineering; 3) resources for educators: nanotechnology knowledge should be spread in academic circles through publications and Internet websites; 4) training and professional resources for teachers: teachers should be trained by experts in nanotechnology and researchers from the National Nanotechnology Center. This will help trainees gain correct knowledge, comprehension, and awareness in order to apply them to their professions and businesses in the future. 2. As for the plans, strategies, and guidelines for educational reform to adapt nanotechnology to the present system, I analyzed the world nanotechnology situation that might have an effect on Thai society. The study is based on the National Plan to Develop Nanotechnology, whose goal is to make nanotechnology a national strategy within 10 years (2004-2013) and have it integrated into the Thai system. There are four parts to this plan: 1) nanomaterials, 2) nanoelectronics, 3) nanobiotechnology, and 4) human resources development. Human resource development should work with present technology and use the country's resources to produce nanotechnology products such as 1) handicrafts, decorations, and gifts, 2) agricultural products and food, 3) beverages, both alcoholic and non-alcoholic, and 4) textiles.

  17. Tweeting nano: how public discourses about nanotechnology develop in social media environments

    NASA Astrophysics Data System (ADS)

    Runge, Kristin K.; Yeo, Sara K.; Cacciatore, Michael; Scheufele, Dietram A.; Brossard, Dominique; Xenos, Michael; Anderson, Ashley; Choi, Doo-hun; Kim, Jiyoun; Li, Nan; Liang, Xuan; Stubbings, Maria; Su, Leona Yi-Fan

    2013-01-01

    The growing popularity of social media as a channel for distributing and debating scientific information raises questions about the types of discourse that surround emerging technologies, such as nanotechnology, in online environments, as well as the different forms of information that audiences encounter when they use these online tools of information sharing. This study maps the landscape surrounding social media traffic about nanotechnology. Specifically, we use computational linguistic software to analyze a census of all English-language nanotechnology-related tweets expressing opinions posted on Twitter between September 1, 2010 and August 31, 2011. Results show that 55 % of tweets expressed certainty and 45 % expressed uncertainty. Twenty-seven percent of tweets expressed optimistic outlooks, 32 % expressed neutral outlooks and 41 % expressed pessimistic outlooks. Tweets were mapped by U.S. state, and our data show that tweets are more likely to originate from states with a federally funded National Nanotechnology Initiative center or network. The trend toward certainty in opinion coupled with the distinct geographic origins of much of the social media traffic on Twitter for nanotechnology-related opinion has significant implications for understanding how key online influencers are debating and positioning the issue of nanotechnology for lay and policy audiences.

  18. Nanotech: propensity in foods and bioactives.

    PubMed

    Kuan, Chiu-Yin; Yee-Fung, Wai; Yuen, Kah-Hay; Liong, Min-Tze

    2012-01-01

    Nanotechnology is seeing higher propensity in various industries, including food and bioactives. New nanomaterials are constantly being developed from both natural biodegradable polymers of plant and animal origins, such as polysaccharides and derivatives, peptides and proteins, and lipids and fats, and biocompatible synthetic biopolyester polymers such as polylactic acid (PLA), polyhydroxyalkanoates (PHA), and polycaprolactone (PCL). Applications in food industries include molecular synthesis of new functional food compounds, innovative food packaging, food safety, and security monitoring. The relevance of bioactives includes targeted delivery systems with improved bioavailability using nanostructure vehicles such as association colloids, lipid-based nanoencapsulators, nanoemulsions, biopolymeric nanoparticles, nanolaminates, and nanofibers. The extensive use of nanotechnology has led to the need for parallel safety assessment and regulations to protect public health and prevent adverse effects on the environment. This review covers the use of biopolymers in the production of nanomaterials and the propensity of nanotechnology in food and bioactives. The exposure routes of nanoparticles, safety challenges, and measures undertaken to ensure that optimal benefits outweigh detriments are also discussed.

  19. Technical Risk Prevention in the Workplace

    NASA Astrophysics Data System (ADS)

    Ricaud, Myriam

    Nanotechnology has become a major economic and technological issue today. Indeed, nanometric dimensions give matter novel physical, chemical, and biological properties with a host of applications. Nanotechnology is thus having an increasing impact on new and emerging industries, such as computing, electronics, aerospace, and alternative energy supplies, but also on traditional forms of industry such as the automobile, aeronautics, food, pharmaceutical, and cosmetics sectors. In this way, nanotechnology has led to both gradual and radical innovation in many areas of industry: biochips, drug delivery, self-cleaning and antipollution concretes, antibacterial clothing, antiscratch paints, and the list continues [1, 2, 3].

  20. Thin-Film Nanocapacitor and Its Characterization

    ERIC Educational Resources Information Center

    Hunter, David N.; Pickering, Shawn L.; Jia, Dongdong

    2007-01-01

    An undergraduate thin-film nanotechnology laboratory was designed. Nanocapacitors were fabricated on silicon substrates by sputter deposition. A mask was designed to form the shape of the capacitor and its electrodes. Thin metal layers of Au with an 80 nm thickness were deposited and used as two effectively infinite parallel plates for a capacitor.…
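
    For a device like this, a back-of-the-envelope check against the ideal parallel-plate formula is instructive. The plate area, dielectric, and gap below are hypothetical (the abstract quotes only the 80 nm Au electrode thickness), so this is an order-of-magnitude sketch only:

    \[
      C = \frac{\varepsilon_r \varepsilon_0 A}{d}
        = \frac{3.9 \times 8.854 \times 10^{-12}\,\mathrm{F\,m^{-1}} \times 10^{-6}\,\mathrm{m^{2}}}{10^{-7}\,\mathrm{m}}
        \approx 0.35\,\mathrm{nF},
    \]

    assuming an SiO2 dielectric (relative permittivity about 3.9), a 1 mm² plate area, and a 100 nm gap.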

  1. A Nanotechnology Enhancement to Moore’s Law

    DTIC Science & Technology

    2013-01-01

    suggested that quantum mechanics may be playing a role in consciousness, if a quantum mechanical model of mind and consciousness was developed, this would...necessary enhancement by an increasingly maturing nanotechnology and facing the inevitable quantum-mechanical atomic and nuclei limits. Since we cannot...important. (ii) Quantum computing: The other types of transistor material are rapidly developed in laboratories worldwide, for example, Spintronics

  2. Nanotechnology for missiles

    NASA Astrophysics Data System (ADS)

    Ruffin, Paul B.

    2004-07-01

    Nanotechnology development is progressing very rapidly. Several billions of dollars have been invested in nanoscience research since 2000. Pioneering nanotechnology research efforts have been primarily conducted at research institutions and centers. This paper identifies developments in nanoscience and technology that could provide significant advances in missile systems applications. Nanotechnology offers opportunities in the areas of advanced materials for coatings, including thin-film optical coatings; light-weight, strong armor and missile structural components; embedded computing and "smart" structures; nano-particles for explosives, warheads, turbine engine systems, and propellants to enhance missile propulsion; nano-sensors for autonomous chemical detection; and nano-tube arrays for fuel storage and power generation. The Aviation and Missile Research, Development, and Engineering Center (AMRDEC) is actively collaborating with academia, industry, and other Government agencies to accelerate the development and transition of nanotechnology to favorably impact Army Transformation. Currently, we are identifying near-term applications and quantifying requirements for nanotechnology use in Army missile systems, as well as monitoring and screening research and developmental efforts in the industrial community for military applications. Combining MicroElectroMechanical Systems (MEMS) and nanotechnology is the next step toward providing technical solutions for the Army's transformation. Several research and development projects that are currently underway at AMRDEC in this technology area are discussed. A top-level roadmap of MEMS/nanotechnology development projects for aviation and missile applications is presented at the end.

  3. Integration of nanoscale memristor synapses in neuromorphic computing architectures

    NASA Astrophysics Data System (ADS)

    Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis

    2013-09-01

    Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nanotechnologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the offerings of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue that this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
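
    As a concrete illustration of the state-dependent conductance that makes memristors synapse-like, the sketch below integrates the canonical linear ion-drift model of Strukov et al. (Nature, 2008). This is a generic textbook model, not the hybrid memristor-CMOS circuit proposed in the paper, and all parameter values are illustrative:

    # Minimal linear ion-drift memristor simulation (Strukov-style model);
    # parameters are illustrative, not taken from the paper above.
    import numpy as np

    R_ON, R_OFF = 100.0, 16e3   # limiting resistances (ohms)
    D = 10e-9                   # device thickness (m)
    MU_V = 1e-14                # dopant mobility (m^2 s^-1 V^-1)

    dt = 1e-5
    t = np.arange(0.0, 0.1, dt)
    v = 1.0 * np.sin(2 * np.pi * 50 * t)   # sinusoidal drive voltage

    w = 0.5 * D                 # internal state: doped-region width
    current = np.empty_like(t)
    for k, vk in enumerate(v):
        x = w / D
        R = R_ON * x + R_OFF * (1 - x)     # state-dependent resistance
        i = vk / R
        current[k] = i
        w += MU_V * (R_ON / D) * i * dt    # linear ion-drift state update
        w = min(max(w, 0.0), D)            # hard bounds on the state

    # Plotting `current` against `v` would show the pinched hysteresis
    # loop that is the memristor's signature synapse-like behaviour.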

  4. Comparative analysis of the labelling of nanotechnologies across four stakeholder groups

    NASA Astrophysics Data System (ADS)

    Capon, Adam; Gillespie, James; Rolfe, Margaret; Smith, Wayne

    2015-08-01

    Societies are constantly challenged to develop policies around the introduction of new technologies, which by their very nature contain great uncertainty. This uncertainty gives prominence to varying viewpoints which are value laden and have the ability to drastically shift policy. The issue of nanotechnologies is a prime example. The labelling of products that contain new technologies has been one policy tool governments have used to address concerns around uncertainty. Our study develops evidence regarding opinions on the labelling of products made with nanotechnologies. We undertook a computer-assisted telephone interviewing (CATI) survey of the Australian public and of those involved in nanotechnologies from the academic, business and government sectors, using a standardised questionnaire. Analysis was undertaken using descriptive and logistic regression techniques. We explored reluctance to purchase as a result of labelling products which contained manufactured nanomaterials, both generally and across five broad products (food, cosmetics/sunscreens, medicines, pesticides, tennis racquets/computers) which represent the broad categories of products regulated by differing government agencies in Australia. We examined the relationship between reluctance to purchase and risk perception, trust, and familiarity. We found that, irrespective of stakeholder, most supported the labelling of products which contained manufactured nanomaterials. Perception of risk was the main driver of reluctance to purchase, while trust and familiarity were likely to have an indirect effect through risk perception. Food is likely to be the product most affected by labelling. Risk perception surrounding nanotechnologies and label 'framing' on the product are key issues to be addressed in the implementation of a labelling scheme.
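
    The logistic-regression analysis described in the abstract can be pictured with a minimal sketch like the one below; the variable names, coefficients, and simulated responses are entirely hypothetical stand-ins for the survey data:

    # Hypothetical illustration of regressing a binary reluctance-to-purchase
    # outcome on risk perception, trust, and familiarity scores.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    risk = rng.normal(size=n)        # perceived risk (standardised score)
    trust = rng.normal(size=n)       # trust in regulators
    familiar = rng.normal(size=n)    # familiarity with nanotechnology

    # Simulate the reported pattern: risk drives reluctance most strongly.
    logit_p = 1.2 * risk - 0.3 * trust - 0.2 * familiar
    reluctant = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([risk, trust, familiar]))
    model = sm.Logit(reluctant, X).fit(disp=False)
    print(model.summary(xname=["const", "risk", "trust", "familiarity"]))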

  5. Nanotechnology: Opportunities and Challenges

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya

    2003-01-01

    Nanotechnology seeks to exploit novel physical, chemical, biological, mechanical, electrical, and other properties, which arise primarily due to the nanoscale nature of certain materials. A key example is carbon nanotubes (CNTs), which exhibit unique electrical and extraordinary mechanical properties and offer remarkable potential for revolutionary applications in electronic devices, computing and data storage technology, sensors, composites, nanoelectromechanical systems (NEMS), and as tips in scanning probe microscopy (SPM) for imaging and nanolithography. Thus CNT synthesis, characterization, and applications touch upon all disciplines of science and engineering. This presentation will provide an overview and progress report on this and other major research candidates in nanotechnology and address opportunities and challenges ahead.

  6. Worldwide Emerging Environmental Issues Affecting the U.S. Military. June 2007 Report

    DTIC Science & Technology

    2007-06-01

    Idle Nighttime Computers Cited as Energy Wasters; Impact on Business Savings and Reducing CO2 (http://www.csrwire.com/News/8951.html); 8.8 Nanotechnology Safety Issues; 8.8.1 French Group to Study Nanotech Environmental Health and Safety: The Observatory for Micro and NanoTechnologies (Minatec, France), a part of the National Center for Scientific

  7. Science and technology convergence: with emphasis for nanotechnology-inspired convergence

    NASA Astrophysics Data System (ADS)

    Bainbridge, William S.; Roco, Mihail C.

    2016-07-01

    Convergence offers a new universe of discovery, innovation, and application opportunities through specific theories, principles, and methods to be implemented in research, education, production, and other societal activities. Using a holistic approach with shared goals, convergence seeks to transcend existing human limitations to achieve improved conditions for work, learning, aging, physical, and cognitive wellness. This paper outlines ten key theories that offer complementary perspectives on this complex dynamic. Principles and methods are proposed to facilitate and enhance science and technology convergence. Several convergence success stories in the first part of the 21st century—including nanotechnology and other emerging technologies—are discussed in parallel with case studies focused on the future. The formulation of relevant theories, principles, and methods aims at establishing the convergence science.

  8. Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research.

    PubMed

    de la Iglesia, D; Cachau, R E; García-Remesal, M; Maojo, V

    2013-11-27

    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

  9. Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research

    NASA Astrophysics Data System (ADS)

    de la Iglesia, D.; Cachau, R. E.; García-Remesal, M.; Maojo, V.

    2013-01-01

    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

  10. On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.

    PubMed

    Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N

    2016-04-01

    An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.
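
    As a minimal illustration of the supervised/unsupervised distinction the review draws on, the sketch below applies both settings to synthetic feature vectors; it is a toy example, not the authors' pipeline:

    # Toy contrast of supervised vs unsupervised classification on
    # synthetic "measurements" standing in for nanotech-derived features.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans
    from sklearn.metrics import accuracy_score

    X, y = make_blobs(n_samples=300, centers=2, n_features=4, random_state=0)

    # Supervised: labels (confirmed diagnoses) are available for training.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("supervised accuracy:", accuracy_score(y_te, clf.predict(X_te)))

    # Unsupervised: class structure must be discovered without labels.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)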

  11. DNA nanotechnology: understanding and optimisation through simulation

    NASA Astrophysics Data System (ADS)

    Ouldridge, Thomas E.

    2015-01-01

    DNA nanotechnology promises to provide controllable self-assembly on the nanoscale, allowing for the design of static structures, dynamic machines and computational architectures. In this article, I review the state of the art of DNA nanotechnology, highlighting the need for a more detailed understanding of the key processes, both in terms of theoretical modelling and experimental characterisation. I then consider coarse-grained models of DNA, mesoscale descriptions that have the potential to provide great insight into the operation of DNA nanotechnology if they are well designed. In particular, I discuss a number of nanotechnological systems that have been studied with oxDNA, a recently developed coarse-grained model, highlighting the subtle interplay of kinetic, thermodynamic and mechanical factors that can determine behaviour. Finally, new results highlighting the importance of mechanical tension in the operation of a two-footed walker are presented, demonstrating that recovery from an unintended 'overstepped' configuration can be accelerated by three to four orders of magnitude by application of a moderate tension to the walker's track. More generally, the walker illustrates the possibility of biasing strand-displacement processes to affect the overall rate.
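
    The scale of such a speed-up is consistent with a simple force-biased (Bell-type) rate estimate; the numbers below are illustrative, not taken from the oxDNA study:

    \[
      \frac{k(F)}{k(0)} = \exp\!\left(\frac{F\,\Delta x}{k_{B} T}\right)
      \approx \exp\!\left(\frac{15\,\mathrm{pN} \times 2\,\mathrm{nm}}{4.1\,\mathrm{zJ}}\right)
      \approx 1.5 \times 10^{3},
    \]

    so a piconewton-scale tension acting over nanometre displacements is already enough to change rates by roughly three orders of magnitude at room temperature.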

  12. A Boost for the Emerging Field of RNA Nanotechnology

    PubMed Central

    2011-01-01

    This Nano Focus article highlights recent advances in RNA nanotechnology as presented at the First International Conference of RNA Nanotechnology and Therapeutics, which took place in Cleveland, OH, USA (October 23–25, 2010) (http://www.eng.uc.edu/nanomedicine/RNA2010/), chaired by Peixuan Guo and co-chaired by David Rueda and Scott Tenenbaum. The conference was the first of its kind to bring together more than 30 invited speakers in the frontier of RNA nanotechnology from France, Sweden, South Korea, China, and throughout the United States to discuss RNA nanotechnology and its applications. It provided a platform for researchers from academia, government, and the pharmaceutical industry to share existing knowledge, vision, technology, and challenges in the field and promoted collaborations among researchers interested in advancing this emerging scientific discipline. The meeting covered a range of topics, including biophysical and single-molecule approaches for characterization of RNA nanostructures; structure studies on RNA nanoparticles by chemical or biochemical approaches, computation, prediction, and modeling of RNA nanoparticle structures; methods for the assembly of RNA nanoparticles; chemistry for RNA synthesis, conjugation, and labeling; and application of RNA nanoparticles in therapeutics. A special invited talk on the well-established principles of DNA nanotechnology was arranged to provide models for RNA nanotechnology. An Administrator from National Institutes of Health (NIH) National Cancer Institute (NCI) Alliance for Nanotechnology in Cancer discussed the current nanocancer research directions and future funding opportunities at NCI. As indicated by the feedback received from the invited speakers and the meeting participants, this meeting was extremely successful, exciting, and informative, covering many groundbreaking findings, pioneering ideas, and novel discoveries. PMID:21604810

  13. Technical structure of the global nanoscience and nanotechnology literature

    NASA Astrophysics Data System (ADS)

    Kostoff, Ronald N.; Koytcheff, Raymond G.; Lau, Clifford G. Y.

    2007-10-01

    Text mining was used to extract technical intelligence from the open source global nanotechnology and nanoscience research literature. An extensive nanotechnology/nanoscience-focused query was applied to the Science Citation Index/Social Science Citation Index (SCI/SSCI) databases. The nanotechnology/nanoscience research literature technical structure (taxonomy) was obtained using computational linguistics/document clustering and factor analysis. The infrastructure (prolific authors, key journals/institutions/countries, most cited authors/journals/documents) for each of the clusters generated by the document clustering algorithm was obtained using bibliometrics. Another novel addition was the use of phrase auto-correlation maps to show technical thrust areas based on phrase co-occurrence in Abstracts, and the use of phrase-phrase cross-correlation maps to show technical thrust areas based on phrase relations due to the sharing of common co-occurring phrases. The ~400 most cited nanotechnology papers since 1991 were grouped, and their characteristics generated. Whereas the main analysis provided technical thrusts of all nanotechnology papers retrieved, analysis of the most cited papers allowed their characteristics to be displayed. Finally, most cited papers from selected time periods were extracted, along with all publications from those time periods, and the institutions and countries were compared based on their representation in the most cited documents list relative to their representation in the most publications list.
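
    The co-occurrence logic behind such auto- and cross-correlation maps can be sketched in a few lines; the phrases and "abstracts" below are invented, and the actual study used dedicated computational-linguistics tooling:

    # Toy sketch: auto-correlation relates phrases that co-occur in the
    # same abstract; cross-correlation compares phrases by the similarity
    # of their whole co-occurrence profiles.
    import numpy as np

    abstracts = [
        {"carbon nanotube", "field emission", "electronics"},
        {"carbon nanotube", "composite", "mechanical properties"},
        {"quantum dot", "photoluminescence", "electronics"},
    ]
    phrases = sorted(set().union(*abstracts))
    index = {p: i for i, p in enumerate(phrases)}

    # Auto-correlation: raw phrase-phrase co-occurrence counts.
    C = np.zeros((len(phrases), len(phrases)))
    for doc in abstracts:
        for p in doc:
            for q in doc:
                C[index[p], index[q]] += 1

    # Cross-correlation: cosine similarity of co-occurrence profiles,
    # which can relate phrases that never co-occur directly.
    profiles = C / np.linalg.norm(C, axis=1, keepdims=True)
    X = profiles @ profiles.T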

  14. Overview of Micro- and Nano-Technology Tools for Stem Cell Applications: Micropatterned and Microelectronic Devices

    PubMed Central

    Cagnin, Stefano; Cimetta, Elisa; Guiducci, Carlotta; Martini, Paolo; Lanfranchi, Gerolamo

    2012-01-01

    In the past few decades the scientific community has been recognizing the paramount role of the cell microenvironment in determining cell behavior. In parallel, the study of human stem cells for their potential therapeutic applications has been progressing constantly. The use of advanced technologies, enabling one to mimic the in vivo stem cell microenvironment and to study stem cell physiology and physio-pathology in settings that better predict human cell biology, is becoming the object of much research effort. In this review we will detail the most relevant and recent advances in the field of biosensors and micro- and nano-technologies in general, highlighting advantages and disadvantages. Particular attention will be devoted to those applications employing stem cells as a sensing element. PMID:23202240

  15. Overview of micro- and nano-technology tools for stem cell applications: micropatterned and microelectronic devices.

    PubMed

    Cagnin, Stefano; Cimetta, Elisa; Guiducci, Carlotta; Martini, Paolo; Lanfranchi, Gerolamo

    2012-11-19

    In the past few decades the scientific community has been recognizing the paramount role of the cell microenvironment in determining cell behavior. In parallel, the study of human stem cells for their potential therapeutic applications has been progressing constantly. The use of advanced technologies, enabling one to mimic the in vivo stem cell microenvironment and to study stem cell physiology and physio-pathology in settings that better predict human cell biology, is becoming the object of much research effort. In this review we will detail the most relevant and recent advances in the field of biosensors and micro- and nano-technologies in general, highlighting advantages and disadvantages. Particular attention will be devoted to those applications employing stem cells as a sensing element.

  16. Collectively loading an application in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
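
    A hedged sketch of the described protocol, using mpi4py as a stand-in for the parallel computer control system; the abstract specifies the steps, not this implementation, and the file name is hypothetical. Run under e.g. mpiexec -n 8 python collective_load.py:

    # Collective application load: one leader reads, everyone receives.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Steps 1-2: this communicator's ranks are the identified subset of
    # compute nodes for the job; rank 0 is selected as job leader.
    LEADER = 0

    # Step 3: only the leader touches the (shared, possibly slow) file
    # system to retrieve the application image.
    if rank == LEADER:
        with open("application.bin", "rb") as f:   # hypothetical path
            image = f.read()
    else:
        image = None

    # Step 4: the leader broadcasts the image to the whole subset,
    # avoiding N simultaneous reads of the same file.
    image = comm.bcast(image, root=LEADER)
    # Each node would now write `image` locally and execute it.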

  17. Evolving telemedicine/ehealth technology.

    PubMed

    Ferrante, Frank E

    2005-06-01

    This paper describes emerging technologies to support a rapidly changing and expanding scope of telemedicine/telehealth applications. Of primary interest here are wireless systems, emerging broadband, nanotechnology, intelligent agent applications, and grid computing. More specifically, the paper describes the changes underway in wireless designs aimed at enhancing security; some of the current work involving the development of nanotechnology applications and research into the use of intelligent agents/artificial intelligence technology to establish what are termed "Knowbots"; and a sampling of the use of Web services, such as grid computing capabilities, to support medical applications. In addition, the expansion of these technologies and the need for cost containment to sustain future health care for an increasingly mobile and aging population are discussed.

  18. PREFACE: Nanoscale Devices and System Integration Conference (NDSI-2004)

    NASA Astrophysics Data System (ADS)

    Khizroev, Sakhrat; Litvinov, Dmitri

    2004-10-01

    The inaugural conference on Nanoscale Devices and System Integration (NDSI-2004) was held in Miami, Florida, 15-19 February, 2004. The focus of the conference was 'real-life' devices and systems that have recently emerged as a result of various nanotechnology initiatives in chemistry and chemical engineering, physics, electrical engineering, materials science and engineering, biomedical engineering, computer science, robotics, and environmental science. The conference had a single session all-invited speaker format, with the presenters making the 'Who's Who in Nanotechnology' list. Contributed work was showcased at a special poster session. The conference, sponsored by the Institute of Electrical and Electronics Engineers (IEEE) and the US Air Force, and endorsed by the Materials Research Society (MRS), drew more than 160 participants from fourteen countries. To strengthen the connection between fundamental research and 'real-life' applications, the conference featured a large number of presenters from both academia and industry. Among the participating companies were NEC, IBM, Toshiba, AMD, Samsung, Seagate, and Veeco. Nanotechnology has triggered a new wave of research collaborations between researchers from academia and industry with a broad range of specializations. Such a global approach has resulted in a number of breakthrough accomplishments. One of the main goals of this conference was to identify these accomplishments and put the novel technology initiatives and the emerging research teams on the map. Among the key nanotechnology applications demonstrated at NDSI-2004 were carbon-nanotube-based transistors, quantum computing systems, nanophotonic devices, single-molecule electronic devices and biological magnetic sources. Due to the unprecedented success of the conference, the organizing committee of NDSI has unanimously chosen to turn NDSI into an annual international nanotechnology event. The next NDSI is scheduled for 4-6 April, 2005, in Houston, Texas. Details can be found on the conference web site at http://www.nanointernational.org. This special issue of Nanotechnology features selected papers from NDSI-2004.

  19. Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research

    PubMed Central

    de la Iglesia, D; Cachau, R E; García-Remesal, M; Maojo, V

    2014-01-01

    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts. PMID:24932210

  20. The National Nanotechnology Initiative: Research and Development Leading to a Revolution in Technology and Industry. Supplement to the President’s FY 2010 Budget

    DTIC Science & Technology

    2009-05-01

    both space and terrestrial (defense, automotive, computer, etc.) uses. NSF, EPA: These agencies funded the second Center for Environmental...performance of nanomaterials in commercial products within widely different industries, including aerospace, automotive, chemical, food, forest products...each of its nanotechnology R&D programs in order to foster a rapid transition from R&D to agency/industry dual-use. Industry partners have included

  1. NanoDesign: Concepts and Software for a Nanotechnology Based on Functionalized Fullerenes

    NASA Technical Reports Server (NTRS)

    Globus, Al; Jaffe, Richard; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Eric Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. While attractive, diamondoid nanotechnology is not physically accessible with straightforward extensions of current laboratory techniques. We propose a nanotechnology based on functionalized fullerenes and investigate carbon nanotube based gears with teeth added via a benzyne reaction known to occur with C60. The gears are single-walled carbon nanotubes with appended benzyne groups for teeth. Fullerenes are in widespread laboratory use and can be functionalized in many ways. Companion papers computationally demonstrate the properties of these gears (they appear to work) and the accessibility of the benzyne/nanotube reaction. This paper describes the molecular design techniques and rationale as well as the software that implements these design techniques. The software is a set of persistent C++ objects controlled by Tcl command scripts. The C++/Tcl interface is automatically generated by a software system called tcl_c++, developed by the author and described here. The objects keep track of different portions of the molecular machinery so that different simulation techniques and boundary conditions can be applied as appropriate. This capability has been required to demonstrate (computationally) our gear's feasibility. A new distributed software architecture featuring a WWW universal client, CORBA distributed objects, and agent software is under consideration. The software architecture is intended to eventually enable a widely dispersed group to develop complex simulated molecular machines.

  2. EDITORIAL: Synaptic electronics Synaptic electronics

    NASA Astrophysics Data System (ADS)

    Demming, Anna; Gimzewski, James K.; Vuillaume, Dominique

    2013-09-01

    Conventional computers excel in logic and accurate scientific calculations but make hard work of open ended problems that human brains handle easily. Even von Neumann—the mathematician and polymath who first developed the programming architecture that forms the basis of today's computers—was already looking to the brain for future developments before his death in 1957 [1]. Neuromorphic computing uses approaches that better mimic the working of the human brain. Recent developments in nanotechnology are now providing structures with very accommodating properties for neuromorphic approaches. This special issue, with guest editors James K Gimzewski and Dominique Vuillaume, is devoted to research at the serendipitous interface between the two disciplines. 'Synaptic electronics' looks at artificial devices with connections that demonstrate behaviour similar to synapses in the nervous system, allowing a new and more powerful approach to computing. Synapses and connecting neurons respond differently to incident signals depending on the history of signals previously experienced, ultimately leading to short-term and long-term memory behaviour. The basic characteristics of a synapse can be replicated with around ten simple transistors. However, with the human brain having around 10^11 neurons and 10^15 synapses, artificial neurons and synapses built from basic transistors are unlikely to accommodate the scalability required. The discovery of nanoscale elements that function as 'memristors' has provided a key tool for the implementation of synaptic connections [2]. Leon Chua first developed the concept of the memristor, 'the missing circuit element', in 1971 [3]. In this special issue he presents a tutorial describing how memristor research has fed into our understanding of synaptic behaviour and how memristors can be applied in information processing [4]. He also describes 'the new principle of local activity, which uncovers a minuscule life-enabling "Goldilocks zone", dubbed the edge of chaos, where complex phenomena, including creativity and intelligence, may emerge'. Also in this issue R Stanley Williams and colleagues report results from simulations that demonstrate the potential for using Mott memristors as building blocks for scalable neuristor-based integrated circuits without transistors [5]. The scalability of neural chip designs is also tackled in the design reported by Narayan Srinivasa and colleagues in the US [6]. Meanwhile Carsten Timm and Massimiliano Di Ventra describe simulations of a molecular transistor in which electrons strongly coupled to a vibrational mode lead to a Franck-Condon (FC) blockade that mimics the spiking action potentials in synaptic memory behaviour [7]. The 'atomic switches' used to demonstrate synaptic behaviour by a collaboration of researchers in California and Japan also come under further scrutiny in this issue. James K Gimzewski and colleagues consider the difference between the behaviour of an atomic switch in isolation and in a network [8]. As the authors point out, 'The work presented represents steps in a unified approach of experimentation and theory of complex systems to make atomic switch networks a uniquely scalable platform for neuromorphic computing'. Researchers in Germany [9] and Sweden [10] also report on theoretical approaches to modelling networks of memristive elements and complementary resistive switches for synaptic devices.
As Vincent Derycke and colleagues in France point out, 'Actual experimental demonstrations of neural network type circuits based on non-conventional/non-CMOS memory devices and displaying function learning capabilities remain very scarce'. They describe how their work using carbon nanotubes provides a rare demonstration of actual function learning with synapses based on nanoscale building blocks [11]. However, this is far from the only experimental work reported in this issue, others include: short-term memory of TiO2-based electrochemical capacitors [12]; a neuromorphic circuit composed of a nanoscale 1-kbit resistive random-access memory (RRAM) cross-point array of synapses and complementary metal-oxide-semiconductor (CMOS) neuron circuits [13]; a WO3-x-based nanoionics device from Masakazu Aono's group with a wide scale of reprogrammable memorization functions [14]; a new spike-timing dependent plasticity scheme based on a MOS transistor as a selector and a RRAM as a variable resistance device [15]; a new hybrid memristor-CMOS neuromorphic circuit [16]; and a photo-assisted atomic switch [17]. Synaptic electronics evidently has many emerging facets, and Duygu Kuzum, Shimeng Yu, and H-S Philip Wong in the US provide a review of the field, including the materials, devices and applications [18]. In embracing the expertise acquired over thousands of years of evolution, biomimetics and bio-inspired design is a common, smart approach to technological innovation. Yet in successfully mimicking the physiological mechanisms of the human mind synaptic electronics research has a potential impact that is arguably unprecedented. That the quirks and eccentricities recently unearthed in the behaviour of nanomaterials should lend themselves so accommodatingly to emulating synaptic functions promises some very exciting developments in the field, as the articles in this special issue emphasize. References [1] von Neumann J (ed) 2012 The Computer and the Brain 3rd edn (Yale: Yale University Press) [2] Strukov D B, Snider G S, Stewart D R and Williams R S 2008 The missing memristor found Nature 453 80-3 [3] Chua L O 1971 Memristor—the missing circuit element IEEE Trans. 
Circuit Theory 18 507-19 [4] Chua L O 2013 Memristor, Hodgkin-Huxley, and Edge of Chaos Nanotechnology 24 383001 [5] Pickett M D and Williams R S 2013 Phase transitions enable computational universality in neuristor-based cellular automata Nanotechnology 24 384002 [6] Cruz-Albrecht J M, Derosier T and Srinivasa N 2013 Scalable neural chip with synaptic electronics using CMOS integrated memristors Nanotechnology 24 384011 [7] Timm C and Di Ventra M 2013 Molecular neuron based on the Franck-Condon blockade Nanotechnology 24 384001 [8] Sillin H O, Aguilera R, Shieh H-H, Avizienis A V, Aono M, Stieg A Z and Gimzewski J K 2013 A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing Nanotechnology 24 384004 [9] Linn E, Menzel S, Ferch S and Waser R 2013 Compact modeling of CRS devices based on ECM cells for memory, logic and neuromorphic applications Nanotechnology 24 384008 [10] Konkoli Z and Wendin G 2013 A generic simulator for large networks of memristive elements Nanotechnology 24 384007 [11] Gacem K, Retrouvey J-M, Chabi D, Filoramo A, Zhao W, Klein J-O and Derycke V 2013 Neuromorphic function learning with carbon nanotube-based synapses Nanotechnology 24 384013 [12] Lim H, Kim I, Kim J-S, Hwang C S and Jeong D S 2013 Short-term memory of TiO2-based electrochemical capacitors: empirical analysis with adoption of a sliding threshold Nanotechnology 24 384005 [13] Park S, Noh J, Choo M-L, Sheri A M, Chang M, Kim Y-B, Kim C J, Jeon M, Lee B-G, Lee B H and Hwang H 2013 Nanoscale RRAM-based synaptic electronics: toward a neuromorphic computing device Nanotechnology 24 384009 [14] Yang R, Terabe K, Yao Y, Tsuruoka T, Hasegawa T, Gimzewski J K and Aono M 2013 Synaptic plasticity and memory functions achieved in WO3-x-based nanoionics device by using principle of atomic switch operation Nanotechnology 24 384002 [15] Ambrogio S, Balatti S, Nardi F, Facchinetti S and Ielmini D 2013 Spike-timing dependent plasticity in a transistor-selected resistive switching memory Nanotechnology 24 384012 [16] Indiveria G, Linares-Barranco B, Legenstein R, Deligeorgis G and Prodromakise T 2013 Integration of nanoscale memristor synapses in neuromorphic computing architectures Nanotechnology 24 384010 [17] Hino T, Hasegawa T, Tanaka H, Tsuruoka T, Terabe K, Ogawa T and Aono M 2013 Volatile and nonvolatile selective switching of a photo-assited initialized atomic switch Nanotechnology 24 384006 [18] Kuzum D, Yu S and Wong H-S P 2013 Synaptic electronics: materials, devices and applications Nanotechnology 24 382001
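
    Because Chua's definition is central to this editorial, it may help to restate it. The block below gives the textbook relation only; it is background, not material from the special issue.

```latex
% Chua's memristor in its standard charge-controlled form (textbook
% definition, not specific to any article above). Flux and charge are
% time integrals of voltage and current, and the memristance M links them:
\varphi(t) = \int_{-\infty}^{t} v(\tau)\,d\tau, \qquad
q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau, \qquad
M(q) \equiv \frac{d\varphi}{dq},
% which yields the quasi-Ohmic, history-dependent law
v(t) = M\big(q(t)\big)\, i(t).
```

    Because M depends on the integral of past current, the device 'remembers' its signal history, which is precisely the synapse-like property exploited throughout the issue.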

  3. The Real-World Connection.

    ERIC Educational Resources Information Center

    Estes, Charles R.

    1994-01-01

    Discusses theoretical versus applied science and the use of the scientific method for analysis of social issues. Topics addressed include the use of simulation and modeling; the growth in computer power, including nanotechnology; distributed computing; self-evolving programs; spiritual matters; human engineering, i.e., molding individuals;…

  4. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debenedictis, Erik P.

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. Keeping power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests an 80,000x improvement in cost per operation for the (arguably) general-purpose function of emulating neurons in deep learning.
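
    To make the tile abstraction concrete, here is a minimal Python sketch of a grid of tiles that each pair local memory with local logic and exchange data only with neighbours. The Tile class, its fields, and the update rule are invented for illustration; this is not the PIMS design itself.

```python
from dataclasses import dataclass

# Hypothetical illustration of a processor-in-memory "tile" grid: each tile
# holds its own data (memory) and applies its own operation (logic), and
# communication is restricted to nearest neighbours. All names are invented
# for this sketch; they do not come from the PIMS paper.
@dataclass
class Tile:
    value: float = 0.0            # local memory/storage
    def step(self, neighbours):   # local logic: relax towards neighbour mean
        if neighbours:
            self.value = 0.5 * self.value + 0.5 * sum(neighbours) / len(neighbours)

def run(grid, steps):
    n, m = len(grid), len(grid[0])
    for _ in range(steps):
        snapshot = [[t.value for t in row] for row in grid]  # communications phase
        for i in range(n):
            for j in range(m):
                nbrs = [snapshot[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < n and 0 <= b < m]
                grid[i][j].step(nbrs)  # processing happens where the data lives

grid = [[Tile(float(i == j)) for j in range(4)] for i in range(4)]
run(grid, 10)
```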

  5. Nanotechnology in respiratory medicine.

    PubMed

    Omlor, Albert Joachim; Nguyen, Juliane; Bals, Robert; Dinh, Quoc Thai

    2015-05-29

    Like two sides of the same coin, nanotechnology can be both boon and bane for respiratory medicine. Nanomaterials open new ways in the diagnostics and treatment of lung diseases. Nanoparticle-based drug delivery systems can help against diseases such as lung cancer, tuberculosis, and pulmonary fibrosis. Moreover, nanoparticles can be loaded with DNA and act as vectors for gene therapy in diseases like cystic fibrosis. Even lung diagnostics with computed tomography (CT) or magnetic resonance imaging (MRI) benefits from new nanoparticle-based contrast agents. However, the risks of nanotechnology also have to be taken into consideration, as engineered nanomaterials resemble natural fine dusts and fibers, which are known to be harmful to the respiratory system in many cases. Recent studies have shown that nanoparticles in the respiratory tract can influence the immune system, create oxidative stress and even cause genotoxicity. Another important aspect of assessing the safety of nanotechnology-based products is the absorption of nanoparticles. It has been demonstrated that the amount of pulmonary nanoparticle uptake depends not only on physical and chemical nanoparticle characteristics but also on the health status of the organism. The huge diversity of nanotechnology could revolutionize medicine but makes safety assessment a challenging task.

  6. Versatile RNA tetra-U helix linking motif as a toolkit for nucleic acid nanotechnology.

    PubMed

    Bui, My N; Brittany Johnson, M; Viard, Mathias; Satterwhite, Emily; Martins, Angelica N; Li, Zhihai; Marriott, Ian; Afonin, Kirill A; Khisamutdinov, Emil F

    2017-04-01

    RNA nanotechnology employs synthetically modified ribonucleic acid (RNA) to engineer highly stable nanostructures in one, two, and three dimensions for medical applications. Despite the tremendous advantages of RNA nanotechnology, unmodified RNA itself is fragile and prone to enzymatic degradation. In contrast to traditionally modified RNA strands (e.g. 2'-fluorine, 2'-amine, 2'-methyl), we studied an RNA/DNA hybrid approach, utilizing a computer-assisted RNA tetra-uracil (tetra-U) motif as a toolkit to address questions related to the assembly efficiency, versatility, stability, and production costs of hybrid RNA/DNA nanoparticles. The tetra-U RNA motif was implemented to construct four functional triangles using RNA, DNA and RNA/DNA mixtures, resulting in fine-tunable enzymatic and thermodynamic stabilities, immunostimulatory activity and RNAi capability. Moreover, the tetra-U toolkit has great potential in the fabrication of rectangular, pentagonal, and hexagonal NPs, demonstrating the power and simplicity of the RNA/DNA approach for the RNA nanotechnology and nanomedicine community.

  7. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate a billion times present speeds with the expenditure of only one watt of electrical power. NASA has long-term needs for ultra-small semiconductor devices in critical applications: high-performance, low-power, compact computers for intelligent autonomous vehicles and petaflop computing technology are some key examples. To advance the design, development, and production of future-generation micro- and nano-devices, an IT Modeling and Simulation Group has been started at NASA Ames with the goal of developing an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of the nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and research objectives of the IT Modeling and Simulation Group, including applications of nanoelectronics-based devices relevant to NASA missions.

  8. Nanotechnology at NASA Ames

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Meyyappan, Meyya; Yan, Jerry (Technical Monitor)

    2000-01-01

    Advanced miniaturization, a key thrust area to enable new science and exploration missions, provides ultrasmall sensors, power sources, communication, navigation, and propulsion systems with very low mass, volume, and power consumption. Revolutions in electronics and computing will allow reconfigurable, autonomous, 'thinking' spacecraft. Nanotechnology presents a whole new spectrum of opportunities to build device components and systems for entirely new space architectures: (1) networks of ultrasmall probes on planetary surfaces; (2) micro-rovers that drive, hop, fly, and burrow; and (3) collections of microspacecraft making a variety of measurements.

  9. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
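
    The pattern in this record maps naturally onto a collective gather. Below is a minimal mpi4py sketch, assuming an MPI environment; the job-leader rank and the status fields are invented for illustration and are not the patented mechanism itself.

```python
from mpi4py import MPI  # requires an MPI runtime; run with e.g. mpiexec -n 4

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank reports its own (hypothetical) exit status for its share of the job.
my_status = {"rank": rank, "exit_code": 0, "ops_done": 1000 + rank}

# Gather every status at a designated "job leader" (rank 0 here), which then
# aggregates them into a single summary for the whole subset of nodes.
statuses = comm.gather(my_status, root=0)
if rank == 0:
    aggregated = {
        "worst_exit_code": max(s["exit_code"] for s in statuses),
        "total_ops": sum(s["ops_done"] for s in statuses),
    }
    print(aggregated)
```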

  10. Design of a fault-tolerant reversible control unit in molecular quantum-dot cellular automata

    NASA Astrophysics Data System (ADS)

    Bahadori, Golnaz; Houshmand, Monireh; Zomorodi-Moghadam, Mariam

    Quantum-dot cellular automata (QCA) is a promising emerging nanotechnology that has been attracting considerable attention due to its small feature size, ultra-low power consumption, and high clock frequency. There have therefore been many efforts to design computational units based on this technology. Despite these advantages, QCA-based implementations are susceptible to a high error rate. On the other hand, reversible computing leads to zero bit erasures and no energy dissipation. Because reversible computation does not lose information, faults can be detected with high probability. In this paper, we first propose a fault-tolerant control unit using reversible gates that improves on a previous design. The proposed design is then synthesized to the QCA technology and simulated with the QCADesigner tool. Evaluation results confirm the performance of the proposed approach.

  11. A model for the development of university curricula in nanoelectronics

    NASA Astrophysics Data System (ADS)

    Bruun, E.; Nielsen, I.

    2010-12-01

    Nanotechnology is having an increasing impact on university curricula in electrical engineering and in physics. Major influencers affecting developments in university programmes related to nanoelectronics are discussed and a model for university programme development is described. The model takes into account that nanotechnology affects not only physics but also electrical engineering and computer engineering because of the advent of new nanoelectronics devices. The model suggests that curriculum development tends to follow one of three major tracks: physics; electrical engineering; computer engineering. Examples of European curricula following this framework are identified and described. These examples may serve as sources of inspiration for future developments and the model presented may provide guidelines for a systematic selection of topics in the university programmes.

  12. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    NASA Astrophysics Data System (ADS)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  13. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip.

    PubMed

    Shulaker, Max M; Hills, Gage; Park, Rebecca S; Howe, Roger T; Saraswat, Krishna; Wong, H-S Philip; Mitra, Subhasish

    2017-07-05

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors-promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage-fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce 'highly processed' information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  14. Learning in Transformational Computer Games: Exploring Design Principles for a Nanotechnology Game

    ERIC Educational Resources Information Center

    Masek, Martin; Murcia, Karen; Morrison, Jason; Newhouse, Paul; Hackling, Mark

    2012-01-01

    Transformational games are digital computer and video applications purposefully designed to create engaging and immersive learning environments for delivering specified learning goals, outcomes and experiences. The virtual world of a transformational game becomes the social environment within which learning occurs as an outcome of the complex…

  15. Modeling and computational simulation and the potential of virtual and augmented reality associated to the teaching of nanoscience and nanotechnology

    NASA Astrophysics Data System (ADS)

    Ribeiro, Allan; Santos, Helen

    With the advent of new information and communication technologies (ICTs), communicative interaction is changing the way people act and work, including activities related to education. Among the possibilities provided by the advancement of computational resources, virtual reality (VR) and augmented reality (AR) stand out as new forms of information visualization in computer applications. While VR allows user interaction with a virtual environment generated entirely by computer, in AR virtual images are inserted into the real environment; both create new opportunities to support teaching and learning in formal and informal contexts. Such technologies are able to express representations of reality or of the imagination, such as systems at the nanoscale and of low dimensionality, making it imperative to explore, across the most diverse areas of knowledge, the potential offered by ICTs and emerging technologies. In this sense, this work presents virtual and augmented reality computer applications developed with the use of modeling and computational simulation for topics related to nanoscience and nanotechnology, articulated with innovative pedagogical practices.

  16. Excited State Dynamics in Carbon Nanotubes

    NASA Astrophysics Data System (ADS)

    Miyamoto, Yoshiyuki

    2004-03-01

    The carbon nanotube, one of the most promising materials for nanotechnology, still suffers from imperfections in its crystalline structure that keep nanotube performance below the theoretical limit. From first-principles simulations, I propose efficient methods to overcome these imperfections. I show that photo-induced ion dynamics can (1) identify defects in nanotubes, (2) stabilize defective nanotubes, and (3) purify contaminated nanotubes. All of these methods can serve as alternatives to conventional heat treatments and will be important techniques for realizing nanotube devices. Ion dynamics under electronic excitation has been simulated using the computer code FPSEID (First-Principles Simulation tool for Electron-Ion Dynamics) [1], which combines the time-dependent density functional method [2] with classical molecular dynamics. This very challenging approach is time-consuming but can automatically treat the level alternation of differently occupied states and can capture the initiation of non-adiabatic decay of excitation. The time-dependent Kohn-Sham equation has been solved using the Suzuki-Trotter split-operator method [3], which is numerically stable and well suited to plane-wave basis sets, non-local pseudopotentials, and parallel computing. This work has been done in collaboration with Prof. Angel Rubio, Prof. David Tomanek, Dr. Savas Berber and Mina Yoon. Most of the present calculations were performed on the SX5 vector-parallel system at the NEC Fuchu plant and on the Earth Simulator in Yokohama, Japan. [1] O. Sugino and Y. Miyamoto, Phys. Rev. B 59, 2579 (1999); ibid. B 66, 089901(E) (2001). [2] E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984). [3] M. Suzuki, J. Phys. Soc. Jpn. 61, L3015 (1992).
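
    Independently of FPSEID itself, the Suzuki-Trotter split-operator idea can be shown on a toy 1D Schrödinger equation; the NumPy sketch below alternates half-steps in the potential with a full kinetic step in Fourier space (illustrative units and potential, not the paper's TDDFT code).

```python
import numpy as np

# Toy Suzuki-Trotter split-operator propagation of a 1D wavepacket
# (hbar = m = 1). This illustrates the numerical scheme cited in the
# abstract, not the FPSEID TDDFT code itself.
n, L, dt = 512, 40.0, 0.01
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

V = 0.5 * x**2                                        # harmonic potential
psi = np.exp(-(x + 5.0) ** 2) * np.exp(1j * 2.0 * x)  # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))

half_V = np.exp(-0.5j * V * dt)       # half-step in the potential
kinetic = np.exp(-0.5j * k**2 * dt)   # full kinetic step in k-space
for _ in range(1000):
    psi = half_V * np.fft.ifft(kinetic * np.fft.fft(half_V * psi))

print("norm preserved:", np.sum(np.abs(psi) ** 2) * (L / n))
```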

  17. Nanotechnology meets 3D in vitro models: tissue engineered tumors and cancer therapies.

    PubMed

    da Rocha, E L; Porto, L M; Rambo, C R

    2014-01-01

    Advances in nanotechnology are giving medicine a new dimension. Multifunctional nanomaterials with diagnostic and treatment modalities integrated in one nanoparticle, or in cooperative nanosystems, are prompting new insights into cancer treatment and diagnosis. The recent convergence between tissue engineering and cancer research is gradually moving towards the development of 3D disease models that more closely resemble the in vivo characteristics of tumors. However, current nanomaterial-based therapies are evaluated mainly in 2D cell cultures or in complex in vivo models. The development of new platforms to evaluate nano-based therapies, in parallel with possible toxic effects, will allow the design of nanomaterials for biomedical applications prior to in vivo studies. Therefore, this review focuses on how 3D in vitro models can be applied to study tumor biology and nanotoxicology and to evaluate nanomaterial-based therapies.

  18. Computational Nanotechnology of Materials, Electronics and Machines: Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak

    2001-01-01

    This report presents the goals and research of the Integrated Product Team (IPT) on Devices and Nanotechnology. NASA's needs for this technology are discussed and then related to the research focus of the team. The two areas of focus for technique development are: 1) large scale classical molecular dynamics on a shared memory architecture machine; and 2) quantum molecular dynamics methodology. The areas of focus for research are: 1) nanomechanics/materials; 2) carbon based electronics; 3) BxCyNz composite nanotubes and junctions; 4) nano mechano-electronics; and 5) nano mechano-chemistry.

  19. The role of networks and artificial intelligence in nanotechnology design and analysis.

    PubMed

    Hudson, D L; Cohen, M E

    2004-05-01

    Techniques with their origins in artificial intelligence have had a great impact on many areas of biomedicine. Expert-based systems have been used to develop computer-assisted decision aids. Neural networks have been used extensively in disease classification and more recently in many bioinformatics applications including genomics and drug design. Network theory in general has proved useful in modeling all aspects of biomedicine from healthcare organizational structure to biochemical pathways. These methods show promise in applications involving nanotechnology both in the design phase and in interpretation of system functioning.

  20. Computational Nanotechnology of Nanotubes, Composites, and Electronics

    NASA Technical Reports Server (NTRS)

    Srivastava, D.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This viewgraph presentation addresses carbon nanotubes, their mechanical and thermal properties, and their structure, as well as possible miniature devices which may be assembled in the future from carbon nanotubes.

  1. Prosthetics / Limb Loss

    MedlinePlus

    ... the use of leading-edge technology such as robotics, tissue engineering, and nanotechnology to design and build ... advantage of the latest advances in computer and robotics technology. ...

  2. The Potential of Micro Electro Mechanical Systems and Nanotechnology for the U.S. Army

    DTIC Science & Technology

    2001-05-01

    Quantitative Structure Activity Relationship (QSAR) model. The QSAR model calculates the proper composition of the polymer-carbon black matrix... For example, the BEI Gyrochip Model QRS11 from Systron Donner Inertial Division has a startup time of less than 1 second and a Mean Time Between Failure (MTBF)... modeling from many equations per atom to a few lines of code. This approach is amenable to parallel processing. Nevertheless, their programs require

  3. Parallel computational fluid dynamics '91; Conference Proceedings, Stuttgart, Germany, Jun. 10-12, 1991

    NASA Technical Reports Server (NTRS)

    Reinsch, K. G. (Editor); Schmidt, W. (Editor); Ecer, A. (Editor); Haeuser, Jochem (Editor); Periaux, J. (Editor)

    1992-01-01

    This volume collects papers from a conference on parallel computational fluid dynamics. Topics discussed in these papers include: parallel implicit and explicit solvers for compressible flow, parallel computational techniques for the Euler and Navier-Stokes equations, grid generation techniques for parallel computers, and aerodynamic simulation on massively parallel systems.

  4. Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment

    NASA Technical Reports Server (NTRS)

    Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A genetic algorithm procedure is developed and implemented for fitting parameters of many-body interatomic force-field functions, for simulating nanotechnology atomistic applications, using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited to Si cluster energies and forces than even the published S-W potential.
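
    As a minimal illustration of the evolutionary fitting loop described here (not the JavaGenes code itself, which was written in Java), the Python sketch below evolves parameters of a toy pair potential to match reference energies; the potential form, fitness function, and GA settings are all simplified assumptions.

```python
import random

# Toy stand-in for force-field fitting: evolve (a, b) in E(r) = a/r**12 - b/r**6
# to reproduce "reference" energies, mimicking the workflow of scoring candidate
# parameter sets against target data. All numbers are invented.
REF = [(r / 10.0, 4.0 * ((1.0 / (r / 10.0)) ** 12 - (1.0 / (r / 10.0)) ** 6))
       for r in range(9, 21)]

def fitness(p):
    a, b = p
    return -sum((a / r**12 - b / r**6 - e) ** 2 for r, e in REF)  # higher is better

def evolve(pop_size=50, gens=200):
    pop = [(random.uniform(0, 8), random.uniform(0, 8)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            (a1, b1), (a2, b2) = random.sample(parents, 2)
            children.append(((a1 + a2) / 2 + random.gauss(0, 0.1),  # crossover
                             (b1 + b2) / 2 + random.gauss(0, 0.1))) # + mutation
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # should approach (4.0, 4.0) for this toy target
```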

  5. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is a development of parallel computing, distributed computing and grid computing, and its growth has brought parallel computing into people's lives. This paper first expounds the concept of cloud computing and introduces several traditional parallel programming models. It then analyzes the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
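
    For readers unfamiliar with the model difference discussed above, here is a small hedged sketch of the MapReduce style, written with Python's multiprocessing standing in for a cluster framework; it is illustrative only and not from the paper.

```python
from multiprocessing import Pool
from functools import reduce

# MapReduce style: the runtime handles distribution; the programmer supplies
# a map function and a reduce function. (multiprocessing stands in for a
# cluster framework here - this sketch is illustrative, not from the paper.)
def mapper(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(mapper, chunks)       # "map" phase
    total = reduce(lambda a, b: a + b, partials)  # "reduce" phase
    print(total)

# By contrast, in MPI (message passing) the programmer would explicitly
# scatter chunks to ranks, compute local sums, and call an explicit
# collective such as comm.reduce(local, op=MPI.SUM, root=0).
```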

  6. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    USGS Publications Warehouse

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyberinfrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Markov chain Monte Carlo simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.

  7. Broadcasting collective operation contributions throughout a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
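
    The two-stage pattern described here, intra-node exchange followed by inter-node exchange, can be sketched with communicator splitting in mpi4py. The node grouping below is faked by rank arithmetic and is an assumption of this sketch, not the patented mechanism.

```python
from mpi4py import MPI

# Two-stage contribution exchange in the spirit of the description above:
# first share within a node, then across nodes. The "node id" is faked by
# grouping consecutive ranks; real code would derive it from the hostname.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
RANKS_PER_NODE = 2                                     # assumption for this sketch
node_id = rank // RANKS_PER_NODE

intra = comm.Split(color=node_id, key=rank)            # processors on one "node"
inter = comm.Split(color=intra.Get_rank(), key=rank)   # one peer per node

contribution = rank + 1                                # toy collective contribution
node_contribs = intra.allgather(contribution)          # intra-node communications
all_contribs = inter.allgather(node_contribs)          # inter-node communications
print(rank, all_contribs)
```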

  8. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and then easily moves it from the parallel computer on which it was created to another parallel computer ("parallel computer" here also includes a heterogeneous collection of networked computers). APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.

  9. Powering the programmed nanostructure and function of gold nanoparticles with catenated DNA machines

    NASA Astrophysics Data System (ADS)

    Elbaz, Johann; Cecconello, Alessandro; Fan, Zhiyuan; Govorov, Alexander O.; Willner, Itamar

    2013-06-01

    DNA nanotechnology is a rapidly developing research area in nanoscience. It includes the development of DNA machines, the tailoring of DNA nanostructures, the application of DNA nanostructures to computing, and more. Different DNA machines have been reported in the past, and DNA-guided assembly of nanoparticles represents an active research effort in DNA nanotechnology. Several DNA-dictated nanoparticle structures have been reported, including tetrahedral, triangular and linear nanoengineered nanoparticle structures; however, the programmed, dynamic, reversible switching of nanoparticle structures and, particularly, the dictated switchable functions emerging from the nanostructures have been missing elements in DNA nanotechnology. Here we introduce DNA catenane systems (interlocked DNA rings) as molecular DNA machines for the programmed, reversible and switchable arrangement of different-sized gold nanoparticles. We further demonstrate that the machine-powered gold nanoparticle structures reveal unique emerging switchable spectroscopic features, such as plasmonic coupling or surface-enhanced fluorescence.

  10. Performance of parallel computation using CUDA for solving the one-dimensional elasticity equations

    NASA Astrophysics Data System (ADS)

    Darmawan, J. B. B.; Mungkasi, S.

    2017-01-01

    In this paper, we investigate the performance of parallel computation in solving the one-dimensional elasticity equations. Elasticity equations are widely used in engineering science, and solving them quickly and efficiently is desirable. We therefore propose the use of parallel computation, implemented with NVIDIA's CUDA. Our results show that parallel computation using CUDA offers a substantial advantage when the computation is of large scale.
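
    The paper's solver is not reproduced here, but the following hedged Python/Numba sketch shows the kind of elementwise GPU kernel to which a finite-difference update reduces; the stencil formula is generic, not the authors' scheme, and running it requires an NVIDIA GPU with Numba installed.

```python
import numpy as np
from numba import cuda

# Generic 1D stencil kernel (illustrative stand-in for an elasticity update;
# not the authors' scheme). Each CUDA thread updates one grid point.
@cuda.jit
def stencil_step(u, u_new, c):
    i = cuda.grid(1)                      # global thread index
    if 0 < i < u.size - 1:
        u_new[i] = u[i] + c * (u[i - 1] - 2.0 * u[i] + u[i + 1])

n = 1 << 20
u = np.sin(np.linspace(0, 2 * np.pi, n)).astype(np.float64)
d_u = cuda.to_device(u)
d_new = cuda.device_array_like(d_u)

threads = 256
blocks = (n + threads - 1) // threads
for _ in range(100):
    stencil_step[blocks, threads](d_u, d_new, 0.1)
    d_u, d_new = d_new, d_u               # ping-pong buffers on the device

result = d_u.copy_to_host()
```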

  11. Carbon Nanotubes for Space Applications

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya

    2000-01-01

    The potential of nanotube technology for NASA missions is significant and is properly recognized by NASA management. Ames has done much pioneering research in the last five years on carbon nanotube growth, characterization, atomic force microscopy, sensor development and computational nanotechnology. NASA Johnson Space Center has focused on laser ablation production of nanotubes and composites development. These in-house efforts, along with strategic collaboration with academia and industry, are geared towards meeting the agency's mission requirements. This viewgraph presentation (including an explanation for each slide) outlines the research focus for Ames nanotechnology, including details on carbon nanotubes' properties, applications, and synthesis.

  12. Increasing processor utilization during parallel computation rundown

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1986-01-01

    Some parallel processing environments provide for asynchronous execution and completion of general purpose parallel computations from a single computational phase. When all the computations from such a phase are complete, a new parallel computational phase is begun. Depending upon the granularity of the parallel computations to be performed, there may be a shortage of available work as a particular computational phase draws to a close (computational rundown). This can result in the waste of computing resources and the delay of the overall problem. In many practical instances, strict sequential ordering of phases of parallel computation is not totally required. In such cases, the beginning of one phase can be correctly computed before the end of a previous phase is completed. This allows additional work to be generated somewhat earlier to keep computing resources busy during each computational rundown. The conditions under which this can occur are identified and the frequency of occurrence of such overlapping in an actual parallel Navier-Stokes code is reported. A language construct is suggested and possible control strategies for the management of such computational phase overlapping are discussed.
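
    A modern way to express the overlap this abstract argues for is to start phase-(n+1) work as soon as individual phase-n tasks finish, rather than barrier-waiting. The sketch below uses Python's concurrent.futures to illustrate the idea in miniature; the task bodies are placeholders, not the Navier-Stokes code discussed above.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time, random

def phase_task(phase, i):     # placeholder work item
    time.sleep(random.uniform(0.01, 0.05))
    return (phase, i)

with ThreadPoolExecutor(max_workers=4) as pool:
    phase1 = [pool.submit(phase_task, 1, i) for i in range(8)]
    phase2 = []
    # Instead of a barrier, feed phase-2 work in as each phase-1 task
    # completes - keeping workers busy during the phase-1 rundown.
    for fut in as_completed(phase1):
        _, i = fut.result()
        phase2.append(pool.submit(phase_task, 2, i))
    print([f.result() for f in phase2])
```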

  13. Broadcasting a message in a parallel computer

    DOEpatents

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer, and one compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
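
    On a 2D mesh, one valid Hamiltonian path is a simple boustrophedon ("snake") ordering, and broadcasting along it is hop-by-hop forwarding. The sketch below is a toy model of that idea in Python, not the patented implementation.

```python
# Toy model: a Hamiltonian path over a 2D mesh of compute nodes via
# boustrophedon ("snake") ordering, with a message forwarded hop by hop
# from the logical root. Illustrative only - not the patented algorithm.
def snake_path(rows, cols):
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path  # visits every node once; consecutive nodes are mesh-adjacent

def broadcast(rows, cols, message):
    inbox = {}
    path = snake_path(rows, cols)
    inbox[path[0]] = message        # logical root originates the message
    for src, dst in zip(path, path[1:]):
        inbox[dst] = inbox[src]     # each node forwards to its successor
    return inbox

print(broadcast(3, 4, "hello"))     # all 12 nodes receive the message
```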

  14. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN for shared-memory parallel computers, is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  15. Application of modelling and nanotechnology-based approaches: The emergence of breakthroughs in theranostics of central nervous system disorders.

    PubMed

    Hassanzadeh, Parichehr; Atyabi, Fatemeh; Dinarvand, Rassoul

    2017-08-01

    The limited efficiency of current treatment options against central nervous system (CNS) disorders has created increasing demand for the development of novel theranostic strategies. Enormous research efforts in nanotechnology have led to the production of highly advanced nanodevices and biomaterials in a variety of geometries and configurations for the targeted delivery of genes, drugs, or growth factors across the blood-brain barrier. Meanwhile, the richness and reliability of data, drug delivery methods, therapeutic effects or potential toxicity of nanoparticles, the occurrence of unexpected phenomena due to the polydisperse or polymorphic nature of nanomaterials, and personalized theranostics have remained challenging issues. In this respect, computational modelling has emerged as a powerful tool for the rational design of nanoparticles with optimized characteristics, including selectivity, improved bioactivity, and reduced toxicity, which might lead to the effective delivery of therapeutic agents. High-performance simulation techniques, by shedding more light on the dynamical behaviour of neural networks and the pathomechanisms of CNS disorders, may provide imminent breakthroughs in nanomedicine. The present review highlights the importance of integrating nanotechnology-based approaches with computational techniques for the targeted delivery of theranostics to the CNS.

  16. Bio-inspired nano-sensor-enhanced CNN visual computer.

    PubMed

    Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I

    2004-05-01

    Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.
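
    For context, the standard Chua-Yang cellular neural/nonlinear network dynamics the authors build on can be written as follows; this is the textbook form, stated here as background rather than taken from the article.

```latex
% Standard Chua-Yang CNN cell dynamics (textbook form). Each cell (i,j)
% couples only to its neighbourhood N_r(i,j) through feedback template A
% and input template B, with bias z:
\dot{x}_{ij} = -x_{ij}
  + \sum_{(k,l)\in N_r(i,j)} A_{kl}\, y_{kl}
  + \sum_{(k,l)\in N_r(i,j)} B_{kl}\, u_{kl} + z,
\qquad
y_{ij} = \tfrac{1}{2}\left(\,|x_{ij}+1| - |x_{ij}-1|\,\right).
```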

  17. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate the writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
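
    The pre-computed-paths idea, computing a communication pattern once and replaying it every step, can be illustrated independently of aCe itself. The NumPy sketch below precomputes neighbour index arrays once and reuses them in every iteration; it is purely illustrative and is not aCe code.

```python
import numpy as np

# Precompute the "communication path" (here, periodic neighbour indices)
# once, then reuse it every step - the idea behind aCe's pre-computed
# paths, shown in plain NumPy rather than in aCe itself.
n = 1024
left = np.roll(np.arange(n), 1)     # computed one time only
right = np.roll(np.arange(n), -1)

x = np.random.rand(n)
for _ in range(1000):
    x = 0.25 * x[left] + 0.5 * x + 0.25 * x[right]  # replay the path each step
```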

  18. Parallel computing works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  19. Distributing an executable job load file to compute nodes in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M.

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
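
    The participation-and-route logic reads like a bottom-up tree reduction. The sketch below models it with a dictionary-based tree in Python; the node names, tree shape, and route representation are invented for illustration and are not the patented mechanism.

```python
# Toy model of building a "class route" over a tree of compute nodes:
# a node is on the route if it participates in the job or if any
# descendant does. Tree shape and names are invented for illustration.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
participating = {3, 4, 6}          # leaves actually running the job

def on_route(node):
    return node in participating or any(on_route(c) for c in children[node])

class_route = {n for n in children if on_route(n)}
print(sorted(class_route))         # [0, 1, 2, 3, 4, 6]: the links from the
                                   # root to every participant; the load file
                                   # is broadcast only along these links
```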

  20. Berkeley Lab - Science Video Glossary

    Science.gov Websites


  1. A 21st Century Science, Technology, and Innovation Strategy for Americas National Security

    DTIC Science & Technology

    2016-05-01

    areas. Advanced Computing and Communications: the exponential growth of the digital economy, driven by ubiquitous computing and communication... weapons-focused R&D, many of the capabilities being developed have significant dual-use potential. Digital connectivity, for instance, brings... scale than traditional recombinant DNA techniques, and to share these designs digitally. Nanotechnology promises the ability to engineer entirely

  2. Nanotechnology: what is it and why is small so big?

    PubMed

    Leary, James F

    2010-10-01

    SIZE matters… the size of the scalpel determines the precision of the surgery. Nanotechnology affords us the chance to construct nanotools that are on the size scale of molecules, allowing us to treat each cell of the human body as a patient. Nanomedicine will allow for eradication of disease at the single-cell level. Since nanotools are self-assembling, nanomedicine has the potential to perform parallel processing medicine on a massive scale. These nanotools can be made of biocompatible and biodegradable nanomaterials. They can be "smart" in that they can use sophisticated targeting strategies, which can perform error checking to prevent harm if even a very small fraction of them are mistargeted. Built-in molecular biosensors can provide controlled drug delivery with feedback control for individual cell dosing. If designed to repair existing cells rather than to just destroy diseased cells, these nanomedical devices can perform in-situ regenerative medicine, programming cells along less dangerous cell pathways to prevent tissues and organs from being destroyed by the treatments and thus providing an attractive alternative to allogeneic organ transplants. Nanomedical tools, while tiny in size, can have a huge impact on medicine and health care. Earlier and more sensitive diagnosis will lead to presymptomatic diagnosis and treatment of disease before permanent damage occurs to tissues and organs. This should result in the delivery of better medicine at lower costs with better outcomes. Lastly, and importantly, some of the first uses of nanotechnology and nanomedicine are occurring in the field of ophthalmology. Some of the potential benefits of nanotechnology for future treatment of retinopathies and optic nerve damage are discussed at the end of this paper.

  3. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

    In order to improve the computation speed of matching pursuit decomposition of seismic data, a parallel matching pursuit algorithm is designed in this paper. In every iteration we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and assign them evenly to the compute nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm exploits the advantages of parallel computing to significantly improve the computation speed of the matching pursuit decomposition, and it also has good expandability. Moreover, having every compute node search for only one optimal Morlet wavelet per iteration is the most efficient implementation.
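
    A serial miniature of the inner search can make the scheme concrete. In the NumPy sketch below, each "node" is just a slice of the candidate list, scoring candidate Morlet wavelets against the residual by inner product; the parameters are illustrative, and this is not the authors' MPI code.

```python
import numpy as np

# Miniature matching-pursuit step: score Morlet-wavelet candidates against
# the residual and keep the best match. In the paper's scheme the candidate
# peaks would be divided among MPI compute nodes; here the "nodes" are just
# slices of the candidate list. Parameters are illustrative.
def morlet(t, t0, f, sigma):
    return np.exp(-((t - t0) ** 2) / (2 * sigma**2)) * np.cos(2 * np.pi * f * (t - t0))

t = np.linspace(0, 1, 1000)
signal = morlet(t, 0.4, 25.0, 0.03) + 0.5 * morlet(t, 0.7, 40.0, 0.02)
residual = signal.copy()

candidates = [(t0, f, s) for t0 in np.linspace(0, 1, 21)
                         for f in (20.0, 25.0, 40.0)
                         for s in (0.02, 0.03)]
node_slices = [candidates[i::4] for i in range(4)]   # 4 "compute nodes"

best = max((abs(np.dot(residual, morlet(t, *c))), c)
           for part in node_slices for c in part)    # each node scores its share
print("best wavelet (t0, f, sigma):", best[1])
```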

  4. A NANO enhancement to Moore's law

    NASA Astrophysics Data System (ADS)

    Wu, Jerry; Shen, Yin-Lin; Reinhardt, Kitt; Szu, Harold

    2012-06-01

    Over the past 46 years, since Gordon Moore of Intel observed in 1965 an exponential doubling in the number of transistors every 18 months, scaling has proceeded through the size reduction of individual transistor components. In this paper, we explore the impact of nanotechnology upon the Law. Since we cannot break the atomic size barrier, there is a fundamental size limit at the atomic, or nanotechnology, scale. This means no more simple 18-month doubling as in Moore's Law; instead, other forms of transistor doubling may happen at a different slope and in new directions. We are particularly interested in the nano enhancement area. (i) 3D: if progress in shrinking the in-plane (2D) dimensions slows down, vertical integration (3D) can help increase the areal transistor density and keep us on a modified Moore's Law curve that includes the third dimension. As devices continue to shrink further into the 20 to 30 nm range, the consideration of thermal properties and transport in such nanoscale devices becomes increasingly important. (ii) Carbon computing: beyond traditional transistors, other types of transistor materials are being rapidly developed in laboratories worldwide, e.g. IBM spintronics bandgap materials and Samsung nano-storage and HD display nanotechnology, which are modifying the classical Moore's Law. We consider the overall limitations of phonon engineering, the fundamental information unit ('qubyte') in quantum computing, nano/micro electromechanical systems (NEMS/MEMS), carbon nanotubes (CNTs), single-layer graphene, single-strip nano-ribbons, etc., and their varying degrees of fabrication maturity for computing and information processing applications.
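
    As a quick sanity check of the numbers in this abstract, the popular form of the Law compounds as follows; this is a worked example using the abstract's 18-month doubling period, not a figure from the paper.

```latex
% Transistor count under an 18-month doubling period (the abstract's figure):
N(t) = N_0 \, 2^{\,t/T}, \qquad T = 1.5\ \text{years}.
% Over the 46 years cited (1965-2011):
\frac{N(46)}{N_0} = 2^{\,46/1.5} \approx 2^{30.7} \approx 1.7\times10^{9},
% i.e. roughly a billion-fold increase in device count per chip.
```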

  5. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
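
    The rerouting idea, detect a bad link on one network and route that traffic through the second, independent network, can be modelled with two edge sets and a breadth-first search; the graphs below are invented for illustration and are not the patented routing logic.

```python
from collections import deque

# Two independent networks over the same four nodes (edges invented for
# illustration). If the link used on network A is defective, route the
# same traffic over network B instead - the essence of the fault handling.
net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
net_b = {0: [2], 1: [3], 2: [0, 3], 3: [1, 2]}
defective = {(1, 2), (2, 1)}              # bad link detected on network A

def bfs_path(net, src, dst, bad=frozenset()):
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                      # reconstruct the route found
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in net[u]:
            if (u, v) not in bad and v not in prev:
                prev[v] = u
                queue.append(v)
    return None

print(bfs_path(net_a, 0, 3, bad=defective))  # None: network A is cut
print(bfs_path(net_b, 0, 3))                 # [0, 2, 3]: detour via network B
```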

  6. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Data communications in a parallel active messaging interface ('PAMI') of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specification of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications; receiving a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from an origin endpoint to a target endpoint; and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  7. Synergistic Interplay of Medicinal Chemistry and Formulation Strategies in Nanotechnology - From Drug Discovery to Nanocarrier Design and Development.

    PubMed

    Sunoqrot, Suhair; Hamed, Rania; Abdel-Halim, Heba; Tarawneh, Ola

    2017-01-01

    Over the last few decades, nanotechnology has given rise to promising new therapies and diagnostic tools for a wide range of diseases, especially cancer. The unique properties of nanocarriers such as liposomes, polymeric nanoparticles, micelles, and bioconjugates have mainly been exploited to enhance drug solubility, dissolution, and bioavailability. The most important advantage offered by nanotechnology is the ability to specifically target organs, tissues, and individual cells, which ultimately reduces the systemic side effects and improves the therapeutic index of drug molecules. The contribution of medicinal chemistry to nanotechnology is evident in the abundance of new active molecules that are being discovered but are faced with tremendous delivery challenges by conventional formulation strategies. Additionally, medicinal chemistry plays a crucial role in all the steps involved in the preparation of nanocarriers, where structure-activity relationships of the drug molecule as well as the nanocarrier are harnessed to enhance the design, efficacy, and safety of nanoformulations. The aim of this review is to provide an overview of the contributions of medicinal chemistry to nanotechnology, from supplying drug candidates and inspiring high-throughput nanocarrier design strategies, to structure-activity relationship elucidation and construction of computational models for better understanding of nanocarrier physicochemical properties and biological behavior. These two fields are undoubtedly interconnected and we will continue to see the fruits of that communion for years to come.

  8. Utilizing Societal Concerns in Nanomaterials R and D Decision-Making

    NASA Astrophysics Data System (ADS)

    Beaver, E. R.

    2007-12-01

    Nanotechnology has demonstrated the potential to transform many aspects of human existence. From extraordinary ways to deliver pharmaceuticals to the tiniest of electronic circuits to the finest of filters, technologies made with nanomaterials can shrink the computer chip or remove contaminants from water. Nanotechnology systems can deliver drugs to targets inside individual cells or serve as components in sensors that detect chemical and biological agents. As nanomaterials are commercialized, there are potential consequences, both positive and negative. At each stage of development of nanotechnology, it is important to assess both the potential benefits and the potential costs. Scientists need more tools to simultaneously develop new technologies and consider whether their methods or materials might contribute to health or environmental hazards down the line. Early information on potential risks guides the development of these materials as they move toward industrial production, and some indications of problems have already turned up. If thorough assessments are not done, the potential exists to extend projects (and their related expense) long past the point where "fatal flaws" could be identified. As nanotechnology graduates from infancy, it is becoming possible to predict the course and impacts of the technology beyond the next several years. There are well-founded concerns with nanotechnology, especially relating to human health, the environment, and public perception in general. This paper will include a matrix of potential effects, means of ranking or scoring them, and examples of how the technique can be applied to current research and development.

  9. Systematic review: the applications of nanotechnology in gastroenterology.

    PubMed

    Brakmane, G; Winslet, M; Seifalian, A M

    2012-08-01

    Over the past 30 years, nanotechnology has evolved dramatically. It has captured the interest of a variety of fields, from computing and electronics to biology and medicine. Recent discoveries have made invaluable changes to future prospects in nanomedicine and introduced the concept of theranostics. This term describes a patient-specific 'two in one' modality comprising diagnostic and therapeutic tools. Not only has nanotechnology shown great impact on improvements in drug delivery and imaging techniques, but there have also been several ground-breaking discoveries in regenerative medicine. Gastroenterology invites a multidisciplinary approach owing to the high complexity of the gastrointestinal (GI) system, involving physicians, surgeons, radiologists, pharmacologists and many more. In this article, we concentrate on current developments in nano-gastroenterology. A literature search was performed using the Web of Science and PubMed search engines with the terms nanotechnology, nanomedicine and gastroenterology, concentrating on developments since 2005. We describe original and innovative approaches in gastrointestinal drug delivery, inflammatory disease and cancer-targeted treatments. We review advances in GI imaging using nanoparticles as fluorescent contrast agents, and their potential for site-specific targeting. This review also depicts various approaches and novel discoveries in GI regenerative medicine using nanomaterials for scaffold designs and induced pluripotent stem cells as a cell source. Developments in nanotechnology have opened a new range of possibilities to help our patients, including novel drug delivery vehicles, diagnostic tools for early and targeted disease detection, and nanocomposite materials for tissue constructs to overcome cosmetic or physical disabilities.

  10. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, and various parallel computers such as symmetric multiprocessors, workstation clusters, and massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques such as shared-memory parallelism, message passing and data parallelism; and numerical libraries.
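
    As a concrete illustration of the message-passing style surveyed above, the following minimal sketch (in Python, assuming the mpi4py package; the toy sum and file name are ours, not the paper's) has each process compute a partial result that is then combined at a root process:

      # Minimal message-passing sketch; run e.g. with: mpiexec -n 4 python sum.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()   # id of this process
      size = comm.Get_size()   # total number of processes

      # Data parallelism: each process sums a disjoint slice of the range,
      # then message passing combines the partial sums at rank 0.
      local = sum(range(rank, 1000, size))
      total = comm.reduce(local, op=MPI.SUM, root=0)
      if rank == 0:
          print("total =", total)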

  11. Ultimate computing. Biomolecular consciousness and nano Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameroff, S.R.

    1987-01-01

    The book advances the premise that the cytoskeleton is the cell's nervous system, the biological controller/computer. If indeed cytoskeletal dynamics at the nanoscale (billionth of a meter, billionth of a second) are the texture of intracellular information processing, emerging "NanoTechnologies" (scanning tunneling microscopy, Feynman machines, von Neumann replicators, etc.) should enable direct monitoring, decoding and interfacing between biological and technological information devices. This in turn could result in important biomedical applications and perhaps a merger of mind and machine: Ultimate Computing.

  12. Issues and concerns in nanotech product development and its commercialization.

    PubMed

    Kaur, Indu Pal; Kakkar, Vandita; Deol, Parneet Kaur; Yadav, Monika; Singh, Mandeep; Sharma, Ikksheta

    2014-11-10

    The revolutionary and ubiquitous nature of nanotechnology has fetched it considerable attention in the past few decades. Although its enablement and application in various sectors, including pharmaceutical drug development, are increasing with the enormous government-aided funding for nanotechnology-based products, the parallel commercialization of these systems has not picked up a similar impetus. The technology nevertheless addresses the unmet needs of the pharmaceutical industry, including the reformulation of drugs to improve their solubility, bioavailability or toxicity profiles, as observed from the wide array of high-quality research publications appearing in various scientific journals and magazines. Based on our decade-long experience in the field of nanotech-based drug delivery systems and an extensive literature survey, we perceive that the major hiccups in the marketing of these nanotechnology products can be categorized as 1) inadequate regulatory framework; 2) lack of support and acceptance by the public, practicing physicians, and industry; 3) developmental considerations like scalability, reproducibility, characterization, quality control, and suitable translation; 4) toxicological issues and safety profiles; 5) lack of available multidisciplinary platforms; and 6) poor intellectual property protection. The present review dwells on these issues, elaborating the trends followed by the industry, the regulatory role of the USFDA and its implications, and the challenges set forth for a successful translation of these products from the lab and different clinical phases to the market. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Advanced Contrast Agents for Multimodal Biomedical Imaging Based on Nanotechnology.

    PubMed

    Calle, Daniel; Ballesteros, Paloma; Cerdán, Sebastián

    2018-01-01

    Clinical imaging modalities have reached a prominent role in medical diagnosis and patient management in the last decades. Different imaging methodologies such as Positron Emission Tomography, Single Photon Emission Tomography, X-rays, and Magnetic Resonance Imaging are in continuous evolution to satisfy the increasing demands of current medical diagnosis. Progress in these methodologies has been favored by the parallel development of increasingly powerful contrast agents. These are molecules that enhance the intrinsic contrast of the images in the tissues where they accumulate, revealing noninvasively the presence of characteristic molecular targets or differential physiopathological microenvironments. The contrast agent field is currently moving to improve the performance of these molecules by incorporating the advantages that modern nanotechnology offers. These include, mainly, the possibility of combining imaging and therapeutic capabilities on the same theranostic platform and of improving targeting efficiency in vivo by molecular engineering of the nanostructures. In this review, we provide an introduction to multimodal imaging methods in biomedicine, the sub-nanometric imaging agents previously used, and the development of advanced multimodal and theranostic imaging agents based on nanotechnology. We conclude by providing some illustrative examples from our own laboratories, including recent progress in theranostic formulations of magnetoliposomes containing ω-3 polyunsaturated fatty acids to treat inflammatory diseases, and the use of stealth liposomes engineered with a pH-sensitive nanovalve to release their cargo specifically in the acidic extracellular pH microenvironment of tumors.

  14. PREFACE: TNT 2004: Trends in Nanotechnology

    NASA Astrophysics Data System (ADS)

    Correia, Antonio; Serena, Pedro A.; Saenz, Juan Jose; Welland, Mark; Reifenberger, Ron

    2005-05-01

    This special issue of Nanotechnology presents representative contributions describing the main topics covered at the fifth `Trends in Nanotechnology' (TNT2004) international conference, held in Segovia, Spain, 13-17 September 2004. During the past few years many international or regional conferences have emerged in response to the growing awareness of the importance of nanotechnology as a key issue for the future of scientific and technological development. Among these, the conference series `Trends in Nanotechnology' (Toledo, Spain, 2000; Segovia, Spain, 2001; Santiago de Compostela, Spain, 2002; Salamanca, Spain, 2003; and Segovia, Spain, 2004) has become one of the most important meeting points in the nanotechnology field: it provides fresh ideas, brings together well-known speakers, and promotes a suitable environment for discussions, exchanging ideas, and enhancing scientific and personal relations among participants. TNT2004 was organized in a similar way to the four previous TNT conferences, with an impressive scientific programme, without parallel sessions, covering a wide spectrum of nanotechnology research. In 2004, more than 370 scientists worldwide attended this event and contributed more than 80 talks, 236 posters, and stimulating discussions about their most recent research. The aim of the conference was to focus on the applications of nanotechnology and to bring together, in a scientific forum, various worldwide groups belonging to industry, universities and government institutions. TNT2004 was particularly effective at transmitting information and establishing contacts among workers in this field. Graduate students attending such conferences understand the importance of interdisciplinary skills in facilitating their future lines of research. Sixty-four graduate students received a grant (from NASA, ONRIFO, IRC, iNANO, SME, NSERC/CRSNG, EU PHANTOMS Network or TNT) allowing them to present their work. During this event, 22 prizes for the best posters were awarded. We would like to thank all the participants for their assistance, as well as the authors for their written contributions. TNT2004 is the successful consequence of a coordinated effort from several institutions: PHANTOMS Foundation, Universidad Autónoma de Madrid, Consejo Superior de Investigaciones Científicas, Universidad Carlos III de Madrid, Universidad Complutense de Madrid, Universidad SEK, Universidad de Salamanca, CMP Científica, University of Cambridge/IRC, NIMS, Nanotechnology Research Institute (NRI), University of Purdue, Georgia Institute of Technology and IEEE. In addition, we are indebted to the following institutions, companies and government agencies for their help and financial support: PHANTOMS Network/European Commission (IST/FET Program), NASA, Air Force Office of Scientific Research, Motorola, IoP, iNANO, NSERC/CRSNG (Nano Innovation Platform), Junta de Castilla y León, Donostia International Physics Center, Sociedad de Microscopía Española (SME), Nanonet, Wiley-VCH, Raith GmbH, The European Office of Aerospace Research and Development (EOARD), The Office of Naval Research International Field Office (ONRIFO), World Scientific and Imperial College Press, Ministerio de Educación y Ciencia, Parque Científico de Barcelona and Parque Científico de Madrid. We would also like to thank the following companies for their participation: NanoTec, Raith GmbH, Scientec, BFI Optilas, Schaefer, Interface Ltd, World Scientific and Imperial College Press and Institute of Physics Publishing. 
We invite readers of this special issue to join us at the next `Trends in Nanotechnology' conference, which will take place at Oviedo (Spain) in 2005, (http://www.tnt2005.org).

  4. The new ethical trilemma: Security, privacy and transparency

    NASA Astrophysics Data System (ADS)

    Ganascia, Jean-Gabriel

    2011-09-01

    Numerous ethical and societal issues are related to the development of nanotechnology. Among them, the risk to privacy has long been discussed. Some people say that technology is neutral and does not really change the nature of the problems, which are mainly political, while others state that its contemporary developments considerably amplify them; some even assert that it will make privacy protection obsolete. This article discusses those different positions by reference to the classical Panopticon, an architecture for surveillance that characterizes the total absence of privacy. It envisages the possible evolutions of the Panopticon due to the development of nanotechnologies. It shows that the influence of nanotechnology on privacy concerns cannot be dissociated from the influence of computers and biotechnologies, i.e. from what is currently called the NBIC convergence. Lastly, it concludes with the new ethical trade-off that has to be made between three contradictory requirements: security, transparency and privacy.

  5. The 2nd Symposium on the Frontiers of Massively Parallel Computations

    NASA Technical Reports Server (NTRS)

    Mills, Ronnie (Editor)

    1988-01-01

    Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.

  6. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.
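
    The mechanism can be illustrated with a minimal sketch (Python, with hypothetical names of our own; the abstract does not disclose the actual PAMI implementation): statistics are accumulated per call site and consulted to select a communications algorithm.

      # Hedged sketch of the idea only, not the PAMI API: per-call-site
      # statistics drive the choice of data communications algorithm.
      call_site_stats = {}  # call-site id -> list of observed message sizes

      def record(call_site, nbytes):
          call_site_stats.setdefault(call_site, []).append(nbytes)

      def choose_algorithm(call_site, eager_limit=4096):
          # Hypothetical policy: small messages on average -> eager protocol,
          # large ones -> rendezvous protocol.
          sizes = call_site_stats.get(call_site, [])
          if not sizes:
              return "eager"
          return "eager" if sum(sizes) / len(sizes) <= eager_limit else "rendezvous"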

  7. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    NASA Technical Reports Server (NTRS)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code, even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, of which only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
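
    Why the Jacobian is embarrassingly parallel is easy to see in code: each forward-difference column is independent of the others. The sketch below is a Python analogue only (the paper used MATLAB's spmd with composite objects; the process pool, step size, and example function here are our assumptions):

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def f(x):  # placeholder nonlinear system, for illustration only
          return np.array([x[0]**2 + x[1], np.sin(x[0]) - x[1]**2])

      def column(args):
          x, fx, j, h = args          # one forward-difference column of J
          xp = x.copy(); xp[j] += h
          return (f(xp) - fx) / h

      def jacobian(x, h=1e-6):
          fx = f(x)                   # evaluate once, reuse for every column
          tasks = [(x, fx, j, h) for j in range(x.size)]
          with ProcessPoolExecutor() as pool:   # columns computed in parallel
              cols = list(pool.map(column, tasks))
          return np.column_stack(cols)

      if __name__ == "__main__":
          print(jacobian(np.array([1.0, 2.0])))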

  8. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
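
    The speedup metric used here is simply the ratio of serial to parallel elapsed time, and efficiency normalizes that ratio by the number of processors. A small sketch (Python; the example timings are hypothetical):

      def speedup(t_serial, t_parallel):
          return t_serial / t_parallel               # S(p) = T(1) / T(p)

      def efficiency(t_serial, t_parallel, p):
          return speedup(t_serial, t_parallel) / p   # E(p) = S(p) / p

      # e.g. a parallel sort taking 12 s serially and 4 s on 8 workstations:
      print(speedup(12, 4), efficiency(12, 4, 8))    # 3.0, 0.375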

  9. Nanotechnology for the detection and therapy of stroke.

    PubMed

    Kyle, Stuart; Saha, Sikha

    2014-11-01

    Over the years, nanotechnology has greatly developed, moving from careful design strategies and synthesis of novel nanostructures to producing them for specific medical and biological applications. The use of nanotechnology in diagnostics, drug delivery, and tissue engineering holds great promise for the treatment of stroke in the future. Nanoparticles are employed to monitor grafted cells upon implantation, or to enhance the imagery of the tissue, which is coupled with a noninvasive imaging modality such as magnetic resonance imaging, computed axial tomography or positron emission tomography scan. Contrast imaging agents used can range from iron oxide, perfluorocarbon, cerium oxide or platinum nanoparticles to quantum dots. The use of nanomaterial scaffolds for neuroregeneration is another area of nanomedicine, which involves the creation of an extracellular matrix mimic that not only serves as a structural support but promotes neuronal growth, inhibits glial differentiation, and controls hemostasis. Promisingly, carbon nanotubes can act as scaffolds for stem cell therapy and functionalizing these scaffolds may enhance their therapeutic potential for treatment of stroke. This Progress Report highlights the recent developments in nanotechnology for the detection and therapy of stroke. Recent advances in the use of nanomaterials as tissue engineering scaffolds for neuroregeneration will also be discussed. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  11. Responsible nanotechnology development

    NASA Astrophysics Data System (ADS)

    Forloni, Gianluigi

    2012-08-01

    Nanotechnologies have an increasing relevance in our lives, and numerous products already on the market are associated with this new technology. Although the chemical constituents of nanomaterials are often well known, the properties at the nano level are completely different from those of the bulk materials. Independently of the specific application, knowledge in this field involves different types of scientific competence. The accountability of nanomaterial research implies the parallel development of innovative methodological approaches to assess and manage the risks associated with human and environmental exposure to nanomaterials over their entire life cycle: production, application, use and waste discharge. The vast number of applications and the enormous number of variables influencing the characteristics of nanomaterials make the elaboration of appropriate nanotoxicological protocols particularly difficult. According to official declarations, the public institutions in charge of the regulatory system are aware of the environmental, health and safety implications of nanotechnology, but the scientific information is insufficient to support appropriate mandatory rules. Public research programmes must play an important role in providing greater incentives and encouragement for nanotechnologies that support sustainable development, to avoid endangering humanity's well-being in the long term. The existing imbalance in funds allocated to nanotech research needs to be corrected so that impact assessment and minimization, and not only application, come high on the agenda. Research funding should treat the elimination of knowledge gaps as a priority instead of promoting technological application only. With the creation of a public register collecting nanomaterials and new applications, it is possible, starting from the information available, to initiate a sustainable route, allowing the gradual development of a rational and informed approach to nanotoxicology. The establishment of an effective strategy cannot ignore the distinction between different nanoparticles based on their use and the type of exposure to which we are subjected. Categorization is essential to orchestrate toxicological rules that are realistic and effective. The responsible development of nanotechnology requires a common effort by scientists, producers, stakeholders, and public institutions to develop appropriate programs to systematically approach the complex issue of nanotoxicology.

  12. A class of parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms, with topological variation, onto a two-dimensional processor array with nearest-neighbor connections, and, with cardinality variation, onto a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, offering significantly higher efficiency.

  13. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    NASA Astrophysics Data System (ADS)

    Bi, Xunqiang

    1997-12-01

    In this paper, the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputation are also discussed.
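
    A two-dimensional domain decomposition assigns each process a rectangular patch of the global grid. A minimal sketch of the index arithmetic (Python; the grid and process-grid sizes are illustrative, not the paper's):

      # Split an nx-by-ny grid over a px-by-py process grid; remainders are
      # spread so patch sizes differ by at most one point per dimension.
      def subdomain(rank, nx, ny, px, py):
          ix, iy = rank % px, rank // px       # process coordinates
          def bounds(n, parts, i):
              base, extra = divmod(n, parts)
              lo = i * base + min(i, extra)
              return lo, lo + base + (1 if i < extra else 0)
          return bounds(nx, px, ix), bounds(ny, py, iy)  # half-open ranges

      print(subdomain(5, 144, 96, 4, 2))   # the patch owned by rank 5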

  5. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  6. Parallel solution of sparse one-dimensional dynamic programming problems

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1989-01-01

    Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
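
    The serial character of such problems is visible in the recurrence itself: each entry depends on earlier entries, so the loop cannot be naively parallelized. An illustrative one-dimensional dynamic program in Python (a generic example of the problem class, not the authors' method):

      # d[i] = min over j < i of d[j] + cost(j, i): a serial dependence chain.
      def dp(cost, n):
          d = [0.0] * (n + 1)
          for i in range(1, n + 1):
              d[i] = min(d[j] + cost(j, i) for j in range(i))
          return d[n]

      print(dp(lambda j, i: (i - j) ** 2, 10))   # toy convex cost function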

  7. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.

  8. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    NASA Astrophysics Data System (ADS)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computing increases.
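
    The underlying pattern is coarse-grained: independent XSTAR runs are dealt out across MPI ranks. A hedged illustration in Python with mpi4py (MPI_XSTAR itself is C++; the parameter grid and run_model stub below are hypothetical):

      from mpi4py import MPI

      def run_model(params):
          print("computing run", params)   # stub for one photoionization run

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      grid = [{"log_xi": xi} for xi in (0.0, 1.0, 2.0, 3.0)]  # hypothetical
      for i in range(rank, len(grid), size):   # round-robin over the ranks
          run_model(grid[i])
      comm.Barrier()                           # wait for all runs to finish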

  9. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  10. A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU

    NASA Astrophysics Data System (ADS)

    Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha

    2018-03-01

    Since the Graphics Processing Unit (GPU) has strong floating-point computation ability and memory bandwidth for data parallelism, it has been widely used in areas of general computing such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of program compilation, brings great opportunities to CFD. There are three different modes for the parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite; to make full use of both, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, demonstrating that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow, but can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.

  11. Carbohydrate Nanotechnology: Hierarchical Assemblies and Information Processing from Oligosaccharide-Synthetic Lectin Host-Guest

    DTIC Science & Technology

    2014-09-17

    pyrrole Hk protons of the receptor. Additionally, a C–H…π interaction between the phenyl ring and H4 and two more H-bonds between the hydroxyl group of C3 and an amino He and pyrrole Hk proton of the receptor were observed. Side views b) parallel and c) perpendicular to the biphenyl...biphenyl base and pyrroles, suggesting a geometry where one molecule of β-Man is encaged by two molecules of 1, and the two molecules of 1 are in close

  12. Carbohydrate Nanotechnology: Hierarchical Assemblies and Information Processing with Oligosaccharide-Synthetic Lectin Host-Guest Systems

    DTIC Science & Technology

    2013-08-05

    pyrrole Hk protons of the receptor. Additionally, a C–H…π interaction between the phenyl ring and H4 and two more H-bonds between the hydroxyl group of...C3 and an amino He and pyrrole Hk proton of the receptor were observed. Side views b) parallel and c) perpendicular to the biphenyl linkage of the...located on the biphenyl base and pyrroles, suggesting a geometry where one molecule of β-Man is encaged by two molecules of 1, and the two molecules of

  13. Self-assembled three-dimensional chiral colloidal architecture

    NASA Astrophysics Data System (ADS)

    Ben Zion, Matan Yah; He, Xiaojin; Maass, Corinna C.; Sha, Ruojie; Seeman, Nadrian C.; Chaikin, Paul M.

    2017-11-01

    Although stereochemistry has been a central focus of the molecular sciences since Pasteur, its province has previously been restricted to the nanometric scale. We have programmed the self-assembly of micron-sized colloidal clusters with structural information stemming from a nanometric arrangement. This was done by combining DNA nanotechnology with colloidal science. Using the functional flexibility of DNA origami in conjunction with the structural rigidity of colloidal particles, we demonstrate the parallel self-assembly of three-dimensional microconstructs, evincing highly specific geometry that includes control over position, dihedral angles, and cluster chirality.

  14. Nanotube News

    ERIC Educational Resources Information Center

    Journal of College Science Teaching, 2005

    2005-01-01

    Smaller, faster computers, bullet-proof t-shirts, and itty-bitty robots--such are the promises of nanotechnology and the cylinder-shaped collection of carbon molecules known as nanotubes. But for these exciting ideas to become realities, scientists must understand how these miracle molecules perform under all sorts of conditions. This brief…

  15. Laboratory Directed Research & Development (LDRD)

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  16. Los Alamos Science Facilities

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  17. Payments to the Lab

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  18. Nuclear Deterrence and Stockpile Stewardship

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  19. Emerging Threats and Opportunities

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  20. Living in Los Alamos

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  1. Protecting Against Nuclear Threats

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  2. Ion Beam Materials Lab

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  3. Frontiers in Science Lectures

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  4. 70+ Years of Innovations

    Science.gov Websites


  5. Center for Nonlinear Studies

    Science.gov Websites


  6. Taking Care of our Trails

    Science.gov Websites


  7. What We Monitor & Why

    Science.gov Websites


  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M.

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
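
    The record above describes a tree-structured scheme: a node reports its uplink to its parent if it, or any descendant, participates in the job, and the union of reported links forms the class route used for the broadcast. A minimal sketch of that participation logic follows; the Node class, ranks, and tree shape are hypothetical illustrations, not the patented implementation.

      class Node:
          def __init__(self, rank, participating, children=()):
              self.rank = rank
              self.participating = participating
              self.children = list(children)

      def build_class_route(node, route):
          """Return True if this subtree participates; collect (parent, child) links."""
          subtree_active = node.participating
          for child in node.children:
              if build_class_route(child, route):
                  route.append((node.rank, child.rank))  # child reports its uplink
                  subtree_active = True
          return subtree_active

      # Example tree: root 0 with children 1 and 2; node 2 has participating child 3.
      root = Node(0, False, [Node(1, True), Node(2, False, [Node(3, True)])])
      route = []
      build_class_route(root, route)
      print(sorted(route))   # [(0, 1), (0, 2), (2, 3)] -- links that carry the load file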

  9. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM-2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  10. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement an iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph-analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
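
    The MLEM update itself is compact, which is what makes the porting exercise mostly a data-distribution problem. Below is a minimal serial numpy sketch of the iteration; the system matrix A, the data y, and the toy sizes are hypothetical, and the paper's Spark/GraphX version distributes exactly these matrix-vector products.

      import numpy as np

      def mlem(A, y, n_iter=200, eps=1e-12):
          """Minimal MLEM sketch: x <- x / (A^T 1) * A^T (y / (A x))."""
          m, n = A.shape
          x = np.ones(n)                      # uniform initial image
          sens = A.T @ np.ones(m) + eps       # sensitivity image A^T 1
          for _ in range(n_iter):
              ratio = y / (A @ x + eps)       # measured / forward-projected counts
              x *= (A.T @ ratio) / sens       # multiplicative update
          return x

      # Tiny toy problem: 3 detector bins, 2 voxels, noiseless consistent data.
      A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
      x_true = np.array([2.0, 4.0])
      print(np.round(mlem(A, A @ x_true), 2))   # approaches [2. 4.]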

  11. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement an iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph-analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.

  12. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable, and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, and some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models; this not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on the computational tools for genomic selection programs.
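
    Of the two approaches, running multiple chains is the simpler to parallelize because the chains are independent. A minimal sketch using Python's multiprocessing follows; the one-dimensional standard-normal target and random-walk proposal are toy assumptions, not the models of the study.

      import multiprocessing as mp
      import random, math

      def log_target(x):
          return -0.5 * x * x          # standard normal, up to a constant

      def run_chain(args):
          seed, n_steps = args
          rng = random.Random(seed)    # independent stream per chain
          x, samples = 0.0, []
          for _ in range(n_steps):
              prop = x + rng.gauss(0.0, 1.0)   # random-walk proposal
              if rng.random() < math.exp(min(0.0, log_target(prop) - log_target(x))):
                  x = prop                      # Metropolis accept
              samples.append(x)
          return samples

      if __name__ == "__main__":
          with mp.Pool(4) as pool:              # one independent chain per worker
              chains = pool.map(run_chain, [(seed, 10_000) for seed in range(4)])
          pooled = [x for chain in chains for x in chain[1000:]]   # drop burn-in
          print(sum(pooled) / len(pooled))      # near 0 for N(0, 1)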

  13. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable, and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, and some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models; this not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on the computational tools for genomic selection programs. PMID:23009363

  14. Parallel approach in RDF query processing

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek; Parenica, Jan

    2017-07-01

    The parallel approach is nowadays a very cheap way to increase computational power, owing to multithreaded computational units that have become a typical part of personal computers and notebooks and are widely available. This contribution deals with experiments on how the evaluation of a computationally complex inference algorithm over RDF data can be parallelized on graphics cards to decrease computation time.

  15. A comparative study of serial and parallel aeroelastic computations of wings

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1994-01-01

    A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.

  16. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  17. Tiny plastic lung mimics human pulmonary function

    Science.gov Websites


  18. Science and Innovation at Los Alamos

    Science.gov Websites


  19. Public Reading Room: Environmental Documents, Reports

    Science.gov Websites


  20. NanoElectronics and BioElectronics

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak

    2001-01-01

    This viewgraph presentation reviews the use of carbon nanotube electronics in bioelectronics. Included are a brief review of carbon nanotube manufacturing, the use of carbon nanotubes in atomic force microscopy (AFM), and computational nanotechnology, which allows designers to understand nanotube characteristics and serves as a design tool.

  1. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction to the state of the art in parallel computational capability is also presented. Time-marching strategies are developed for effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  2. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.

  3. A matter of accuracy. Nanobiochips in diagnostics and in research: ethical issues as value trade-offs.

    PubMed

    Le Roux, Ronan

    2015-04-01

    The paper deals with the introduction of nanotechnology in biochips. Based on interviews and theoretical reflections, it explores blind spots left by technology assessment and ethical investigations. These have focused on possible consequences of increased diffusability of a diagnostic device, neglecting both the context of research as well as increased accuracy, despite it being a more essential feature of nanobiochip projects. Also, rather than one of many parallel aspects (technical, legal and social) in innovation processes, ethics is considered here as a ubiquitous system of choices between sometimes antagonistic values. Thus, the paper investigates what is at stake when accuracy is balanced with other practical values in different contexts. Dramatic nanotechnological increase of accuracy in biochips can raise ethical issues, since it is at odds with other values such as diffusability and reliability. But those issues will not be as revolutionary as is often claimed: neither in diagnostics, because accuracy of measurements is not accuracy of diagnostics; nor in research, because a boost in measurement accuracy is not sufficient to overcome significance-chasing malpractices. The conclusion extends to methodological recommendations.

  4. Nanotechnology on duty in medical applications.

    PubMed

    Kubik, T; Bogunia-Kubik, K; Sugisaka, M

    2005-02-01

    At the beginning of the 21st century, fifty years after the discovery of the deoxyribonucleic acid (DNA) double-helix structure, the scientific world faces great progress in many disciplines of biological research, especially in the field of molecular biology and operations on nucleic acid molecules. Many molecular biology techniques have been implemented successfully in biology, biotechnology, medical science, diagnostics, and more. The introduction of the polymerase chain reaction (PCR) resulted in improving old, and designing new, laboratory devices for PCR amplification and analysis of amplified DNA fragments. In parallel to these efforts, the nature of DNA molecules and their construction have attracted many researchers. In addition, some studies concerning mimicking living systems, as well as developing and constructing artificial nanodevices such as biomolecular sensors and artificial cells, have been conducted. This review is focused on the potential of nanotechnology in health care and medicine, including the development of nanoparticles for diagnostic and screening purposes, the manufacture of unique drug delivery systems, antisense and gene therapy applications, and the enablement of tissue engineering, including the future of nanorobot construction.

  5. Social and ethical dimensions of nanoscale science and engineering research.

    PubMed

    Sweeney, Aldrin E

    2006-07-01

    Continuing advances in human ability to manipulate matter at the atomic and molecular levels (i.e. nanoscale science and engineering) offer many previously unimagined possibilities for scientific discovery and technological development. Paralleling these advances in the various science and engineering sub-disciplines is the increasing realization that a number of associated social, ethical, environmental, economic and legal dimensions also need to be explored. An important component of such exploration entails the identification and analysis of the ways in which current and prospective researchers in these fields conceptualize these dimensions of their work. Within the context of a National Science Foundation funded Research Experiences for Undergraduates (REU) program in nanomaterials processing and characterization at the University of Central Florida (2002-2004), here I present for discussion (i) details of a "nanotechnology ethics" seminar series developed specifically for students participating in the program, and (ii) an analysis of students' and participating research faculty's perspectives concerning social and ethical issues associated with nanotechnology research. I conclude with a brief discussion of implications presented by these issues for general scientific literacy and public science education policy.

  6. 75 FR 75707 - Request for Public Comment on the Draft National Nanotechnology Initiative Strategy for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-06

    ... Nanotechnology Initiative Strategy for Nanotechnology-Related Environmental, Health, and Safety Research AGENCY... the public regarding the draft National Nanotechnology Initiative (NNI) Strategy for Nanotechnology... confidential. Overview: The National Nanotechnology Initiative Strategy for Nanotechnology-Related...

  7. Performance Evaluation of Parallel Branch and Bound Search with the Intel iPSC (Intel Personal SuperComputer) Hypercube Computer.

    DTIC Science & Technology

    1986-12-01

    Excerpts: III. Analysis of Parallel Design; Parallel Abstract Data Types; Abstract Data Type; Parallel ADT; Data-Structure Design; Object-Oriented Design.

  8. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    The Hypercluster computer system includes multiple digital processors whose operation is coordinated through specialized software. It is configurable according to various parallel-computing architectures of the shared-memory or distributed-memory class, including scalar computers, vector computers, reduced-instruction-set computers, and complex-instruction-set computers. It is designed as a flexible, relatively inexpensive system that provides a single programming and operating environment within which one can investigate the effects of various parallel-computing architectures and combinations on performance in the solution of complicated problems like those of three-dimensional flows in turbomachines. The Hypercluster software and architectural concepts are in the public domain.

  9. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. Such an architecture is developed here for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
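
    At each control step, the inverse differential kinematics amounts to solving J(q) qdot = xdot for the joint rates. A small numpy sketch of that computation is shown below for a planar two-link arm, a hypothetical stand-in for the PUMA geometry; the paper's contribution is mapping this solve onto systolic arrays, which the sketch does not attempt.

      import numpy as np

      def jacobian_2link(q, l1=1.0, l2=1.0):
          """Planar 2-link arm Jacobian (illustrative stand-in for the PUMA case)."""
          s1, c1 = np.sin(q[0]), np.cos(q[0])
          s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
          return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                           [ l1 * c1 + l2 * c12,  l2 * c12]])

      q = np.array([0.3, 0.8])        # joint angles (away from singularity)
      xdot = np.array([0.1, -0.2])    # desired end-effector velocity
      qdot = np.linalg.solve(jacobian_2link(q), xdot)   # inverse differential kinematics
      print(qdot)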

  10. Targeted Nanotechnology for Cancer Imaging

    PubMed Central

    Toy, Randall; Bauer, Lisa; Hoimes, Christopher; Ghaghada, Ketan B.; Karathanasis, Efstathios

    2014-01-01

    Targeted nanoparticle imaging agents provide many benefits and new opportunities to facilitate accurate diagnosis of cancer and significantly impact patient outcome. Due to the highly engineerable nature of nanotechnology, targeted nanoparticles exhibit significant advantages including increased contrast sensitivity, binding avidity and targeting specificity. Considering the various nanoparticle designs and their adjustable ability to target a specific site and generate detectable signals, nanoparticles can be optimally designed in terms of biophysical interactions (i.e., intravascular and interstitial transport) and biochemical interactions (i.e., targeting avidity towards cancer-related biomarkers) for site-specific detection of very distinct microenvironments. This review seeks to illustrate that the design of a nanoparticle dictates its in vivo journey and targeting of hard-to-reach cancer sites, facilitating early and accurate diagnosis and interrogation of the most aggressive forms of cancer. We will report various targeted nanoparticles for cancer imaging using X-ray computed tomography, ultrasound, magnetic resonance imaging, nuclear imaging and optical imaging. Finally, to realize the full potential of targeted nanotechnology for cancer imaging, we will describe the challenges and opportunities for the clinical translation and widespread adoption of targeted nanoparticle imaging agents. PMID:25116445

  11. Panel: If I Only Knew Then What I Know Now

    Science.gov Websites


  12. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  13. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss its application to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly produce parallel programs and to achieve good performance that exceeds that of some commercial tools.

  14. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Parallelization is therefore needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes square, symmetric matrices. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on GPUs (graphics processing units).

  15. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to developments in computer technology. However, the recent gains stem from the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol, and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
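
    As an illustration of the distributed-memory style, here is a minimal mpi4py sketch in which each rank simulates its share of particle histories and a reduction pools the tally. The toy 'history' (a pi estimate) and the script name are illustrative assumptions; PHITS itself is a separate code whose MPI scheme is more elaborate.

      # Run with e.g.: mpiexec -n 4 python mc_sketch.py  (filename hypothetical)
      from mpi4py import MPI
      import random

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_total = 1_000_000
      n_local = n_total // size                  # each rank's share of histories
      rng = random.Random(rank)                  # independent stream per rank

      local_hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                       for _ in range(n_local))  # toy "history": dart in quarter circle

      hits = comm.reduce(local_hits, op=MPI.SUM, root=0)   # pool the tallies
      if rank == 0:
          print("pi ~", 4.0 * hits / (n_local * size))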

  16. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
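
    The intrinsic parallelism of bitwise operators mentioned above can be made concrete: if each bit position of a machine word holds one fitness case, a single bitwise expression evaluates all cases at once. A hypothetical sketch follows, with an XOR target and a hand-written candidate expression standing in for an evolved one.

      # Each bit position is one fitness case, so one bitwise expression
      # evaluates 64 cases in parallel on a 64-bit word.
      import random

      N = 64
      cases = [(random.getrandbits(1), random.getrandbits(1)) for _ in range(N)]
      target = [a ^ b for a, b in cases]         # ground truth: XOR classifier

      # Pack each input column and the target into integers, one case per bit.
      A = sum(a << i for i, (a, _) in enumerate(cases))
      B = sum(b << i for i, (_, b) in enumerate(cases))
      T = sum(t << i for i, t in enumerate(target))

      candidate = (A | B) & ~(A & B)             # "evolved" expression: (a or b) and not (a and b)
      errors = bin((candidate ^ T) & ((1 << N) - 1)).count("1")
      print("wrong cases:", errors)              # 0 -- the expression matches XOR on all 64 cases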

  17. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model computational domain covers Europe and some neighbour parts belong to the Atlantic Ocean, Asia and Africa. If DEM model is to be applied by using fine grids, then its discretization leads to a huge computational problem. This implies that such a model as DEM must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, some comparison results of running of this model on different kind of vector (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) will be presented. The main idea in the parallel version of DEM is domain partitioning approach. Discussions according to the effective use of the cache and hierarchical memories of the modern computers as well as the performance, speed-ups and efficiency achieved will be done. The parallel code of DEM, created by using MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kind of vector and parallel computers. Some important applications of the computer model output are presented in short.

  18. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.

  19. Development of Sensors for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro

    2005-01-01

    Advances in technology have led to the availability of smaller and more accurate sensors. Computer power to process large amounts of data is no longer the prevailing issue; thus multiple and redundant sensors can be used to obtain more accurate and comprehensive measurements in a space vehicle. The successful integration and commercialization of micro- and nanotechnology for aerospace applications require that a close and interactive relationship be developed between the technology provider and the end user early in the project. Close coordination between the developers and the end users is critical, since qualification for flight is time-consuming and expensive. The successful integration of micro- and nanotechnology into space vehicles requires a coordinated effort throughout the design, development, installation, and integration processes.

  20. Free energy calculation of permeant-membrane interactions using molecular dynamics simulations.

    PubMed

    Elvati, Paolo; Violi, Angela

    2012-01-01

    Nanotoxicology, the science concerned with the safe use of nanotechnology and nanostructure design for biological applications, is a field of research that has recently received great attention, as a result of the rapid growth in nanotechnology. Many nanostructures are of a scale and chemical composition similar to many biomolecular environments, and recent papers have reported evident toxicity of selected nanoparticles. Molecular simulations can help develop a mechanistic understanding of how structural properties affect bioactivity. In this chapter, we describe how to compute the free energy of interactions between cellular membranes and benzene, the main constituent of some toxic carbonaceous particles, with well-tempered metadynamics. This algorithm reconstructs the free energy surface and accelerates rare events in a coarse-grained representation of the system.
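
    For reference, the well-tempered metadynamics bias is grown by periodically depositing Gaussians whose height decays with the bias already accumulated, as defined in the metadynamics literature (here w is the initial Gaussian height, sigma the width, tau the deposition stride, and Delta T the bias temperature):

      V(s,\,t+\tau) \;=\; V(s,\,t) \;+\; w\, e^{-V(s(t),\,t)/(k_B\,\Delta T)}\,
      \exp\!\left(-\frac{(s - s(t))^2}{2\sigma^2}\right),
      \qquad
      F(s) \;=\; -\,\frac{T+\Delta T}{\Delta T}\, V(s,\,t\to\infty)

    The decaying deposition height is what makes the bias converge smoothly and allows the free energy F(s) to be recovered from the accumulated bias.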

  1. Structural DNA Nanotechnology: State of the Art and Future Perspective

    PubMed Central

    2015-01-01

    Over the past three decades DNA has emerged as an exceptional molecular building block for nanoconstruction due to its predictable conformation and programmable intra- and intermolecular Watson–Crick base-pairing interactions. A variety of convenient design rules and reliable assembly methods have been developed to engineer DNA nanostructures of increasing complexity. The ability to create designer DNA architectures with accurate spatial control has allowed researchers to explore novel applications in many directions, such as directed material assembly, structural biology, biocatalysis, DNA computing, nanorobotics, disease diagnosis, and drug delivery. This Perspective discusses the state of the art in the field of structural DNA nanotechnology and presents some of the challenges and opportunities that exist in DNA-based molecular design and programming. PMID:25029570

  2. Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh

    2001-01-01

    A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.

  3. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  4. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
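
    The first and third communication types above are standard patterns; a minimal mpi4py sketch of a 1-D decomposition is shown below. The periodic neighbours, toy depth field, and shallow-water CFL estimate are illustrative assumptions, not the TUNAMI-N2 implementation, and the inter-layer communication (type 2) of the nesting scheme is omitted.

      # Run with e.g.: mpiexec -n 4 python halo_sketch.py  (filename hypothetical)
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      left, right = (rank - 1) % size, (rank + 1) % size   # periodic for simplicity

      h = np.full(102, 1.0 + rank)    # local water depth: 100 cells + 2 ghost cells

      # (1) halo exchange with adjacent subdomains: interior boundary cells
      #     travel one way while the opposite ghost cells are filled.
      h[-1] = comm.sendrecv(h[1], dest=left, source=right)
      h[0] = comm.sendrecv(h[-2], dest=right, source=left)

      # (3) global reduction: every rank must march with the same stable time step.
      g, dx, courant = 9.81, 405.0, 0.5
      dt_local = courant * dx / np.sqrt(g * h[1:-1].max())
      dt = comm.allreduce(dt_local, op=MPI.MIN)
      if rank == 0:
          print("global dt =", dt)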

  5. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is advocated as a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed in a cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
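
    The expensive part that MapReduce absorbs is the fitness evaluation of the population, which is embarrassingly parallel. A minimal sketch of that map step follows, with multiprocessing standing in for Hadoop; the quadratic fitness function and TARGET parameters are hypothetical placeholders for a network simulation scored against gene profiles.

      import multiprocessing as mp
      import random

      TARGET = [0.5, -1.2, 3.0]                  # hypothetical network parameters

      def fitness(candidate):
          # Stand-in for an expensive network simulation scored against profiles.
          return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

      def random_candidate(rng):
          return [rng.uniform(-5, 5) for _ in TARGET]

      if __name__ == "__main__":
          rng = random.Random(0)
          population = [random_candidate(rng) for _ in range(1000)]
          with mp.Pool() as pool:
              scores = pool.map(fitness, population)   # parallel "map" over candidates
          best_score, best = max(zip(scores, population))
          print(best_score, best)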

  6. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is advocated as a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed in a cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks. PMID:24428926

  7. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  8. Parallel Algorithms for Least Squares and Related Computations.

    DTIC Science & Technology

    1991-03-22

    Excerpts: parallel algorithms for dense computations in linear algebra, recently published in a general SIAM reference book on parallel algorithms; a Ph.D. dissertation written with the principal investigator (see publication 6); and parallel algorithms for dense linear algebra computations, describing and putting into perspective a selection of the more important parallel algorithms for numerical linear algebra.

  9. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and freedom from deadlock in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures, including data flow computers.

  10. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the best-known global environmental problems. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize over the space and time dimensions, calculating the natural features of a distributed hydrological model in order, grid by grid (unit by unit, basin by basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of distributed hydrological models with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, meaning that it can make full use of the available computing and storage resources under conditions of limited computing resources, and its computing efficiency improves linearly as computing resources increase. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.

  11. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  12. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  13. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  14. 76 FR 2428 - Request for Public Comment on the Draft National Nanotechnology Initiative Strategy for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... Nanotechnology Initiative Strategy for Nanotechnology-Related Environmental, Health, and Safety Research AGENCY... the public regarding the draft National Nanotechnology Initiative (NNI) Strategy for Nanotechnology... considered proprietary, personal, sensitive, or confidential. Overview: The National Nanotechnology...

  15. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing and sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.

  16. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  17. Methods of parallel computation applied on granular simulations

    NASA Astrophysics Data System (ADS)

    Martins, Gustavo H. B.; Atman, Allbens P. F.

    2017-06-01

    Every year, parallel computing becomes cheaper and more accessible, and as a consequence its applications have spread over all research areas. Granular materials are a promising area for parallel computing. To support this statement we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rising of an intruder confined in a granular medium when it is vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study the code's performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet cells.
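
    The contact-finding optimization mentioned above relies on binning particles into cells no smaller than the interaction cutoff, so that candidate contacts are confined to neighbouring cells. A minimal 2-D sketch, with hypothetical particle data, follows.

      # Cell-list sketch: bin particles into grid cells of size >= the cutoff,
      # then test contacts only against the 3x3 block of neighbouring cells.
      import random
      from collections import defaultdict

      random.seed(1)
      cutoff = 0.1
      pts = [(random.random(), random.random()) for _ in range(500)]

      cells = defaultdict(list)
      for i, (x, y) in enumerate(pts):
          cells[(int(x / cutoff), int(y / cutoff))].append(i)

      contacts = set()
      for (cx, cy), members in cells.items():
          for dx in (-1, 0, 1):
              for dy in (-1, 0, 1):
                  for j in cells.get((cx + dx, cy + dy), ()):
                      for i in members:
                          if i < j:   # each pair counted once
                              xi, yi = pts[i]; xj, yj = pts[j]
                              if (xi - xj) ** 2 + (yi - yj) ** 2 < cutoff ** 2:
                                  contacts.add((i, j))
      print(len(contacts), "contacts found")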

  18. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.

  19. Parallel Algorithms for the Exascale Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
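
    On the reproducibility of global sums: floating-point addition is not associative, so a parallel reduction can return different answers for different processor counts or orderings. One standard building block (shown here as a generic illustration, not the LANL students' method) is compensated summation, which carries the rounding error forward so the result is far less sensitive to ordering:

      /* Kahan compensated summation; compile without -ffast-math,
       * which would optimize the compensation away.
       * Compile: gcc kahan.c -o kahan */
      #include <stdio.h>
      #include <stdlib.h>

      double kahan_sum(const double *v, int n)
      {
          double sum = 0.0, c = 0.0;  /* c holds the lost low-order bits */
          for (int i = 0; i < n; i++) {
              double y = v[i] - c;
              double t = sum + y;
              c = (t - sum) - y;      /* rounding error of this step     */
              sum = t;
          }
          return sum;
      }

      int main(void)
      {
          int n = 10000000;
          double *v = malloc(n * sizeof(double));
          double naive = 0.0;
          for (int i = 0; i < n; i++) { v[i] = 0.1; naive += v[i]; }
          printf("naive: %.10f\n", naive);  /* drifts away from 1e6 */
          printf("kahan: %.10f\n", kahan_sum(v, n));
          free(v);
          return 0;
      }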

  20. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  1. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements while also providing the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would benefit significantly from parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use, giving it significant advantages over less powerful non-parallel offerings in the market.

  2. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition

    NASA Astrophysics Data System (ADS)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-01

    Resistive switching memory (RRAM) is considered one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today's electronic systems. However, existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition that implements k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behavior serves as both the storage and the computing component. The proposed architecture is tested on the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and the proposed architecture turns out to show great tolerance to device variations. This work paves the way toward RRAM-based parallel computing hardware systems with high performance.
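
    The crossbar performs the k-NN distance computation as analog column-current sums; a toy software analogue of that scoring-and-voting step is sketched below (pattern sizes, labels, and the dot-product score are illustrative assumptions, not the paper's design):

      /* Toy software analogue of crossbar k-NN scoring: each stored
       * pattern is scored by a dot product with the input (the analog
       * column-current sum in the RRAM array), and the K best vote. */
      #include <stdio.h>

      #define NPIX 64                    /* pixels per pattern (assumed) */
      #define NPAT 4                     /* stored patterns              */
      #define K    3                     /* neighbors that vote          */

      static const int labels[NPAT] = {0, 0, 1, 1};

      int main(void)
      {
          int pat[NPAT][NPIX], input[NPIX];
          for (int p = 0; p < NPAT; p++)     /* two toy classes          */
              for (int i = 0; i < NPIX; i++)
                  pat[p][i] = (p < 2) ? (i % 2) : 1 - (i % 2);
          for (int i = 0; i < NPIX; i++) input[i] = i % 2;

          int score[NPAT];
          for (int p = 0; p < NPAT; p++) {   /* one "column current" each */
              score[p] = 0;
              for (int i = 0; i < NPIX; i++)
                  score[p] += input[i] * pat[p][i];
          }
          int votes[2] = {0, 0};
          for (int k = 0; k < K; k++) {      /* pick the K highest scores */
              int best = 0;
              for (int p = 1; p < NPAT; p++)
                  if (score[p] > score[best]) best = p;
              votes[labels[best]]++;
              score[best] = -1;              /* exclude from next round   */
          }
          printf("predicted class: %d\n", votes[1] > votes[0] ? 1 : 0);
          return 0;
      }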

  3. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches, fewer time steps, each computed in less time, enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. Combining the SISM with two specialized parallel computers proves an effective way to increase the speed of MD simulations, up to 16-fold over a single PC processor.

  4. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area, but an important obstacle to its wide application is its high computational demand. Among other approaches, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations caused by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
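
    For reference, the Amdahl's law bound used for such speed-up evaluations has the standard form

      S(n) = 1 / ((1 - p) + p / n),

    where p is the fraction of the work that can run in parallel and n is the number of workers. For example (illustrative numbers, not the paper's measurements), with p = 0.9 and n = 8, S = 1 / (0.1 + 0.9/8) ≈ 4.7, so even a fine-scale stage that is 90% parallel cannot exceed a tenfold speed-up no matter how many workers are added.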

  5. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  6. Smart Dust--Friend or Foe?

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2012-01-01

    Nanotechnology is now making it possible to create radically new tiny machines and sensors on par with the size of dust motes. This technology is rapidly progressing and will make profound impacts on the nation's global competitiveness. It promises to be a most pervasive technological advance, comparable to what computers did for an individual's…

  7. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines with sequential image processing operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  8. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  9. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  10. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS Parallel Benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.
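
    The kind of transformation such a tool automates can be illustrated by hand (this is a generic example, not actual CAPTools output): a loop whose iterations are proven independent receives a work-sharing directive and otherwise keeps its serial form.

      /* Directive-based parallelization of an independent loop.
       * Compile: gcc -fopenmp saxpy.c -o saxpy */
      #include <stdio.h>

      #define N 1000000

      int main(void)
      {
          static double a[N], b[N];
          for (int i = 0; i < N; i++) b[i] = (double)i;

          /* the dependence analysis must establish that iterations do
           * not conflict; the directive then shares them among threads */
          #pragma omp parallel for
          for (int i = 0; i < N; i++)
              a[i] = 2.0 * b[i] + 1.0;

          printf("a[N-1] = %f\n", a[N - 1]);
          return 0;
      }

    A compiler without OpenMP support simply ignores the directive and builds a correct serial program, which is what makes the directive-based approach portable.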

  11. Current situation and industrialization of Taiwan nanotechnology

    NASA Astrophysics Data System (ADS)

    Su, Hsin-Ning; Lee, Pei-Chun; Tsai, Min-Hua; Chien, Kuo-Ming

    2007-12-01

    Nanotechnology is projected to be a very promising field, and the impact of nanotechnology on society is increasingly significant as the research funding and manufactured goods increase exponentially. A clearer picture of Taiwan's current and future nanotechnology industry is an essential component for future planning. Therefore, this investigation studies the progress of industrializing nanotechnology in Taiwan by surveying 150 companies. Along with understanding Taiwan's current nanotechnology industrialization, this paper also suggests ways to promote Taiwan's nanotechnology. The survey results are summarized and serve as the basis for planning a nanotechnology industrialization strategy.

  12. The science of computing - Parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A recently introduced computer features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.

  13. Computational Nanotechnology Program

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1997-01-01

    The objectives are: (1) development of methodological and computational tool for the quantum chemistry study of carbon nanostructures and (2) development of the fundamental understanding of the bonding, reactivity, and electronic structure of carbon nanostructures. Our calculations have continued to play a central role in understanding the outcome of the carbon nanotube macroscopic production experiment. The calculations on buckyonions offer the resolution of a long controversy between experiment and theory. Our new tight binding method offers increased speed for realistic simulations of large carbon nanostructures.

  14. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  15. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    The second-generation Hypercluster computing system is a multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. It is built from standard, off-the-shelf hardware that is readily upgraded as improved technology becomes available. The system is used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. The first-generation Hypercluster system is described in "Hypercluster Parallel Processor" (LEW-15283).

  16. System-wide power management control via clock distribution network

    DOEpatents

    Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

    2015-05-19

    An apparatus, method and computer program product for automatically controlling the power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The processors in the parallel computing system receive the system clock signal including the encoded command and adjust their power dissipation according to the encoded command.

  17. Parallel Computing: Some Activities in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing, from the proposed SIMD front-end detectors through the farming applications and high-powered RISC processors to the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described, from the simplest form to the more complex system at Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  18. Implementation of DFT application on ternary optical computer

    NASA Astrophysics Data System (ADS)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Because of its huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT that require a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
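
    The fully parallel decomposition the authors exploit comes from the fact that every DFT output bin is independent of the others. A minimal CPU sketch of that structure (OpenMP here, standing in for the optical processor; the size and test signal are assumptions) is:

      /* Naive DFT with each output bin computed independently.
       * Compile: gcc -fopenmp dft.c -o dft -lm */
      #include <math.h>
      #include <stdio.h>

      #define N 1024

      int main(void)
      {
          static double in[N], re[N], im[N];
          for (int i = 0; i < N; i++)
              in[i] = sin(2.0 * M_PI * 5.0 * i / N);  /* 5-cycle tone */

          /* every frequency bin k is independent: fully parallel work */
          #pragma omp parallel for
          for (int k = 0; k < N; k++) {
              re[k] = im[k] = 0.0;
              for (int n = 0; n < N; n++) {
                  double w = 2.0 * M_PI * k * n / N;
                  re[k] += in[n] * cos(w);
                  im[k] -= in[n] * sin(w);
              }
          }
          printf("|X[5]| = %.1f\n", hypot(re[5], im[5])); /* ~ N/2 */
          return 0;
      }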

  19. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator (RRCS): a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each being a SIMD parallel processor capable of exploiting fine-grain parallelism and also able to operate asynchronously, forming a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  20. Exact parallel algorithms for some members of the traveling salesman problem family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pekny, J.F.

    1989-01-01

    The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family, since finding a feasible solution is itself an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.

  1. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper describes the rules used to generate the algorithms needed for parallel computing and discusses the origins of the idea of using graphics processors in the large-scale processing of laser scanning data. The next part of the paper presents the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  2. Integrating nanotechnology into school education: a review of the literature

    NASA Astrophysics Data System (ADS)

    Ghattas, Nadira I.; Carver, Jeffrey S.

    2012-11-01

    Background: In this era of rapid technical advancement, there are growing debates around the idea of nanotechnology, which are both timely and controversial. Nanotechnology materials are being utilized in our daily lives in many ways, often without consumer knowledge. Due to the explosion of nanotechnology applications, there is a necessity to update school science curricula by integrating nanotechnology-related concepts that are both relevant and meaningful to students. The integration of nanotechnology in school science curricula comes in response to nanoscientific development and our mission as educators to instill and arouse students' curiosity in learning about both what is and what will be more dominantly occupying the marketplace. Purpose: The purpose of this review was to set a baseline for the current work being conducted in moving nanotechnology-based activities into the school science setting. Design and methods: The review was implemented by searching LexisNexis Academic, EBSCOhost, Academic Search Complete, Education Search Complete as well as Google Scholar using search terms of nanotechnology, nanotechnology in schools, nanotechnology activities, history of nanotechnology, implications of nanotechnology, issues of nanotechnology and related combinations with nanotechnology as a consistent keyword. Returned articles were categorized by thematic content with primary and seminal work being given priority for inclusion. Conclusions: Current literature in the area of nanotechnology integration into school science curricula presented seven key categories of discussion: the origins of nanotechnology, challenges for educational implementation, currently available school activities, current consumer product applications, ethical issues, recommendations for educational policy, and implications of nanotechnology. There is limited availability of school-based activities. There are strong proponents for including nanotechnology in school science curricula. However, barriers to that inclusion are both real and perceived and are consistent with barriers reported for including other new science topics in the curricula, such as time, curricular and cognitive overload, and inclusion on assessment.

  3. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core (TM) 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies.
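
    The load-balancing idea, letting faster devices absorb more work instead of fixing the split in advance, can be sketched generically with OpenMP's dynamic schedule (an illustration of the concept only; the paper's load-prediction scheduler and its CPU/GPU split are more elaborate):

      /* Dynamic scheduling: idle workers grab the next task, so faster
       * devices naturally take on more of the load.
       * Compile: gcc -fopenmp sched.c -o sched */
      #include <omp.h>
      #include <stdio.h>

      #define NTASK 64

      int main(void)
      {
          double done[NTASK];
          #pragma omp parallel for schedule(dynamic, 1)
          for (int t = 0; t < NTASK; t++) {
              double s = 0.0;               /* uneven stand-in workload */
              for (int i = 0; i < 100000 * (t % 7 + 1); i++)
                  s += 1.0 / (i + 1.0);
              done[t] = s;
          }
          printf("task 0 result: %f (max threads: %d)\n",
                 done[0], omp_get_max_threads());
          return 0;
      }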

  4. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.

  5. HeNCE: A Heterogeneous Network Computing Environment

    DOE PAGES

    Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  6. EDITORIAL: Moore and more progress in electronics and photonics Moore and more progress in electronics and photonics

    NASA Astrophysics Data System (ADS)

    Meyyappan, Meyya

    2009-10-01

    This year marks the 20th volume of Nanotechnology, the first journal dedicated to the emerging field of nanotechnology, pre-dating the US National Nanotechnology Initiative (NNI) by ten years. Throughout the evolution and revolution of nanomaterials and devices, Nanotechnology has been at the forefront. The journal's first article on nanoelectronics reported research on electronic transport through three-dimensionally confined semiconductor quantum dots by Professor Mark Reed, now Editor-in-Chief, and his colleagues at the time at Texas Instruments in Dallas (Reed M A, Randall J N and Luscombe J H 1990 Nanotechnology 1 63-6). In the first decade of the journal, papers on nanoelectronics were scarce and primarily reported research on resonant tunnelling devices, transport in quantum dots and other III-V devices. With the ability to produce single-walled carbon nanotubes (SWCNTs) and semiconducting nanowires on patterned substrates using CVD and similar techniques, nanoscale electronics and photonics flourished. A pioneering contribution by Collins et al (Collins P G, Bando H and Zettl A 1998 Nanotechnology 9 153-7) discussed conductivity measurements on SWCNTs using scanning tunnelling microscopy. In the same issue, Fritzsche et al (Fritzsche W, Böhm K, Unger E and Köhler J M 1998 Nanotechnology 9 177-83) discussed making electrical contacts to a single molecule, another early contribution in molecular electronics. There have been numerous interesting and trend-setting articles. My personal favourite is an article from Hewlett-Packard researchers Greg Snider, Phil Kuekes and Stan Williams (2004 Nanotechnology 15 881-91) discussing an approach to building a defect-tolerant computer out of defective configurable FETs and switches. The construction of defect-free materials, devices and components may well begin to pose an obstacle to nanotechnology, so this pioneering article exhibits extraordinary foresight in attempting to construct a useful machine from defective parts. The field of optoelectronics and photonics has been benefiting from the ability to synthesize semiconducting nanowires and quantum dots. Advances in light-emitting diodes, photodetectors, nanolasers, solar cells, and field emission devices have been abundantly reported in the journal. The future of these devices depends on our ability to control the size, orientation and properties of one- and zero-dimensional materials. The forecast for electronics and photonics has vastly underestimated developments, with predictions such as 'future computers will weigh no more than 1.5 tons'. Over the past twenty years, the number of transistors on a chip has risen from just 1 million to 2 billion, and is still increasing. Now the biggest question is: what will take over from Moore's law in about a decade? This question has been driving the research agenda in electronics across the industrial and academic world. The first answer appears to be integrating other functional components with logic and memory such as miniature camera modules, GPS, accelerometers, biometric identification, health monitoring systems, etc. Such integration is actively being pursued by industry. In contrast, a lot of new research is still driven by material innovations, for example, carbon nanotube based electronics. Rudimentary devices and circuits using SWCNTs have been demonstrated to outperform silicon devices of comparable size.
However, controlling the chirality and diameter of SWCNTs is still a problem, as is the manufacture of 300-400 mm wafers with over 5-10 billion transistors, and all of this assumes that continuing on the path of CMOS but using a different material is the right approach in the first place. In the meantime, silicon and germanium in the form of nanowires may make their way into electronics. Then there is molecular electronics where conducting organic molecules could now become the heart of electronic components, although the precision and controllability of electrical contact with molecules remain challenging. The journal Nanotechnology has grown with the field, from a modest four issues per year for several years to what is now a weekly publication with a dedicated section to electronics and photonics. We look forward to more and more of your highest-quality papers.

  7. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  8. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  9. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  10. Six Years of Parallel Computing at NAS (1987 - 1993): What Have we Learned?

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In the fall of 1987 the age of parallelism at NAS began with the installation of a 32K-processor CM-2 from Thinking Machines. In 1987 this was described as an "experiment" in parallel processing. In the six years since, NAS has acquired a series of parallel machines and conducted an active research and development effort focused on the use of highly parallel machines for applications in the computational aerosciences. In this time period parallel processing for scientific applications evolved from a fringe research topic into one of the main activities at NAS. In this presentation I will review the history of parallel computing at NAS in the context of the major progress that has been made in the field in general. I will attempt to summarize the lessons we have learned so far and the contributions NAS has made to the state of the art. Based on these insights I will comment on the current state of parallel computing (including the HPCC effort) and try to predict some trends for the next six years.

  11. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  12. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
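
    A sketch of the first stage of one such cycle, the p independent function-and-gradient evaluations, is given below for an assumed quadratic test function (the rank-one metric corrections and the line search are omitted):

      /* First stage of one cycle: p independent evaluations of the
       * function and its gradient (quadratic test function assumed).
       * Compile: gcc -fopenmp pvm_stage.c -o pvm_stage */
      #include <stdio.h>

      #define DIM 4
      #define P   4                          /* degree of parallelism */

      static double f(const double *x)       /* f(x) = sum of x_i^2   */
      {
          double s = 0.0;
          for (int i = 0; i < DIM; i++) s += x[i] * x[i];
          return s;
      }

      static void grad(const double *x, double *g)
      {
          for (int i = 0; i < DIM; i++) g[i] = 2.0 * x[i];
      }

      int main(void)
      {
          double pts[P][DIM], fv[P], gv[P][DIM];
          for (int k = 0; k < P; k++)        /* p trial points        */
              for (int i = 0; i < DIM; i++) pts[k][i] = k + i + 1.0;

          #pragma omp parallel for           /* evaluate in parallel  */
          for (int k = 0; k < P; k++) {
              fv[k] = f(pts[k]);
              grad(pts[k], gv[k]);
          }
          /* ...p rank-one metric corrections and a single univariate
           * minimization along the Newton-like direction follow...   */
          for (int k = 0; k < P; k++) printf("f(x_%d) = %g\n", k, fv[k]);
          return 0;
      }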

  13. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  14. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
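
    The template comparison step can be illustrated with a toy sketch: the checkpoint is cut into blocks, block checksums are compared against the stored template, and only the differing blocks would be transmitted. The block size and hash here are illustrative assumptions, not the patented protocol:

      /* Toy template comparison: only blocks whose checksum differs
       * from the template would be transmitted and stored.
       * Compile: gcc ckpt.c -o ckpt */
      #include <stdio.h>
      #include <string.h>

      #define BLOCK  8
      #define NBYTES 64

      static unsigned long checksum(const unsigned char *b, int n)
      {
          unsigned long h = 5381;          /* djb2-style hash (assumed) */
          for (int i = 0; i < n; i++) h = h * 33 + b[i];
          return h;
      }

      int main(void)
      {
          unsigned char templ[NBYTES], node[NBYTES];
          memset(templ, 0xAB, NBYTES);     /* stored template           */
          memcpy(node, templ, NBYTES);     /* this node's checkpoint    */
          node[19] = 0xCD;                 /* one block now differs     */

          for (int b = 0; b < NBYTES / BLOCK; b++)
              if (checksum(templ + b * BLOCK, BLOCK) !=
                  checksum(node  + b * BLOCK, BLOCK))
                  printf("block %d changed: would be saved\n", b);
          return 0;
      }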

  15. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research on procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  16. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  17. Editorial: Trends in Nanotechnology (TNT2005)

    NASA Astrophysics Data System (ADS)

    Correia, Antonio; Serena, Pedro A.; José Saenz, Juan; Reifenberger, Ron; Ordejón, Pablo

    2006-05-01

    This special issue of physica status solidi (a) presents representative contributions describing the main topics covered at the sixth Trends in Nanotechnology (TNT2005) International Conference, held in Oviedo (Spain), 29 August-2 September 2005. In recent years, many international and national conferences have emerged in response to the growing awareness of the importance of nanotechnology as a key issue for future scientific and technological development. Among these, the conference series Trends in Nanotechnology has become one of the most important meeting points in the nanotechnology field: it provides fresh organisation ideas, brings together well-known speakers, and promotes a suitable environment for discussions, exchanging ideas, and enhancing scientific and personal relations among participants. TNT2005 was organised in a similar way to the five prior TNT conferences, with an impressive scientific programme including 40 keynote lectures, two of them by Nobel laureates, without parallel sessions, covering a wide spectrum of nanotechnology research. In 2005, more than 360 scientists worldwide attended this event and contributed more than 60 oral contributions and 250 posters, stimulating discussions about their most recent research. The aim of the conference was to focus on the applications of nanotechnology and to bring together, in a scientific forum, various worldwide groups belonging to industry, universities and government institutions. TNT2005 was particularly effective at transmitting information and establishing contacts among workers in this field. Graduate students attending such conferences have understood the importance of interdisciplinary skills for their future research lines. 76 graduate students received a grant allowing them to present their work. 28 prizes for the best posters were awarded during this event. We would like to thank all the participants for their assistance, as well as the authors for their written contributions. TNT2005 is the successful consequence of a coordinated effort among several organising institutions: PHANTOMS Foundation, Universidad Autónoma de Madrid, Consejo Superior de Investigaciones Científicas, Universidad Carlos III de Madrid, Universidad Complutense de Madrid, Universidad de Oviedo, Donostia International Physics Center, Nanomaterials Laboratory-NIMS, CEA/LETI and CEA/DSM/DFRMC, University of Purdue and Georgia Institute of Technology.
In addition, we are indebted to the following institutions, companies and government agencies for their help and financial support: NASA, Air Force Office of Scientific Research, iNANO, NSERC/CRSNG (Nano Innovation Platform), Sociedad de Microscopía Española (SME), Wiley-VCH, Raith GmbH, The European Office of Aerospace Research and Development (EOARD), The Office of Naval Research International Field Office (ONRIFO), World Scientific and Imperial College Press, Ministerio de Educación y Ciencia, Parque Científico de Barcelona, Parque Científico de Madrid, Tyndall National Institute, Nanoquanta, GDR-E Nano-E, Minatec, Dupont, Physica Status Solidi, Zeiss, Ayuntamiento de Oviedo, Gobierno del Principado de Asturias, Asturiana de Zinc, cajAstur, Aleastur, Aceralia-Grupo Arcelor, Saint-Gobain Cristaleria, Mediadores Asociados Asturianos and Inderscience Publishers. We would also like to thank the following companies for their participation: NanoTec, Raith GmbH, Scientec, NT-MDT, Schaefer Techniques, Suss Microtec, Carl Zeiss, Biometa, Wiley-VCH, World Scientific and Imperial College Press and Atomic Force. We invite readers of this special issue to join us in Grenoble (France), where the next edition, Trends in Nanotechnology 2006, will take place (http://www.tnt2006.org).

  18. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.

  19. Fast hydrological model calibration based on the heterogeneous parallel computing accelerated shuffled complex evolution method

    NASA Astrophysics Data System (ADS)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke

    2018-01-01

    Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has proven to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. The comparison results indicated that the heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
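
    A minimal sketch of the shared-memory half of such an approach, assuming the expensive step is the independent evaluation of candidate parameter sets: the OpenMP loop below evaluates a population concurrently, with a toy objective() standing in for the Xinanjiang rainfall-runoff model. The CUDA portion of the paper's implementation is not shown, and all names and sizes are illustrative.

    /* Hedged sketch: one generation of independent model evaluations inside
     * an SCE-UA-style calibration, issued in parallel with OpenMP. */
    #include <stdio.h>
    #include <math.h>
    #include <omp.h>

    #define POP 64   /* parameter sets in the evolving population */
    #define DIM 8    /* calibrated parameters per set */

    static double objective(const double *theta) {
        double err = 0.0;               /* placeholder for a rainfall-runoff */
        for (int j = 0; j < DIM; j++)   /* simulation compared to gauge data */
            err += (theta[j] - 0.5) * (theta[j] - 0.5);
        return err;
    }

    int main(void) {
        double pop[POP][DIM], fit[POP];
        for (int i = 0; i < POP; i++)
            for (int j = 0; j < DIM; j++)
                pop[i][j] = (double)((i * 31 + j * 17) % 100) / 100.0;

        /* The costly evaluations are independent across parameter sets. */
        #pragma omp parallel for schedule(dynamic)
        for (int i = 0; i < POP; i++)
            fit[i] = objective(pop[i]);

        double best = INFINITY;
        for (int i = 0; i < POP; i++) if (fit[i] < best) best = fit[i];
        printf("best objective this generation: %f\n", best);
        return 0;
    }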

  20. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor-based dynamic simulation solutions. High-performance computing (HPC)-based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation: using Open Multi-Processing (OpenMP) on a shared-memory platform, and the Message Passing Interface (MPI) on distributed-memory clusters. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
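
    As a hedged illustration of the contrast the paper draws, the toy C sketch below integrates a block-decomposed set of state equations with MPI, with a global reduction standing in for the network coupling; on a shared-memory machine the same loop could instead be annotated with a single OpenMP "parallel for". All sizes, the relaxation dynamics, and the assumption that NGEN divides evenly among ranks are illustrative, not taken from the paper.

    /* Toy distributed-memory integration: each MPI rank owns a block of
     * states; a reduction mimics the system-wide coupling each step.
     * Build/run: mpicc sketch.c && mpirun -n 4 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    #define NGEN 1024    /* total generator states (toy; divisible by ranks) */
    #define STEPS 100
    #define DT 0.01

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int n = NGEN / size;                 /* block owned by this rank */
        double x[NGEN] = {0}, global_sum;
        for (int i = rank * n; i < (rank + 1) * n; i++) x[i] = 1.0 + rank;

        for (int s = 0; s < STEPS; s++) {
            double local = 0.0;
            for (int i = rank * n; i < (rank + 1) * n; i++) local += x[i];
            /* coupling: every state relaxes toward the system average */
            MPI_Allreduce(&local, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);
            double avg = global_sum / NGEN;
            for (int i = rank * n; i < (rank + 1) * n; i++)
                x[i] += DT * (avg - x[i]);       /* toy relaxation ODE */
        }
        if (rank == 0) printf("x[0] after %d steps: %f\n", STEPS, x[0]);
        MPI_Finalize();
        return 0;
    }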

  1. A hybrid parallel architecture for electrostatic interactions in the simulation of dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping

    2017-11-01

    In this work, we upgraded the electrostatic interaction method CU-ENUF (Yang et al., 2016), which first applied CUNFFT (nonequispaced fast Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation so that the computation of electrostatic interactions is performed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel way: the computation is first parallelized across multiple computer nodes and then further on the GPU installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem, how to parallelize a CUNFFT within this strategy, is solved through careful analysis of the underlying principles and algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only achieves good precision with suitable parameters, but also provides an efficient way to compute electrostatic interactions for very large simulation systems. Program Files doi:http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for efficient electrostatic-interaction computation in large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI-parallel computer nodes with the same configuration. Nature of problem: In molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range character and slow convergence in simulation space, and it takes up most of the total simulation time. Although the GPU-based parallel method CU-ENUF (Yang et al., 2016) achieved a qualitative leap over previous methods for computing electrostatic interactions, its capacity is limited to the throughput of a single GPU for super-scale simulation systems. An effective method is therefore needed to handle the calculation of electrostatic interactions efficiently for simulation systems of super-scale size. Solution method: We constructed a hybrid parallel architecture in which CPUs and GPUs are combined to accelerate the electrostatic computation effectively. First, the simulation system is divided into many subtasks via a domain-decomposition method. MPI (Message Passing Interface) is then used to implement the CPU-parallel computation, with each computer node handling a particular subtask, and each subtask is in turn executed efficiently in parallel on that node's GPU. The most critical technical problem in this hybrid parallel method, parallelizing CUNFFT (nonequispaced fast Fourier transform based on CUDA), is solved through careful analysis of the underlying principles and algorithmic techniques. Restrictions: HP-ENUF is mainly oriented to super-scale system simulations, where its performance advantage is fully realized. 
However, for a small simulation system containing fewer than 10⁶ particles, the multi-node mode has no apparent efficiency advantage over the single-node mode, and may even be less efficient, owing to network delay among the computer nodes. References: (1) S.-C. Yang, H.-J. Qian, Z.-Y. Lu, Appl. Comput. Harmon. Anal. 2016, http://dx.doi.org/10.1016/j.acha.2016.04.009. (2) S.-C. Yang, Y.-L. Wang, G.-S. Jiao, H.-J. Qian, Z.-Y. Lu, J. Comput. Chem. 37 (2016) 378. (3) S.-C. Yang, Y.-L. Zhu, H.-J. Qian, Z.-Y. Lu, Appl. Chem. Res. Chin. Univ., 2017, http://dx.doi.org/10.1007/s40242-016-6354-5. (4) Y.-L. Zhu, H. Liu, Z.-W. Li, H.-J. Qian, G. Milano, Z.-Y. Lu, J. Comput. Chem. 34 (2013) 2197.
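
    A much-simplified, hypothetical sketch of the hybrid strategy described above: particles are domain-decomposed across MPI ranks, each rank computes its subtask, and a reduction assembles the total. In HP-ENUF the per-rank work is itself dispatched to that node's GPU through CUNFFT; here a plain CPU loop and a toy pair term stand in for that stage, and all sizes are invented.

    /* Skeleton of node-level decomposition with MPI; the per-rank loop is a
     * stand-in for the GPU (CUNFFT) stage of the actual method. */
    #include <stdio.h>
    #include <mpi.h>

    #define NP 4096    /* total particles (toy; divisible by ranks) */

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = NP / size;       /* this rank's domain-decomposed slab */
        int lo = rank * local;

        double e_local = 0.0;
        /* Stand-in for the per-node (GPU) electrostatic stage. */
        for (int i = lo; i < lo + local; i++) {
            double q = (i % 2) ? 1.0 : -1.0;    /* alternating toy charges */
            double r = 1.0 + 0.001 * i;
            e_local += q / r;
        }

        double e_total;
        MPI_Reduce(&e_local, &e_total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0) printf("toy electrostatic energy: %f\n", e_total);
        MPI_Finalize();
        return 0;
    }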

  2. Analytical Nanoscience and Nanotechnology: Where we are and where we are heading.

    PubMed

    Laura Soriano, María; Zougagh, Mohammed; Valcárcel, Miguel; Ríos, Ángel

    2018-01-15

    The main aim of this paper is to offer an objective and critical overview of the situation and trends in Analytical Nanoscience and Nanotechnology (AN&N), which marks an important break point in the evolution of Analytical Chemistry in the XXI century, as computers and instruments were in the second half of the XX century. The first part of this overview provides a general approach to AN&N by describing the state of the art of this recent topic and emphasizing its importance. Secondly, particular but very relevant trends in this topic are outlined: the analysis of the nanoworld, the so-called "third way" in AN&N, the growing importance of bioanalysis, the evaluation of both nanosensors and nanosorbents, the impact of AN&N on bioimaging and nanotoxicological studies, as well as the crucial importance of the reliability of nanotechnological processes and results for solving real analytical problems within the frame of the Social Responsibility (SR) of science and technology. Several reflections, written as a bird's-eye view, which is not an easy task even for experts in AN&N, are included at the end of this overview. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
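
    A minimal sketch of the block-size rule quoted in the abstract (total data divided by the number of parallel processes), assuming MPI-IO as the underlying file-system interface: each rank writes one equally sized block at offset rank x block. The data-exchange step of the claimed method is omitted, and the file name is hypothetical.

    /* Each process computes block = total / nprocs and writes its aligned
     * block to the shared object. Illustration only, not the patent's PLFS. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long total = 1 << 20;       /* total bytes across all ranks */
        long block = total / size;        /* dynamically determined size  */

        char *buf = malloc(block);
        memset(buf, 'A' + rank % 26, block);  /* this rank's share of data */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "shared_object.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        /* one contiguous, equally sized block per process */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * block, buf, (int)block,
                              MPI_BYTE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        if (rank == 0)
            printf("wrote %d blocks of %ld bytes\n", size, block);
        free(buf);
        MPI_Finalize();
        return 0;
    }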

  4. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been fully achieved. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry that better describes the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) and Reduced Parallel Diagonal Dominant (RPDD) algorithms have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data-dependency analysis. These achievements are addressed in detail in the paper.

  5. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  6. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  7. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    NASA Astrophysics Data System (ADS)

    Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.

    1995-03-01

    PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI implementation of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.

  8. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, S.; Zacharia, T.; Baltas, N.

    1995-04-01

    PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI implementation of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.

  9. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
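
    A hedged sketch of the "fill in the body of pre-defined model routines" pattern the abstract describes, with the framework owning the parallel loop and the modeler supplying only the per-agent update. The names Agent, update_agent and run_simulation are invented for illustration and are not Biocellion's actual API.

    /* Framework-owned parallel driver plus a user-supplied model routine. */
    #include <stdio.h>
    #include <omp.h>

    #define NAGENTS 100000

    typedef struct { double x, y, vol; } Agent;

    /* --- user-supplied model routine (the body the modeler fills in) --- */
    static void update_agent(Agent *a, double dt) {
        a->vol += 0.1 * dt;               /* toy growth rule */
        if (a->vol > 2.0) a->vol = 1.0;   /* toy division reset */
    }

    /* --- framework side: parallel loop the user never rewrites --- */
    static void run_simulation(Agent *pop, int n, int steps, double dt,
                               void (*model)(Agent *, double)) {
        for (int s = 0; s < steps; s++) {
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < n; i++)
                model(&pop[i], dt);
        }
    }

    int main(void) {
        static Agent pop[NAGENTS];
        for (int i = 0; i < NAGENTS; i++) pop[i].vol = 1.0;
        run_simulation(pop, NAGENTS, 100, 0.05, update_agent);
        printf("agent 0 volume: %f\n", pop[0].vol);
        return 0;
    }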

  10. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  11. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  12. Performance analysis of three dimensional integral equation computations on a massively parallel computer. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Logan, Terry G.

    1994-01-01

    The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and the conventional Cray Y-MP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 64, and 128 nodes, along with those on the Cray Y-MP with a single processor. The comparison indicates that the parallel CM-FORTRAN code approaches or outperforms the equivalent serial FORTRAN code in some cases.

  13. Parallel aeroelastic computations for wing and wing-body configurations

    NASA Technical Reports Server (NTRS)

    Byun, Chansup

    1994-01-01

    The objective of this research is to develop computationally efficient methods for solving fluid-structure interaction problems by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures on parallel computers. This capability will significantly impact many aerospace projects of national importance, such as the Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical in the transonic region. This research effort will have a direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.

  14. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume, explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique, although we have not yet been able to compile the code with the present version of HPF compilers.

  15. NASA Workshop on Computational Structural Mechanics 1987, part 1

    NASA Technical Reports Server (NTRS)

    Sykes, Nancy P. (Editor)

    1989-01-01

    Topics in Computational Structural Mechanics (CSM) are reviewed. CSM parallel structural methods, a transputer finite element solver, architectures for multiprocessor computers, and parallel eigenvalue extraction are among the topics discussed.

  16. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
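
    For reference, the brute-force enumeration below reproduces the benchmark instance solved by the device: the 2^3 subsets of {2, 5, 9} and their sums (0, 2, 5, 7, 9, 11, 14, 16), which the motor-propelled agents explore physically in parallel rather than sequentially as here.

    /* Enumerate every subset of {2, 5, 9} and print its sum. */
    #include <stdio.h>

    int main(void) {
        const int set[] = {2, 5, 9};
        const int n = 3;
        for (int mask = 0; mask < (1 << n); mask++) {
            int sum = 0;
            printf("{");
            for (int i = 0; i < n; i++)
                if (mask & (1 << i)) { sum += set[i]; printf(" %d", set[i]); }
            printf(" } -> %d\n", sum);
        }
        return 0;
    }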

  17. Computational Nanotechnology of Molecular Materials, Electronics and Machines

    NASA Technical Reports Server (NTRS)

    Srivastava, D.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This viewgraph presentation covers carbon nanotubes, their characteristics, and their potential future applications. The presentation includes predictions on the development of nanostructures and their applications, the thermal characteristics of carbon nanotubes, mechano-chemical effects upon carbon nanotubes, molecular electronics, and models for possible future nanostructure devices. The presentation also proposes a neural model for signal processing.

  18. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, with the metric being FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects were created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  19. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely largely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis, employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
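
    A hedged sketch of the idea on which such shared-memory parallelizations rest: the assignment step of k-means is independent across points, so it can be distributed over cores. The original implementation is in Java with a transactional-memory-inspired design; the C/OpenMP loop and synthetic 1-D data below are illustrative only.

    /* Parallel k-means assignment step: nearest-centroid search per point. */
    #include <stdio.h>
    #include <math.h>
    #include <omp.h>

    #define N 100000
    #define K 4

    int main(void) {
        static double pt[N];
        static int label[N];
        double cen[K] = {0.1, 0.4, 0.6, 0.9};
        for (int i = 0; i < N; i++) pt[i] = (double)(i % 1000) / 1000.0;

        /* Each point's search is independent: safe to run across threads. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++) {
            int best = 0;
            double bd = fabs(pt[i] - cen[0]);
            for (int k = 1; k < K; k++) {
                double d = fabs(pt[i] - cen[k]);
                if (d < bd) { bd = d; best = k; }
            }
            label[i] = best;
        }

        int count0 = 0;
        for (int i = 0; i < N; i++) if (label[i] == 0) count0++;
        printf("points in cluster 0: %d\n", count0);
        return 0;
    }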

  20. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.

  1. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1989-01-01

    A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.

  2. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling under various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the EPIC simulation is divided into jobs across the user-defined number of CPU threads. The raw database is formatted by the EPIC input data formatters, and the formatted data feeds the EPIC simulation jobs; 28 EPIC jobs then run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.

  3. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    PubMed

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches, such as space-domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a severalfold speedup. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating its complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  4. Redundant binary number representation for an inherently parallel arithmetic on optical computers.

    PubMed

    De Biase, G A; Massini, A

    1993-02-10

    A simple redundant binary number representation suitable for digital-optical computers is presented. By means of this representation it is possible to build an arithmetic with carry-free parallel algebraic sums carried out in constant time and parallel multiplication in log N time. This redundant number representation naturally fits the 2's complement binary number system and permits the construction of inherently parallel arithmetic units that are used in various optical technologies. Some properties of this number representation and several examples of computation are presented.
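
    To make the carry-free claim concrete, the sketch below adds two radix-2 signed-digit numbers (digit set {-1, 0, 1}) using the classic two-step transfer rule: every sum digit depends only on positions i and i-1, so all digits could be produced simultaneously in constant time. This illustrates the general signed-digit technique, not the paper's specific optical encoding.

    /* Carry-free signed-digit addition: no carry propagates beyond one
     * position, so the per-digit loop body is independent given i and i-1. */
    #include <stdio.h>

    #define W 8   /* digit positions, least significant first */

    static void sd_add(const int *a, const int *b, int *s) {
        int t[W + 1] = {0};   /* transfer digits */
        int u[W];             /* interim sum digits */
        for (int i = 0; i < W; i++) {
            int w = a[i] + b[i];
            int wprev = (i > 0) ? a[i - 1] + b[i - 1] : 0;
            if      (w ==  2) { t[i + 1] =  1; u[i] =  0; }
            else if (w ==  1) { if (wprev >= 0) { t[i+1] = 1;  u[i] = -1; }
                                else            { t[i+1] = 0;  u[i] =  1; } }
            else if (w ==  0) { t[i + 1] =  0; u[i] =  0; }
            else if (w == -1) { if (wprev >= 0) { t[i+1] = 0;  u[i] = -1; }
                                else            { t[i+1] = -1; u[i] =  1; } }
            else              { t[i + 1] = -1; u[i] =  0; }   /* w == -2 */
        }
        for (int i = 0; i < W; i++) s[i] = u[i] + t[i];  /* stays in {-1,0,1} */
        s[W] = t[W];
    }

    static int value(const int *d, int n) {      /* signed-digit -> integer */
        int v = 0;
        for (int i = n - 1; i >= 0; i--) v = 2 * v + d[i];
        return v;
    }

    int main(void) {
        int a[W] = {1, -1, 0, 1, 0, 0, 0, 0};    /* 1 - 2 + 8 = 7  */
        int b[W] = {1,  0, 1, 1, 0, 0, 0, 0};    /* 1 + 4 + 8 = 13 */
        int s[W + 1];
        sd_add(a, b, s);
        printf("%d + %d = %d\n", value(a, W), value(b, W), value(s, W + 1));
        return 0;
    }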

  5. Backtracking and Re-execution in the Automatic Debugging of Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Matthews, Gregory; Hood, Robert; Johnson, Stephen; Leggett, Peter; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this work we describe a new approach that uses relative debugging to find differences in computation between a serial program and a parallel version of that program. We use a combination of re-execution and backtracking in order to find the first difference in computation that may ultimately lead to an incorrect value that the user has indicated. In our prototype implementation we use static analysis information from a parallelization tool in order to perform the backtracking as well as the mapping required between serial and parallel computations.

  6. Silver Nanoparticles: An Influential Element in Plant Nanobiotechnology.

    PubMed

    Sarmast, Mostafa K; Salehi, H

    2016-07-01

    Profound interest and progress have marked nanotechnology since its conception in 1959. However, its application in plant tissue culture and biotechnology has not been fully acknowledged in parallel with other facets of this technology. In this manuscript, the effects of AgNPs on plant tissue culture and biotechnology, encompassing their antimicrobial effects and mechanisms of action, are addressed to some extent, and their effects on seedling growth are also reviewed. Most of the published papers in the field of plant science have focused on the antimicrobial effects of silver nanoparticles, but their interesting inhibitory effect on the plant senescence phytohormone ethylene may well open a new window for future research.

  7. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques

    DOT National Transportation Integrated Search

    1995-01-01

    Large-scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  8. A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers

    NASA Technical Reports Server (NTRS)

    Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)

    1997-01-01

    The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory access. By choosing an appropriate combination of the available partitioning and coalescing capabilities only at the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms, with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this ongoing study progresses.

  9. Parallel computing on Unix workstation arrays

    NASA Astrophysics Data System (ADS)

    Reale, F.; Bocchino, F.; Sciortino, S.

    1994-12-01

    We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular, we have solved numerically a demanding test problem with a 2D hydrodynamic code, originally developed to study astrophysical flows, by executing it on arrays either of DECstations 5000/200 on an Ethernet LAN, or of DECstations 3000/400, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1) equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that their convenience depends essentially on the size of the computational domain relative to the processor power and network bandwidth. We point out that for future prospects a parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency in the message-passing protocols. In conditions of significant speedup gain, such workstation arrays represent a cost-effective approach to massive parallel computations.

  10. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) overcomes the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, whereas in fact all modern computers are now supported by hierarchical memory systems and hyperthreading technology, allowing for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm: there are two parallel regions, the outer loop and the inner loop, that contribute significantly to the reduction of computational time. This parallel algorithm is then implemented with a standard shared-memory application programming interface, i.e., Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
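
    A hedged sketch of the winning "outer-loop" strategy, assuming the dominant cost is a scan over candidate regularization parameters: the outer loop below is distributed across threads while each thread's inner computation stays sequential, with thread-local minima merged at the end. The criterion function and grid are toy stand-ins for the C-GMM objective.

    /* Outer-loop OpenMP parallelization over regularization candidates. */
    #include <stdio.h>
    #include <math.h>
    #include <omp.h>

    #define NGRID 256    /* candidate regularization parameters */

    static double criterion(double lambda) {
        double acc = 0.0;                       /* toy inner loop standing  */
        for (int j = 1; j <= 100000; j++)       /* in for moment-condition  */
            acc += exp(-lambda * j / 100000.0); /* computations             */
        return fabs(acc / 100000.0 - 0.5);
    }

    int main(void) {
        double best_val = INFINITY, best_lambda = 0.0;

        #pragma omp parallel
        {
            double lv = INFINITY, ll = 0.0;     /* thread-local best */
            #pragma omp for schedule(dynamic)
            for (int g = 0; g < NGRID; g++) {
                double lambda = 0.01 + g * 0.05;
                double v = criterion(lambda);
                if (v < lv) { lv = v; ll = lambda; }
            }
            #pragma omp critical
            if (lv < best_val) { best_val = lv; best_lambda = ll; }
        }
        printf("best lambda = %.3f (criterion %.6f)\n", best_lambda, best_val);
        return 0;
    }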

  11. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.

  12. Parallel Computational Fluid Dynamics: Current Status and Future Requirements

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)

    1994-01-01

    One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. We then discuss the long-term computational requirements for accomplishing some of the grand-challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand-challenge requirements, and we outline the computer science and algorithm research challenges ahead.

  13. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  14. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  15. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data-movement kernels of the application were analyzed and optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed and profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors; the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphics Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion for high-performance computing systems.

  16. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using the compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction (SFD) algorithm. However, a parallel GPU implementation of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculation has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different GPU parallelization strategies are explored. The first strategy, which has been used in the existing parallel SFD algorithm on the GPU, suffers from redundant computation. We therefore designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculating flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.

  17. Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems

    NASA Astrophysics Data System (ADS)

    Endo, Eishin; Toga, Yuta; Sasaki, Munetaka

    2015-07-01

    We present a method of parallelizing the stochastic cutoff (SCO) method, a Monte Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide the lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte Carlo calculation in the SCO method. Such a subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased by about 10² times with parallel computation on 288 processors.
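
    An illustrative C/OpenMP sketch of the idea, assuming a toy ring lattice in place of the dipolar system: color the interaction graph so that no two interacting sites share a color, then update each color class concurrently. A trivial two-coloring replaces the Kuhn-Wattenhofer distributed coloring used in the paper, and a deterministic flip stands in for the Monte Carlo move.

    /* Sites of one color are mutually noninteracting, so each color class
     * can be swept in parallel without conflicting updates. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000

    int main(void) {
        int color[N];
        double spin[N];
        for (int i = 0; i < N; i++) spin[i] = (i % 2) ? 1.0 : -1.0;

        /* Greedy coloring of a ring graph (site i interacts with i-1, i+1):
         * alternating colors suffice for even N. */
        for (int i = 0; i < N; i++) color[i] = i % 2;

        for (int c = 0; c < 2; c++) {
            /* one color class updated concurrently */
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < N; i++) {
                if (color[i] != c) continue;
                double nb = spin[(i + 1) % N] + spin[(i + N - 1) % N];
                spin[i] = (nb >= 0.0) ? 1.0 : -1.0;   /* toy update rule */
            }
        }
        printf("spin[0] = %.0f\n", spin[0]);
        return 0;
    }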

  18. Impact of nanotechnology on drug delivery.

    PubMed

    Farokhzad, Omid C; Langer, Robert

    2009-01-27

    Nanotechnology is the engineering and manufacturing of materials at the atomic and molecular scale. In its strictest definition from the National Nanotechnology Initiative, nanotechnology refers to structures roughly in the 1-100 nm size regime in at least one dimension. Despite this size restriction, nanotechnology commonly refers to structures that are up to several hundred nanometers in size and that are developed by top-down or bottom-up engineering of individual components. Herein, we focus on the application of nanotechnology to drug delivery and highlight several areas of opportunity where current and emerging nanotechnologies could enable entirely novel classes of therapeutics.

  19. Comparing nanoparticle risk perceptions to other known EHS risks

    NASA Astrophysics Data System (ADS)

    Berube, David M.; Cummings, Christopher L.; Frith, Jordan H.; Binder, Andrew R.; Oldendick, Robert

    2011-08-01

    Over the last decade, social scientific researchers have examined how the public perceives risks associated with nanotechnology. The body of literature that has emerged has been methodologically diverse. The findings have confirmed that some publics perceive nanotechnology as riskier than others do, that experts consider nanotechnology less risky than the public does, and that, despite the risks, the public is optimistic about nanotechnology development. However, the extant literature on nanotechnology and risk suffers from sometimes widely divergent findings and has failed to provide a detailed picture of how the public actually feels about nanotechnology risks when compared to other risks. This study addresses these deficiencies by taking a comparative approach to gauging nanotechnology risks. The findings show that the public does not fear nanotechnology compared to other risks. Out of 24 risks presented to the participants, nanotechnology ranked 19th in terms of overall risk and 20th in terms of "high risk."

  20. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    NASA Astrophysics Data System (ADS)

    Jürgens, Björn; Herrero-Solana, Victor

    2017-04-01

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can yield far better results, since classes are assigned by experts who categorize the patent documents according to their technology. In this paper, we present the most important classifications for searching nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use to all professionals involved in nanotechnology patent searches.

  1. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set, embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and individually testing the potential problem nodes and the links to them.

  2. Design of on-board parallel computer on nano-satellite

    NASA Astrophysics Data System (ADS)

    You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li

    2007-11-01

    This paper presents a design scheme for an on-board parallel computer system for a nano-satellite. Based on the development requirements that a nano-satellite have small volume, low weight, low power consumption, and on-board intelligence, the scheme abandons the traditional single-computer and dual-computer systems in an effort to improve dependability, capability, and intelligence simultaneously. Following an integrated design approach, it employs a shared-memory parallel computer system as the main structure; connects the telemetry system, attitude control system, and payload system by an intelligent bus; provides management that handles static tasks and dynamic task scheduling, protects and recovers on-site status, and performs related functions in light of the parallel algorithms; and establishes mechanisms for fault diagnosis, recovery, and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability, and intelligence, flexible management of hardware resources, a sound software system, and good extensibility, which fully satisfies the concept and trend of integrated electronic design.

  3. Optical Symbolic Computing

    NASA Astrophysics Data System (ADS)

    Neff, John A.

    1989-12-01

    Experiments originating from Gestalt psychology have shown that representing information in symbolic form provides a more effective means of understanding. Computer scientists have been struggling for the last two decades to determine how best to create, manipulate, and store collections of symbolic structures. In the past, much of this struggle led to software innovations because that was the path of least resistance. For example, developing heuristics for organizing searches through knowledge bases was much less expensive than building massively parallel machines that could search in parallel. That is now beginning to change with the emergence of parallel architectures which are showing the potential for handling symbolic structures. This paper will review the relationships between symbolic computing and parallel computing architectures, and will identify opportunities for optics to significantly impact the performance of such computing machines. Although neural networks are an exciting subset of massively parallel computing structures, this paper will not touch on this area since it is receiving a great deal of attention in the literature. That is, the concepts presented herein do not consider the distributed representation of knowledge.

  4. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  5. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems. Even when it is used, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily upon the node-variable selection strategies. In a parallel BB, it is also necessary to prevent communication time from increasing, so it is important to pay attention to how many and what kind of nodes are to be transferred (called a sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
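
    A sending-node selection strategy decides which live subproblems a busy process ships to an idle one. One plausible strategy (illustrative only; the paper compares several) is to transfer a few of the most promising nodes, i.e., those with the best lower bounds, and keep the rest local:

    ```python
    import heapq

    def select_nodes_to_send(local_heap, k=2):
        """Pop the k best-bound B&B nodes for transfer to an idle process.

        Heap entries are (lower_bound, subproblem); in the real setting the
        selected nodes would be serialized and sent via MPI.
        """
        return [heapq.heappop(local_heap)
                for _ in range(min(k, len(local_heap)))]

    heap = [(5, "a"), (2, "b"), (9, "c"), (4, "d")]
    heapq.heapify(heap)
    print(select_nodes_to_send(heap))   # -> [(2, 'b'), (4, 'd')]
    ```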

  6. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.
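
    The "fill in the function body of pre-defined model routines" style is an inversion of control: the framework owns the parallel grid loops and neighbor exchange, while the modeler supplies only cell-level rules. A hedged Python caricature of the pattern (names are illustrative, not Biocellion's actual C++ API):

    ```python
    class CellModel:
        """Framework-defined interface; users override the model routines."""
        def update_state(self, cell, neighbors, dt):
            raise NotImplementedError

    class SortingModel(CellModel):
        def update_state(self, cell, neighbors, dt):
            # toy rule: drift toward the mean position of like-typed neighbors
            like = [n for n in neighbors if n["type"] == cell["type"]]
            if like:
                mean_x = sum(n["x"] for n in like) / len(like)
                cell["x"] += 0.1 * (mean_x - cell["x"]) * dt

    # The framework, not the user, would run this loop in parallel:
    cells = [{"type": 0, "x": 0.0}, {"type": 0, "x": 2.0}, {"type": 1, "x": 5.0}]
    model = SortingModel()
    for c in cells:
        model.update_state(c, [n for n in cells if n is not c], dt=1.0)
    ```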

  7. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players; their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen, including CPU and message routing bottlenecks and the dynamic status of the software.

  8. Efficient parallel resolution of the simplified transport equations in mixed-dual formulation

    NASA Astrophysics Data System (ADS)

    Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.

    2011-03-01

    A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult for our sequential solver, based on the simplified transport equations, to treat in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundreds of nodes for computations based on a pin-by-pin discretization.
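
    For readers unfamiliar with the outer iteration being parallelized: reactivity solvers typically run a power iteration on the generalized problem A·φ = (1/k)·F·φ, where A is the loss operator, F the fission source, and k the sought dominant eigenvalue. A dense toy sketch follows; the per-iteration solve is what the domain-decomposed transport solver performs in parallel, and names and tolerances are illustrative.

    ```python
    import numpy as np

    def power_iteration(A, F, tol=1e-10, max_it=500):
        """Dominant eigenvalue k of A x = (1/k) F x via power iteration."""
        phi = np.ones(A.shape[0])
        k = 1.0
        for _ in range(max_it):
            phi_new = np.linalg.solve(A, F @ phi / k)   # one "flux solve" per step
            k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
            phi_new /= np.linalg.norm(phi_new)
            if abs(k_new - k) < tol:
                break
            k, phi = k_new, phi_new
        return k_new, phi_new

    A = np.array([[2.0, -0.5], [-0.5, 2.0]])   # toy loss operator
    F = np.array([[0.6, 0.4], [0.4, 0.6]])     # toy fission operator
    print(power_iteration(A, F)[0])            # -> about 0.6667
    ```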

  9. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
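
    In this scheme each logical ring contains exactly one core from every compute node, so after the per-ring phase every node holds one partial result per local core, and a purely local reduction finishes the job. A small serial simulation of the arithmetic (illustrative names; the patent's rings would run real ring-allreduce message passing):

    ```python
    def two_phase_allreduce(data):
        """data[i][c] = contribution of core c on node i (numbers, for simplicity)."""
        n_nodes, n_cores = len(data), len(data[0])
        # Phase 1: logical ring r = {core r of every node}; allreduce within it.
        ring_sum = [sum(data[i][r] for i in range(n_nodes))
                    for r in range(n_cores)]
        # Phase 2: each node locally reduces the ring results held by its cores.
        return sum(ring_sum)                  # identical on every node

    print(two_phase_allreduce([[1, 2], [3, 4], [5, 6]]))   # -> 21
    ```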

  10. Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Detrixhe, Miles; Gibou, Frédéric

    2016-10-01

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
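
    As background for the parallel variants discussed above, the serial method solves, for example, the eikonal equation |∇u| = 1 by Gauss-Seidel passes in four alternating sweep orderings, each pass applying the same local upwind update. A compact sketch under those assumptions (unit speed, square grid):

    ```python
    import numpy as np

    def fast_sweep(u, h, n_iter=4):
        """Fast sweeping for |grad u| = 1; u holds large values, 0 at sources."""
        ny, nx = u.shape
        orders = [(range(ny), range(nx)),
                  (range(ny), range(nx - 1, -1, -1)),
                  (range(ny - 1, -1, -1), range(nx)),
                  (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
        for _ in range(n_iter):
            for ys, xs in orders:                  # the four sweep directions
                for i in ys:
                    for j in xs:
                        a = min(u[max(i - 1, 0), j], u[min(i + 1, ny - 1), j])
                        b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, nx - 1)])
                        if abs(a - b) >= h:        # one-sided update
                            new = min(a, b) + h
                        else:                      # two-sided quadratic update
                            new = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                        u[i, j] = min(u[i, j], new)
        return u

    u = np.full((64, 64), 1e9)
    u[32, 32] = 0.0                                # point source
    d = fast_sweep(u, h=1.0)                       # approximate distance field
    ```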

  11. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  12. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.

    2014-01-01

    Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, from graphics processing units to multicore CPUs with a fast interconnect, along with effective parallel solvers and associated solver libraries for inductive EM modeling and imaging.

  13. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  14. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  15. Multi-threading: A new dimension to massively parallel scientific computation

    NASA Astrophysics Data System (ADS)

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2000-06-01

    Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.

  16. Surface Modification Engineered Assembly of Novel Quantum Dot Architectures for Advanced Applications

    DTIC Science & Technology

    2008-02-09

    Campbell, S. Ogata, and F. Shimojo, “ Multimillion atom simulations of nanosystems on parallel computers,” in Proceedings of the International...nanomesas: multimillion -atom molecular dynamics simulations on parallel computers,” J. Appl. Phys. 94, 6762 (2003). 21. P. Vashishta, R. K. Kalia...and A. Nakano, “ Multimillion atom molecular dynamics simulations of nanoparticles on parallel computers,” Journal of Nanoparticle Research 5, 119-135

  17. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.

    2011-07-01

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
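
    The key property exploited here is that each debris object propagates independently, so the work is embarrassingly data-parallel. A hedged sketch of the idea (a purely Keplerian mean-anomaly advance, vectorized over all objects; a CUDA or OpenCL version would map the same per-object arithmetic to one GPU thread per object):

    ```python
    import numpy as np

    MU = 398600.4418          # Earth's gravitational parameter, km^3/s^2

    def propagate_mean_anomaly(a, M0, t):
        """Advance the mean anomaly of every object at once (a in km, t in s)."""
        n = np.sqrt(MU / a**3)              # mean motion, one value per object
        return np.mod(M0 + n * t, 2.0 * np.pi)

    # 100,000 objects, one time step, no explicit loop over objects:
    a  = np.random.uniform(6800.0, 42000.0, 100_000)    # semi-major axes
    M0 = np.random.uniform(0.0, 2.0 * np.pi, 100_000)   # initial mean anomalies
    M  = propagate_mean_anomaly(a, M0, 3600.0)
    ```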

  19. Commercialization of Nanotechnology

    DTIC Science & Technology

    2007-03-01

    NATO LECTURES M. Meyyappan Commercialization of Nanotechnology Abstract Nanotechnology is an enabling technology and as such, will have an...years), medium term (10 years) and long term (> 15 years) prospects. In addition, the challenges currently being faced to commercialize nanotechnology ...multinational corporations, government funding etc. will be presented. It is important to recognize that nanotechnology is not any one

  20. Quantum information, cognition, and music.

    PubMed

    Dalla Chiara, Maria L; Giuntini, Roberto; Leporini, Roberto; Negri, Eleonora; Sergioli, Giuseppe

    2015-01-01

    Parallelism represents an essential aspect of human mind/brain activities. One can recognize some common features between psychological parallelism and the characteristic parallel structures that arise in quantum theory and in quantum computation. The article is devoted to a discussion of the following questions: (1) a comparison between classical probabilistic Turing machines and quantum Turing machines; (2) possible applications of the quantum computational semantics to cognitive problems; (3) parallelism in music.

  2. Improved hydrogeophysical characterization and monitoring through parallel modeling and inversion of time-domain resistivity and induced-polarization data

    USGS Publications Warehouse

    Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André

    2010-01-01

    Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.

  3. Cancer Nanotechnology Plan

    Cancer.gov

    The Cancer Nanotechnology Plan serves as a strategic document to the NCI Alliance for Nanotechnology in Cancer as well as a guiding document to the cancer nanotechnology and oncology fields, as a whole.

  4. Perceived risks and perceived benefits of different nanotechnology foods and nanotechnology food packaging.

    PubMed

    Siegrist, Michael; Stampfli, Nathalie; Kastenholz, Hans; Keller, Carmen

    2008-09-01

    Nanotechnology has the potential to generate new food products and new food packaging. In a mail survey in the German speaking part of Switzerland, lay people's (N=337) perceptions of 19 nanotechnology applications were examined. The goal was to identify food applications that are more likely and food applications that are less likely to be accepted by the public. The psychometric paradigm was employed, and applications were described in short scenarios. Results suggest that affect and perceived control are important factors influencing risk and benefit perception. Nanotechnology food packaging was assessed as less problematic than nanotechnology foods. Analyses of individual data showed that the importance of naturalness in food products and trust were significant factors influencing the perceived risk and the perceived benefit of nanotechnology foods and nanotechnology food packaging.

  5. Locating hardware faults in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.

  6. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  7. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method for predicting shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, for solving and optimizing large-scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, only one has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
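
    For context on the solver mentioned above: the serial baseline for a tridiagonal system is the Thomas algorithm, a forward elimination and back substitution whose sequential recurrence is exactly what massively parallel variants (e.g., cyclic reduction) restructure. A standard sketch:

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve T x = d where T has sub-, main-, and super-diagonals a, b, c.

        a[0] and c[-1] are unused. O(n) operations, but inherently sequential.
        """
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                       # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```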

  8. A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge

    DTIC Science & Technology

    2016-07-29

    Science Foundation (NSF), Department of Defense (DOD), National Institute of Standards and Technology (NIST), Intelligence Community (IC) Introduction...multiple Federal agencies: • Intelligent big data sensors that act autonomously and are programmable via the network for increased flexibility, and... intelligence for scientific discovery enabled by rapid extreme-scale data analysis, capable of understanding and making sense of results and thereby

  9. Nanoscience and Nanotechnology

    DTIC Science & Technology

    1992-05-05

    Stanford has fabricated gate lengths down to 65 nm, and is entering into consortia to fabricate modulation doped field effect transistors (MODFETs...and from the substrate exposes the resist over a greater area than the beam spot size. Correcting for these effects (where possible) is computationally...the lithographic pattern (proximity effects). The push to smaller dimensions is concentrated on controlling and understanding these phenomena rather

  10. Website on Protein Interaction and Protein Structure Related Work

    NASA Technical Reports Server (NTRS)

    Samanta, Manoj; Liang, Shoudan; Biegel, Bryan (Technical Monitor)

    2003-01-01

    In today's world, three seemingly diverse fields (computer information technology, nanotechnology, and biotechnology) are joining forces to enlarge our scientific knowledge and solve complex technological problems. Our group is dedicated to conducting theoretical research exploring the challenges in this area. The major areas of research include: 1) Yeast Protein Interactions; 2) Protein Structures; and 3) Current Transport through Small Molecules.

  11. 77 FR 27470 - Center for Scientific Review Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-10

    ..., Prevention and Intervention for Addictions Study Section. Date: June 7-8, 2012. Time: 8:00 a.m. to 5:00 p.m...: Bioengineering Sciences & Technologies Integrated Review Group; Nanotechnology Study Section. Date: June 7-8..., Computational Biology and Technology Study Section. Date: June 7-8, 2012. Time: 8:30 a.m. to 6:00 p.m. Agenda...

  12. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  13. A transient FETI methodology for large-scale parallel implicit computations in structural mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier

    1992-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.

  14. Nanotechnology Innovations

    NASA Technical Reports Server (NTRS)

    Malroy, Eric

    2010-01-01

    Nanotechnology is rapidly affecting all engineering disciplines as new products and applications are being found and brought to market. This session will present an overview of nanotechnology and let you learn about the advances in the field and how it could impact you. Some of the areas touched upon will be nanomaterials with their multifunctional capabilities, nanotechnology impact on energy systems, nanobiotechnology including nanomedicine, and nanotechnology relevant to space systems with a focus on ECLSS. Also, some important advances related to thermal systems will be presented as well as future predictions on nanotechnology.

  15. Development of a Model for the Representation of Nanotechnology-Specific Terminology

    PubMed Central

    Bailey, LeeAnn O.; Kennedy, Christopher H.; Fritts, Martin J.; Hartel, Francis W.

    2006-01-01

    Nanotechnology is an important, rapidly-evolving, multidisciplinary field [1]. The tremendous growth in this area necessitates the establishment of a common, open-source terminology to support the diverse biomedical applications of nanotechnology. Currently, the consensus process to define and categorize conceptual entities pertaining to nanotechnology is in a rudimentary stage. We have constructed a nanotechnology-specific conceptual hierarchy that can be utilized by end users to retrieve accurate, controlled terminology regarding emerging nanotechnology and corresponding clinical applications. PMID:17238469

  16. Restoration of neurological functions by neuroprosthetic technologies: future prospects and trends towards micro-, nano-, and biohybrid systems.

    PubMed

    Stieglitz, T

    2007-01-01

    Today's applications of neural prostheses that successfully help patients increase their activities of daily living and participate in social life again are quite simple implants that elicit a definite tissue response and are well recognized as foreign bodies. The latest developments in genetic engineering, nanotechnologies, and materials sciences have paved the way to new scenarios for highly complex systems that interface with the human nervous system. Combinations of neural cells with microimplants promise stable biohybrid interfaces. Nanotechnology opens the door to macromolecular landscapes on implants that mimic the topology and surface interactions of biological cells. Computer scientists dream of technical cognitive systems that act and react to a changing or adaptive environment by means of knowledge-based inference mechanisms. Different sciences are starting to interact and to discuss the synergies that arise when methods and paradigms from biology, computer science and engineering, neuroscience, and psychology are combined. They envision the era of "converging technologies" as completely changing the understanding of science, and they postulate a new vision of humans. In this chapter, these research lines are discussed with some examples, along with the societal implications and ethical questions that arise from these new opportunities.

  17. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
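
    To see why polynomial expansions suffice, consider McWeeny purification, a classic member of the family of methods the abstract refers to: iterating the polynomial P ← 3P² − 2P³ drives the eigenvalues of a suitably scaled matrix to 0 or 1, yielding a step function of the input (here, a zero-temperature density matrix) using only matrix-matrix multiplies, which is precisely the operation the distributed sparse algorithm parallelizes. A dense toy sketch, not NTPoly's API:

    ```python
    import numpy as np

    def purified_density(H, mu, n_iter=60):
        """Projector onto eigenstates of H below mu, via McWeeny purification."""
        evals = np.linalg.eigvalsh(H)               # only to bound the spectrum
        alpha = max(evals[-1] - mu, mu - evals[0])
        I = np.eye(H.shape[0])
        P = 0.5 * (I - (H - mu * I) / alpha)        # eigenvalues mapped into [0, 1]
        for _ in range(n_iter):
            P2 = P @ P
            P = 3.0 * P2 - 2.0 * P2 @ P             # eigenvalues flow to {0, 1}
        return P

    H = np.diag([-2.0, -1.0, 0.5, 3.0])
    print(np.round(purified_density(H, mu=0.0).diagonal()))   # -> [1. 1. 0. 0.]
    ```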

  18. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective schemes for communication among processors. Finally, the numerical performance of the developed parallel finite element procedures is evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  19. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  20. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

    This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a material by element computations; here we call an element a cell. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure, described in detail later in this paper. The problem is clearly "well-structured", so a SIMD machine is a natural choice. In this paper we look into the potential of SIMD machines for finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar is discussed, along with a brief review of the parallel programming language MPL. In section 3, some general parallel algorithms that might be useful to the project are proposed and, in connection with these algorithms, some features of MPL are discussed in more detail. In section 4, the computational structure of the cell/subcell model is given and the idea behind the design of the parallel algorithm for the model is demonstrated. Finally, section 5 gives a summary.

  1. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

    A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads [POSIX threads, where "POSIX" signifies a portable operating system interface for UNIX]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
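
    Ray traces over many rays are independent, which is what makes the loop-level OpenMP parallelization described above effective. A hedged Python analogue of the same data-parallel pattern (MACOS itself uses OpenMP in compiled code; this only illustrates the structure):

    ```python
    import math
    from concurrent.futures import ProcessPoolExecutor

    def trace_ray(ray_id):
        """Stand-in for a surface-by-surface geometric ray trace."""
        x = ray_id * 1.0e-3
        return ray_id, math.sin(x) ** 2        # dummy optical-path result

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:    # one worker per core
            results = dict(pool.map(trace_ray, range(10_000)))
        print(len(results), "rays traced")
    ```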

  2. PyPele Rewritten To Use MPI

    NASA Technical Reports Server (NTRS)

    Hockney, George; Lee, Seungwon

    2008-01-01

    A computer program known as PyPele, originally written as a Python-language extension module of a C++ program, has been rewritten in pure Python. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI) [an unofficial de facto standard language-independent application programming interface for message passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without needing to rewrite those programs.
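
    A minimal sketch of the scatter/compute/gather pattern such a rewrite would rest on, using the mpi4py binding (illustrative only; PyPele's actual dispatch code is not shown in the abstract):

    ```python
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Rank 0 prepares one independent task description per rank.
    tasks = [list(range(i, i + 4)) for i in range(size)] if rank == 0 else None

    my_task = comm.scatter(tasks, root=0)        # each rank gets its share
    my_result = sum(x * x for x in my_task)      # stand-in for a design analysis
    results = comm.gather(my_result, root=0)     # embarrassingly parallel join

    if rank == 0:
        print("all results:", results)
    # run with: mpiexec -n 4 python pypele_sketch.py
    ```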

  3. Massively parallel information processing systems for space applications

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1979-01-01

    NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.

  4. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
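
    For readers new to the method: Barnes-Hut evaluates the force on a body by descending a spatial tree and stopping at any cell that subtends a small angle (cell size over distance below an opening angle theta), replacing that cell's bodies with their center of mass. Partitioning this traversal and the tree itself across processes is what the new formulations address. A hedged 2-D traversal sketch (tree construction omitted; names are illustrative):

    ```python
    import numpy as np
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        com: np.ndarray            # center of mass of the cell
        mass: float
        size: float                # side length of the cell (0 for a body)
        children: list = field(default_factory=list)

    def accel_from(node, pos, theta=0.5, G=1.0, eps=1e-12):
        d = node.com - pos
        r = np.linalg.norm(d)
        if r < eps:                                # skip self-interaction
            return np.zeros(2)
        if not node.children or node.size / r < theta:
            return G * node.mass * d / r**3        # far enough: aggregate mass
        return sum((accel_from(c, pos, theta, G, eps) for c in node.children),
                   np.zeros(2))

    # Two unit masses grouped under one cell, evaluated from far away:
    root = Node(np.array([0.5, 0.0]), 2.0, 1.0,
                [Node(np.array([0.0, 0.0]), 1.0, 0.0),
                 Node(np.array([1.0, 0.0]), 1.0, 0.0)])
    print(accel_from(root, np.array([10.0, 0.0])))   # ~ [-0.0222, 0.]
    ```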

  5. NANOTECHNOLOGY, NANOMEDICINE; ETHICAL ASPECTS.

    PubMed

    Gökçay, Banu; Arda, Berna

    2015-01-01

    Nanotechnology is a field whose name we often hear nowadays. Although what we know about it is still limited, we admire this field of technology; some even argue that nanotechnology will bring about a second industrial revolution. In addition, nanotechnology turns our basic scientific knowledge upside down and is so powerful that it is potent in nearly every scientific field. It is therefore impossible to claim that nanotechnology, which so strongly affects humans and human life, will not have social and ethical consequences. In general, nanotechnology is defined as the reconfiguration of nanomaterials by humans; there are also different definitions reflecting the history of nanotechnology and different points of view. The focus of this study includes the following questions: what, in comparison with other fields of technology, accounts for the excellence of nanotechnology; how can humans foresee its advantages and disadvantages; what are the roles of developed and developing countries in the progression of nanotechnology; what is the stance of nanoethics; and what is the view of global politics on nanotechnological research under international regulations. Last but not least, our capacity to comprehend nanotechnology, the ways we adopt and evaluate it, and how we situate nanotechnology within our lives and ethical values are further focuses of interest.

  6. A design methodology for portable software on parallel computers

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.

  7. Convergence issues in domain decomposition parallel computation of hovering rotor

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong

    2018-05-01

    The implicit LU-SGS time-integration algorithm has been widely used in parallel computation despite its lack of information from adjacent domains. When applied to the parallel computation of hovering rotor flows in a rotating frame, it gives rise to convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, which shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate as the rotation proceeds, and eventually lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which is desirable in domain decomposition parallel computations.
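
    For orientation, LU-SGS belongs to the family of approximate factorizations of the implicit operator; in its standard form (which may differ in detail from the variant used by the authors), one time step solves

        (D + L)\, D^{-1}\, (D + U)\, \Delta Q = -R(Q^n),

    via a forward sweep $(D + L)\,\Delta Q^{*} = -R(Q^n)$ followed by a backward sweep $(D + U)\,\Delta Q = D\,\Delta Q^{*}$, where $L$, $D$, and $U$ are the strictly lower, diagonal, and strictly upper parts of the implicit operator. The convergence issue described above arises because a sweep over a partitioned grid cannot see the latest values in neighboring subdomains, so off-diagonal contributions from boundary cells are effectively lagged.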

  8. How to make your own response boxes: A step-by-step guide for the construction of reliable and inexpensive parallel-port response pads from computer mice.

    PubMed

    Voss, Andreas; Leonhart, Rainer; Stahl, Christoph

    2007-11-01

    Psychological research is based in large part on response latencies, which are often registered by keypresses on a standard computer keyboard. Recording response latencies with a standard keyboard is problematic because keypresses are buffered within the keyboard hardware before they are signaled to the computer, adding error variance to the recorded latencies. This can be circumvented by using external response pads connected to the computer's parallel port. In this article, we describe how to build inexpensive, reliable, and easy-to-use response pads with six keys from two standard computer mice that can be connected to the PC's parallel port. We also address the problem of recording data from the parallel port with different software packages under Microsoft's Windows XP.
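
    To illustrate why the parallel port avoids keyboard buffering: the port's five status input lines can be polled directly with single I/O instructions, so a switch wired to a status pin is seen with essentially no hardware latency. A minimal Linux/x86 sketch in C (the base address 0x378 and the wiring are assumptions; they vary by machine):

        /* Poll the parallel-port status register directly (Linux/x86).
         * Compile with: gcc -O2 poll.c; run as root for ioperm(). */
        #include <stdio.h>
        #include <sys/io.h>                 /* ioperm(), inb() */

        #define BASE 0x378                  /* assumed port base; status reg is BASE+1 */

        int main(void)
        {
            if (ioperm(BASE, 3, 1) != 0) {  /* request access to BASE..BASE+2 */
                perror("ioperm");
                return 1;
            }
            unsigned char idle = inb(BASE + 1);      /* lines with no key pressed */
            for (;;) {
                unsigned char s = inb(BASE + 1);     /* read the five status inputs */
                if (s != idle) {                     /* a wired switch changed state */
                    printf("status change: 0x%02x\n", s);
                    break;
                }
            }
            return 0;
        }

    Timestamping the change at this point (e.g., with clock_gettime) yields a response latency free of the keyboard's internal scan-and-buffer delay.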

  9. Managing the Life Cycle Risks of Nanomaterials

    DTIC Science & Technology

    2009-07-01

    [Indexing excerpt; acronyms and standards recovered from the report's front matter] ISO: International Organization for Standardization; ISN: Institute for Soldier Nanotechnologies; LCA: Life Cycle Assessment; LCCA: Life Cycle Cost Analysis. Cited nanotechnology standards: ISO/TS 27687:2008, Nanotechnologies -- Terminology and definitions for nano-objects -- Nanoparticle, nanofibre and nanoplate; ISO/CD TR 80004-1, Nanotechnologies -- Terminology and definitions -- Framework (under development); ISO/AWI TS 80004-2, Nanotechnologies.

  10. On some methods for improving time of reachability sets computation for the dynamic system control problem

    NASA Astrophysics Data System (ADS)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the time of computer calculation of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms that operate on the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.

  11. Perceptions and attitude effects on nanotechnology acceptance: an exploratory framework

    NASA Astrophysics Data System (ADS)

    Ganesh Pillai, Rajani; Bezbaruah, Achintya N.

    2017-02-01

    The existing literature on people's attitudes toward nanotechnology and the acceptance of nanotechnology applications has generally investigated the impact of factors at the individual or context level. While this vast body of research is very informative, a comprehensive understanding of how attitudes toward nanotechnology are formed, and of the factors influencing the acceptance of nanotechnology, remains elusive. This paper proposes an exploratory nanotechnology perception-attitude-acceptance framework (Nano-PAAF) to build a systematic understanding of the phenomenon. The framework proposes that perceptions of the risks and benefits of nanotechnology are influenced by cognitive, affective, and sociocultural factors. Consumers' sociodemographic characteristics and contextual factors moderate the influence of cognitive, affective, and sociocultural factors on the perception of risks and benefits. The perceived risks and benefits in turn influence people's attitude toward nanotechnology, which then influences the acceptance of nanotechnology products. This framework will need further development over time to incorporate emerging knowledge and is expected to be useful for researchers, decision and policy makers, industry, and business entities.

  12. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the least-cost branch and bound search method for deadline job scheduling. An object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.
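
    As a reminder of the algorithm's serial core: least-cost branch and bound repeatedly expands the live node with the best bound and prunes any node whose bound cannot beat the incumbent. The C sketch below uses a tiny 0/1 knapsack instance as a stand-in problem (the paper's actual application is deadline job scheduling); a parallel version would distribute the priority queue of live nodes across the hypercube's processors.

        /* Least-cost (best-first) branch and bound in C, illustrated on a
         * 0/1 knapsack instance as a stand-in for deadline job scheduling.
         * Items are assumed pre-sorted by value/weight ratio. */
        #include <stdio.h>

        #define N   4
        #define CAP 16.0

        static const double value[N]  = {40, 30, 50, 10};
        static const double weight[N] = { 2,  5, 10,  5};

        typedef struct { int level; double val, wt, bound; } Node;

        /* Fractional-relaxation upper bound on achievable value from a node. */
        static double bound(Node n)
        {
            if (n.wt > CAP) return 0.0;
            double b = n.val, w = n.wt;
            for (int i = n.level; i < N; i++) {
                if (w + weight[i] <= CAP) { w += weight[i]; b += value[i]; }
                else return b + (CAP - w) * value[i] / weight[i];
            }
            return b;
        }

        /* Unsorted array standing in for a priority queue: pop the live node
         * with the largest bound. A parallel version distributes this queue
         * across processors and must keep it balanced. */
        static Node q[1024];
        static int  qn = 0;
        static void push(Node n) { q[qn++] = n; }
        static Node pop(void)
        {
            int best = 0;
            for (int i = 1; i < qn; i++)
                if (q[i].bound > q[best].bound) best = i;
            Node n = q[best]; q[best] = q[--qn];
            return n;
        }

        int main(void)
        {
            double incumbent = 0.0;
            Node root = {0, 0.0, 0.0, 0.0};
            root.bound = bound(root);
            push(root);
            while (qn > 0) {
                Node n = pop();
                if (n.bound <= incumbent || n.level == N) continue;  /* prune */
                Node in = n, out = n;               /* branch on item n.level */
                in.val += value[n.level]; in.wt += weight[n.level];
                in.level = out.level = n.level + 1;
                if (in.wt <= CAP && in.val > incumbent) incumbent = in.val;
                in.bound = bound(in); out.bound = bound(out);
                if (in.bound  > incumbent) push(in);
                if (out.bound > incumbent) push(out);
            }
            printf("best value: %g\n", incumbent);  /* 90 for this instance */
            return 0;
        }

    The central design issue in the parallel case is exactly the one the paper studies: keeping the distributed queue load-balanced while still expanding globally good nodes.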

  13. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
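
    For reference, a computed-torque controller has the standard textbook structure (the gains and error definition below are the generic form, not necessarily the exact implementation in the paper):

        \tau = M(q)\,\big(\ddot{q}_d + K_v\,\dot{e} + K_p\,e\big) + C(q,\dot{q})\,\dot{q} + g(q), \qquad e = q_d - q,

    which uses the dynamic model to cancel the nonlinearities and leave linear, decoupled error dynamics. This is why the mass matrix $M(q)$, even though it does not appear explicitly in the virtual-work formulation, must still be recoverable separately (here, from kinetic energy considerations) for the decoupling to work.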

  14. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  15. Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1999-01-01

    Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.

  16. Shaping up nucleic acid computation.

    PubMed

    Chen, Xi; Ellington, Andrew D

    2010-08-01

    Nucleic acid-based nanotechnology has always been perceived as novel, but has begun to move from theoretical demonstrations to practical applications. In particular, the large address spaces available to nucleic acids can be exploited to encode algorithms and/or act as circuits and thereby process molecular information. In this review we not only revisit several milestones in the field of nucleic acid-based computation, but also highlight how the prospects for nucleic acid computation go beyond just a large address space. Functional nucleic acid elements (aptamers, ribozymes, and deoxyribozymes) can serve as inputs and outputs to the environment, and can act as logical elements. Into the future, the chemical dynamics of nucleic acids may prove as useful as hybridization for computation. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    PubMed

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore toxicity to various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multidisciplinary field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained attraction among scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding the interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, protocols for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.

  18. A compositional reservoir simulator on distributed memory parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rame, M.; Delshad, M.

    1995-12-31

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
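
    The "extended subdomain" described here is the standard ghost-cell (halo) pattern: each processor stores one extra layer of cells mirroring its neighbors' boundary values, refreshed before every stencil update. A minimal 1-D MPI sketch in C (the field name and slab size are illustrative, not UTCHEM code):

        /* Minimal 1-D halo (ghost cell) exchange with MPI. */
        #include <mpi.h>

        #define NLOC 100                    /* interior cells owned per rank */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* u[0] and u[NLOC+1] are ghost cells mirroring the neighbors. */
            double u[NLOC + 2];
            for (int i = 1; i <= NLOC; i++) u[i] = (double)rank;

            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

            /* Ship boundary cells out, receive neighbors' into the ghosts. */
            MPI_Sendrecv(&u[NLOC],   1, MPI_DOUBLE, right, 0,
                         &u[0],      1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[1],      1, MPI_DOUBLE, left,  1,
                         &u[NLOC+1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Stencil updates on cells 1..NLOC may now read u[0], u[NLOC+1]. */
            MPI_Finalize();
            return 0;
        }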

  19. Hybrid massively parallel fast sweeping method for static Hamilton–Jacobi equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detrixhe, Miles, E-mail: mdetrixhe@engineering.ucsb.edu; University of California Santa Barbara, Santa Barbara, CA, 93106; Gibou, Frédéric, E-mail: fgibou@engineering.ucsb.edu

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton–Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence and parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
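
    For orientation, the serial fast sweeping method for the eikonal equation |∇u| = f alternates Gauss-Seidel passes over all 2^d sweep orderings of the grid, each pass propagating characteristics from a different quadrant. A 2-D sketch in C with f ≡ 1 (this is the textbook serial algorithm, not the paper's hybrid parallel scheme):

        /* Serial fast sweeping for |grad u| = 1 on a uniform 2-D grid with a
         * point source. Compile with -lm. */
        #include <math.h>
        #include <stdio.h>

        #define N   64
        #define H   (1.0 / (N - 1))
        #define BIG 1e10

        static double u[N][N];

        /* Godunov upwind update at (i,j). */
        static void update(int i, int j)
        {
            double a = fmin(i > 0 ? u[i-1][j] : BIG, i < N-1 ? u[i+1][j] : BIG);
            double b = fmin(j > 0 ? u[i][j-1] : BIG, j < N-1 ? u[i][j+1] : BIG);
            double unew = (fabs(a - b) >= H)
                        ? fmin(a, b) + H
                        : 0.5 * (a + b + sqrt(2.0*H*H - (a - b)*(a - b)));
            if (unew < u[i][j]) u[i][j] = unew;   /* monotone decrease only */
        }

        int main(void)
        {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) u[i][j] = BIG;
            u[N/2][N/2] = 0.0;                        /* point source */

            for (int pass = 0; pass < 4; pass++) {    /* the four 2-D orderings */
                for (int i = 0; i < N; i++)    for (int j = 0; j < N; j++)    update(i, j);
                for (int i = N-1; i >= 0; i--) for (int j = 0; j < N; j++)    update(i, j);
                for (int i = N-1; i >= 0; i--) for (int j = N-1; j >= 0; j--) update(i, j);
                for (int i = 0; i < N; i++)    for (int j = N-1; j >= 0; j--) update(i, j);
            }
            printf("u(0,0) = %f, exact distance = %f\n", u[0][0], sqrt(0.5));
            return 0;
        }

    The strict sweep-order dependence visible in the four nested loop orderings is exactly what makes the method hard to parallelize and motivates hybrid schemes like the one in this paper.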

  20. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability comes with very high computation costs, and the 3-D computations have had to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit co-operates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the proposed technique using an NVIDIA GT560Ti graphics card relative to a sequential C implementation on a 3.4 GHz Intel Core i7 3770.

  1. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when using a greater number of computational cores. Simulation results indicate that if the number of cores used is not equal to a multiple of the total number of cores per cluster node, there are allocation strategies which provide more efficient calculations.

  2. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. At first, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field scale coupled flow and transport model. In the first application, a single parallelizable loop is identified that consumes over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, as many compute nodes as there are adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), can be employed to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis where thousands of compute nodes can be efficiently utilized.
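
    The "few lines of compiler directives" step described above looks, schematically, like the following C fragment: a data-independent per-cell loop is marked with a single OpenMP pragma and its iterations are divided among the cores (the loop body is a stand-in, not HGC5 code):

        /* Schematic of single-directive OpenMP parallelization of one
         * dominant, data-independent loop over cells. */
        #include <omp.h>
        #include <stdio.h>

        #define NCELL 1000000

        static double conc[NCELL], rate[NCELL];

        int main(void)
        {
            #pragma omp parallel for schedule(static)   /* the "few lines" */
            for (int i = 0; i < NCELL; i++) {
                rate[i] = 0.0;
                for (int k = 0; k < 100; k++)           /* stand-in for ~100 reactions */
                    rate[i] += 1e-6 * (k + 1) * conc[i];
            }
            printf("done on up to %d threads\n", omp_get_max_threads());
            return 0;
        }

    Because each iteration touches only its own cell, no restructuring is needed; this is why a single dominant loop consuming 97% of the runtime can yield a near-tenfold speedup on 16 cores.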

  3. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) air-craft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant included: development of parallel multipoint approximation methods for the aerodynamic design of the HSCT, use of parallel multipoint approximation methods for structural optimization of the HSCT, mathematical and algorithmic development including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For the predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state of the art models of a complex aircraft configurations.

  4. Nanoparticles, nanotechnology and pulmonary nanotoxicology.

    PubMed

    Ferreira, A J; Cemlyn-Jones, J; Robalo Cordeiro, C

    2013-01-01

    The recently emergent field of nanotechnology involves the production and use of structures at the nanoscale. Research at the atomic, molecular or macromolecular level has led to new materials, systems and structures on a scale of particles less than 100 nm, showing unique and unusual physical, chemical and biological properties, which has enabled new applications in diverse fields and created a multimillion-dollar high-tech industry. Nanotechnologies have a wide variety of uses, from nanomedicine, consumer goods, electronics, communications and computing to environmental applications, efficient energy sources, agriculture, water purification, textiles, and the aerospace industry, among many others. The different characteristics of nanoparticles, such as size, shape, surface charge, chemical properties, solubility and degree of agglomeration, will determine their effects on biological systems and human health, and the likelihood of respiratory hazards. There are a number of new studies about the potential occupational and environmental effects of nanoparticles, and general precautionary measures are now fully justified. Adverse respiratory effects include multifocal granulomas, peribronchial inflammation, progressive interstitial fibrosis, chronic inflammatory responses, collagen deposition and oxidative stress. The authors present an overview of the most important studies on respiratory nanotoxicology and the effects of nanoparticles and engineered nanomaterials on the respiratory system. Copyright © 2012 Sociedade Portuguesa de Pneumologia. Published by Elsevier España. All rights reserved.

  5. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  6. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
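
    The three-phase pattern claimed here maps naturally onto MPI Cartesian subcommunicators: the root broadcasts along its line in the first dimension, each holder then broadcasts across the second dimension, and finally across the third. A rough MPI rendering of the idea in C (illustrative only, not the patented implementation; run with, e.g., mpirun -np 8):

        /* Line-plane broadcast on a 2x2x2 process grid via MPI Cartesian
         * subcommunicators. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int wrank;
            MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

            int dims[3] = {2, 2, 2}, periods[3] = {1, 1, 1};
            MPI_Comm cart;
            MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

            /* One communicator per axis: line[d] links ranks that differ
             * only in their coordinate along dimension d. */
            MPI_Comm line[3];
            for (int d = 0; d < 3; d++) {
                int keep[3] = {0, 0, 0};
                keep[d] = 1;
                MPI_Cart_sub(cart, keep, &line[d]);
            }

            double msg = (wrank == 0) ? 42.0 : -1.0;   /* payload at the root */

            /* Phase 1: along the root's first-dimension line; phases 2 and 3:
             * every current holder forwards along the next dimension. */
            for (int d = 0; d < 3; d++)
                MPI_Bcast(&msg, 1, MPI_DOUBLE, 0, line[d]);

            printf("rank %d now holds %g\n", wrank, msg);
            MPI_Finalize();
            return 0;
        }

    Each MPI_Bcast acts simultaneously on every axis-aligned line; ranks that do not yet hold the payload broadcast placeholder values among themselves, which is harmless because those values are overwritten in a later phase.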

  7. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  8. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.

  9. NANOTECHNOLOGY, NANOMEDICINE; ETHICAL ASPECTS

    PubMed Central

    GÖKÇAY, Banu; ARDA, Berna

    2017-01-01

    Nanotechnology is a field whose name we often hear nowadays. Although what we know about it is still poor, we admire this field of technology; moreover, some even argue that nanotechnology will cause a second industrial revolution. In addition, nanotechnology turns our basic scientific knowledge upside down and is so powerful that it is potent in nearly every scientific field. Thereby, it is impossible to say that nanotechnology, which has so strong an effect on humans and human life, will not have social and ethical consequences. In general, nanotechnology is defined as the reconfiguration of nanomaterials by humans; different definitions also exist, depending on the history of nanotechnology and on different points of view. First of all, in comparison to other fields of technology, what accounts for the prominence of nanotechnology, how its advantages and disadvantages can be foreseen, what roles developed and developing countries play in its progression, what stance nanoethics takes, and how global politics views nanotechnological research under international regulations are all focuses of this study. Last but not least, our capacity to apprehend nanotechnology, the way we adopt and evaluate it, and how we situate nanotechnology within our lives and ethical values are further focuses of interest. PMID:28424570

  10. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
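
    For reference, the space-time kernel density estimate being parallelized here has the standard product-kernel form (generic STKDE notation, not necessarily the authors' exact estimator):

        \hat{f}(x, y, t) = \frac{1}{n\, h_s^{2}\, h_t} \sum_{i=1}^{n} K_s\!\left( \frac{\lVert (x, y) - (x_i, y_i) \rVert}{h_s} \right) K_t\!\left( \frac{\lvert t - t_i \rvert}{h_t} \right),

    where $h_s$ and $h_t$ are the spatial and temporal bandwidths. These same bandwidths set the width of the spatiotemporal buffers described above: a subdomain needs exactly the points within $h_s$ and $h_t$ of its boundary to evaluate the estimate correctly near the edges.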

  11. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstructions. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique viable for clinical practice. Copyright © 2011 by the American Society of Neuroimaging.

  12. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  13. Xyce

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornquist, Heidi K.; Fixel, Deborah A.; Fett, David Brian

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient modes using standard analog (DAE) and/or device (PDE) device models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, including dynamic parallel load-balancing and iterative solvers.

  14. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    PubMed Central

    Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation. PMID:26681933

  15. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
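
    The tutorial's examples are written in MATLAB and R; the same embarrassingly parallel structure in C with OpenMP looks like the following toy Monte Carlo estimate (illustrative only; the per-iteration seeding is simplistic and adequate only as a demonstration):

        /* Embarrassingly parallel Monte Carlo: estimate pi by sampling
         * points in the unit square. */
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>                /* POSIX rand_r() */

        #define NRUNS 10000000L

        int main(void)
        {
            long hits = 0;

            /* Runs are independent, so iterations are simply divided among
             * threads; the reduction combines per-thread tallies safely. */
            #pragma omp parallel for reduction(+:hits)
            for (long i = 0; i < NRUNS; i++) {
                unsigned int seed = (unsigned int)(i * 2654435761u + 1u);
                double x = rand_r(&seed) / (double)RAND_MAX;
                double y = rand_r(&seed) / (double)RAND_MAX;
                if (x * x + y * y <= 1.0)
                    hits++;                /* point fell inside the circle */
            }

            printf("pi ~= %f (using up to %d threads)\n",
                   4.0 * hits / (double)NRUNS, omp_get_max_threads());
            return 0;
        }

    The pattern generalizes directly: any simulation whose replications do not communicate can be parallelized this way, which is the case the tutorial targets.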

  16. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    PubMed

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  17. Seminal nanotechnology literature: a review.

    PubMed

    Kostoff, Ronald N; Koytcheff, Raymond G; Lau, Clifford G Y

    2009-11-01

    This paper uses complementary text mining techniques to identify and retrieve the high impact (seminal) nanotechnology literature over a span of time. Following a brief scientometric analysis of the seminal articles retrieved, these seminal articles are then used as a basis for a comprehensive literature survey of nanoscience and nanotechnology. The paper ends with a global analysis of the relation of seminal nanotechnology document production to total nanotechnology document production.

  18. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) solved with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel solution compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We expect parallel computing to become a basic tool for computationally intensive fractional applications in the near future.

  19. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  20. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  1. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package, MPFEA (Massively Parallel-vector Finite Element Analysis), is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  2. Solving very large, sparse linear systems on mesh-connected parallel computers

    NASA Technical Reports Server (NTRS)

    Opsahl, Torstein; Reif, John

    1987-01-01

    The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh-connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer, by slowing down the algorithm by constant factors, is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh-connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard, is given. Also, a detailed discussion of data mappings and performance issues is given.

  3. High order parallel numerical schemes for solving incompressible flows

    NASA Technical Reports Server (NTRS)

    Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.

    1992-01-01

    The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

  4. Parallelization of the FLAPW method

    NASA Astrophysics Data System (ADS)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  5. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelized matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high speed face recognition systems; that is, speedups of over nine and three times relative to PCA and parallel PCA, respectively.

  6. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.

  7. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are otherwise idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty about two times compared with the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
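
    For reference, the underlying serial Thomas algorithm is a forward elimination over the sub-diagonal followed by a back substitution; the pipelined reformulation in this paper begins the backward step for the first group of lines while the forward step is still running on later groups. A plain serial version in C:

        /* Serial Thomas algorithm for a tridiagonal system: a = sub-,
         * b = main, c = super-diagonal. b and d are overwritten; the
         * solution is returned in d. */
        #include <stdio.h>

        #define N 5

        static void thomas(double *a, double *b, double *c, double *d, int n)
        {
            for (int i = 1; i < n; i++) {          /* forward elimination */
                double m = a[i] / b[i-1];
                b[i] -= m * c[i-1];
                d[i] -= m * d[i-1];
            }
            d[n-1] /= b[n-1];                      /* back substitution */
            for (int i = n - 2; i >= 0; i--)
                d[i] = (d[i] - c[i] * d[i+1]) / b[i];
        }

        int main(void)
        {
            /* Toy diagonally dominant system resembling -u'' = 1. */
            double a[N] = { 0, -1, -1, -1, -1};
            double b[N] = { 2,  2,  2,  2,  2};
            double c[N] = {-1, -1, -1, -1,  0};
            double d[N] = { 1,  1,  1,  1,  1};
            thomas(a, b, c, d, N);
            for (int i = 0; i < N; i++) printf("x[%d] = %f\n", i, d[i]);
            return 0;
        }

    The recurrence in each sweep is inherently sequential along a line, which is why pipelining across many lines, rather than parallelizing within one, is the natural source of concurrency.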

  8. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.

  9. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  10. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the elementary cellular automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem without supplying either all possible results or an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computations. A computing system based on the exhaustive absence of a super-system may produce something more than filling the vacancy.

  11. Reconstruction for time-domain in vivo EPR 3D multigradient oximetric imaging--a parallel processing perspective.

    PubMed

    Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C

    2009-01-01

    Three-dimensional oximetric electron paramagnetic resonance imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible, with fast imaging, to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment, involving digital band pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow on a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four dual-core AMD Opteron shared memory processors to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration achieved a speedup factor of 46.66 compared with the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform the 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 were achieved during the reconstruction process and oximetry computation, respectively, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and is presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through a local intranet (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power for obtaining biophysical parameters from 3D EPR oximetric imaging, almost in real time.

  12. Nanotechnology in food science: Functionality, applicability, and safety assessment.

    PubMed

    He, Xiaojia; Hwang, Huey-Min

    2016-10-01

    Rapid development of nanotechnology is expected to transform many areas of food science and food industry with increasing investment and market share. In this article, current applications of nanotechnology in food systems are briefly reviewed. Functionality and applicability of food-related nanotechnology are highlighted in order to provide a comprehensive view on the development and safety assessment of nanotechnology in the food industry. While food nanotechnology offers great potential benefits, there are emerging concerns arising from its novel physicochemical properties. Therefore, the safety concerns and regulatory policies on its manufacturing, processing, packaging, and consumption are briefly addressed. At the end of this article, the perspectives of nanotechnology in active and intelligent packaging applications are highlighted. Copyright © 2016. Published by Elsevier B.V.

  13. Nanorobots: Future in dentistry

    PubMed Central

    Shetty, Neetha J.; Swati, P.; David, K.

    2013-01-01

    The purpose of this paper is to review the phenomenon of nanotechnology as it might apply to dentistry, as a new field called nanodentistry. Treatment possibilities might include the application of nanotechnology to local anesthesia, dentition renaturalization, the permanent cure of hypersensitivity, complete orthodontic realignment in a single visit, covalently bonded diamondized enamel, and continuous oral health maintenance using mechanical dentifrobots. Dental nanorobots could be constructed to destroy caries-causing bacteria or to repair tooth blemishes where decay has set in, by using a computer to direct these tiny workers in their tasks. Dental nanorobots might be programmed to use specific motility mechanisms to crawl or swim through human tissue with navigational precision, to acquire energy, to sense and manipulate their surroundings, to achieve safe cytopenetration, and to use any of a multitude of techniques to monitor, interrupt, or alter nerve-impulse traffic in individual nerve cells in real time. PMID:23960556

  14. Launch of the London Centre for Nanotechnology.

    PubMed

    Aeppli, Gabriel; Pankhurst, Quentin

    2006-12-01

    Is nanomedicine an area with the promise that its proponents claim? Professors Gabriel Aeppli and Quentin Pankhurst explore the issues in light of the new London Centre for Nanotechnology (LCN), a joint enterprise between Imperial College and University College London that opened on November 7, 2006. The center is a multidisciplinary research initiative that aims to bridge the physical, engineering and biomedical sciences. In this interview, Professor Gabriel Aeppli, LCN co-Director, and Deputy Director Professor Quentin Pankhurst discuss the advent and future role of the LCN with Nanomedicine's Morag Robertson. Professor Aeppli was formerly with NEC, Bell Laboratories and MIT and has more than 15 years' experience in the computer and telecommunications industry. Professor Pankhurst is a physicist with more than 20 years' experience of working with magnetic materials and nanoparticles, who now works closely with clinicians and medics on innovative healthcare applications. He also recently formed the new start-up company Endomagnetics Inc.

  15. Proceedings ICASS 2017

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Schaaf, Peter

    2018-07-01

    This special issue of the high-impact, international, peer-reviewed journal Applied Surface Science constitutes the proceedings of the 2nd International Conference on Applied Surface Science (ICASS), held 12-16 June 2017 in Dalian, China. The conference provided a forum for researchers in all areas of applied surface science to present their work. The main topics of the conference were in line with the most popular areas of research reported in Applied Surface Science. Thus, this issue includes current research on the role and use of surfaces in chemical and physical processes, related to catalysis, electrochemistry, surface engineering and functionalization, biointerfaces, semiconductors, 2D layered materials, surface nanotechnology, energy, new/functional materials and nanotechnology, as well as the various techniques and characterization methods involved. Scientific research at the atomic and molecular level on material properties, investigated with specific surface analytical techniques and/or computational methods, is essential for any further progress in these fields.

  16. Room-temperature creation and spin-orbit torque-induced manipulation of skyrmions in thin film

    NASA Astrophysics Data System (ADS)

    Yu, Guoqiang; Upadhyaya, Pramey; Li, Xiang; Li, Wenyuan; Im, Se Kwon K.; Fan, Yabin; Wong, Kin L.; Tserkovnyak, Yaroslav; Amiri, Pedram Khalili; Wang, Kang L.

    Magnetic skyrmions, which are topologically protected spin textures, are promising candidates for ultra-low-energy, ultra-high-density magnetic data storage and computing applications [1, 2]. To date, most experiments on skyrmions have been carried out at low temperatures; the choice of available materials is limited, and electrical means of controlling skyrmions are lacking. Here, we experimentally demonstrate a method for creating a skyrmion bubble phase in ferromagnetic thin films at room temperature. We further demonstrate that the created skyrmion bubbles can be manipulated by electric current. This room-temperature creation and manipulation of skyrmions in thin films is of particular interest for applications, being suitable for room-temperature operation and compatible with existing semiconductor manufacturing tools. 1. Nagaosa, N., Tokura, Y. Nature Nanotechnology 8, 899-911 (2013). 2. Fert, A., et al., Nature Nanotechnology 8, 152-156 (2013).

  17. 75 FR 51829 - Public Workshop on Medical Devices and Nanotechnology: Manufacturing, Characterization, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ...] Public Workshop on Medical Devices and Nanotechnology: Manufacturing, Characterization, and... entitled ``Medical Devices & Nanotechnology: Manufacturing, Characterization, and Biocompatibility... experience or expertise with nanotechnology. There will be a limited number of round-table participants. FDA...

  18. 75 FR 30874 - National Nanotechnology Coordination Office, Nanoscale Science, Engineering and Technology...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-02

    ... OFFICE OF SCIENCE AND TECHNOLOGY POLICY National Nanotechnology Coordination Office, Nanoscale... Technology; The National Nanotechnology Initiative (NNI) Strategic Planning Stakeholder Workshop: Public Meeting ACTION: Notice of public meeting. SUMMARY: The National Nanotechnology Coordination Office (NNCO...

  19. Access and visualization using clusters and other parallel computers

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Bergou, Attila; Berriman, Bruce; Block, Gary; Collier, Jim; Curkendall, Dave; Good, John; Husman, Laura; Jacob, Joe; Laity, Anastasia; hide

    2003-01-01

    JPL's Parallel Applications Technologies Group has been exploring the issues of data access and visualization of very large data sets over the past 10 or so years. This work has used a number of types of parallel computers and today includes the use of commodity clusters. This talk will highlight some of the applications and tools we have developed, including how they use parallel computing resources and, specifically, how we are using modern clusters. Our applications focus on NASA's needs; thus our data sets are usually related to Earth and space science, including data delivered from instruments in space and data produced by telescopes on the ground.

  20. Standardisation in the field of nanotechnology: some issues of legitimacy.

    PubMed

    Forsberg, Ellen-Marie

    2012-12-01

    Nanotechnology will allegedly have a revolutionary impact in a wide range of fields, but it has also created novel concerns about health, safety and the environment (HSE). Nanotechnology regulation has nevertheless lagged behind nanotechnology development. In 2004 the International Organization for Standardization established a technical committee to produce nanotechnology standards for terminology, measurements, HSE issues and product specifications. These standards are meant to play a role in nanotechnology development, as well as in national and international nanotechnology regulation, and will therefore have consequences for consumers, workers and the environment. This paper gives an overview of the work of the technical committee on nanotechnology and discusses some challenges with regard to legitimacy in such work. It focuses particularly on stakeholder involvement and the potential problems of scientific robustness when standardising at such an early stage of scientific development. The intention of the paper is to raise some important issues rather than to draw strong conclusions. However, it concludes with some suggestions for improving legitimacy in TC 229 and a call for increased public awareness of standardisation in the field of nanotechnology.

  1. CMOS VLSI Layout and Verification of a SIMD Computer

    NASA Technical Reports Server (NTRS)

    Zheng, Jianqing

    1996-01-01

    A CMOS VLSI layout and verification of a 3 x 3 processor parallel computer have been completed. The layout was done using the MAGIC tool and the verification using HSPICE. Suggestions for expanding the computer into a million-processor network are presented. Many problems that might be encountered when implementing a massively parallel computer are discussed.

  2. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
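    The portability claim rests on plain MPI point-to-point communication. Below is a minimal sketch of the kind of ghost-cell (halo) exchange a multi-block solver performs; the 1-D layout and the name exchange_halo are assumptions for illustration, not the authors' implementation.

        #include <mpi.h>
        #include <vector>

        // Exchange one layer of ghost cells with the left and right neighbour
        // blocks. "field" holds n interior cells plus one ghost cell at each
        // end; ranks without a neighbour pass MPI_PROC_NULL, which turns the
        // corresponding transfer into a no-op.
        void exchange_halo(std::vector<double>& field, int n, int left, int right) {
            MPI_Sendrecv(&field[1], 1, MPI_DOUBLE, left, 0,       // send first interior cell left
                         &field[n + 1], 1, MPI_DOUBLE, right, 0,  // receive right ghost cell
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&field[n], 1, MPI_DOUBLE, right, 1,      // send last interior cell right
                         &field[0], 1, MPI_DOUBLE, left, 1,       // receive left ghost cell
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }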

  3. Optimistic barrier synchronization

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1992-01-01

    Barrier synchronization is a fundamental operation in parallel computation. In many contexts, at the point a processor enters a barrier it knows that it has already processed all the work required of it prior to synchronization. The alternative case, when a processor cannot enter a barrier with the assurance that it has already performed all the necessary pre-synchronization computation, is treated here. The problem arises when the number of pre-synchronization messages to be received by a processor is unknown, for example, in a parallel discrete simulation or any other computation that is largely driven by an unpredictable exchange of messages. We describe an optimistic O(log^2 P) barrier algorithm for such problems, study its performance on a large-scale parallel system, and consider extensions to general associative reductions as well as associative parallel prefix computations.
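    For contrast with the optimistic variant, the conventional case, where every processor knows it is finished before entering the barrier, can be realized with a simple sense-reversing counting barrier. The C++ sketch below shows that baseline; it is illustrative only and is not the paper's O(log^2 P) optimistic algorithm.

        #include <atomic>

        class Barrier {
            const int nthreads;
            std::atomic<int> count;
            std::atomic<bool> sense{false};
        public:
            explicit Barrier(int n) : nthreads(n), count(n) {}
            // Each thread passes its own local_sense flag, initialized to
            // false and flipped once per barrier episode.
            void wait(bool& local_sense) {
                local_sense = !local_sense;
                if (count.fetch_sub(1) == 1) {   // last thread to arrive
                    count.store(nthreads);       // reset for the next episode
                    sense.store(local_sense);    // release the waiting threads
                } else {
                    while (sense.load() != local_sense) { /* spin */ }
                }
            }
        };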

  4. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  5. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
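    The parallelism of a hash tree mode comes from hashing the nodes of each tree level independently. The C++ sketch below illustrates that structure with std::hash as a stand-in for a real function such as Keccak; the names and the padding rule are assumptions, not the paper's prototype.

        #include <cstddef>
        #include <functional>
        #include <string>
        #include <utility>
        #include <vector>

        // Combine two child digests; a real tree mode would feed the
        // concatenated digests through the underlying compression function.
        static std::size_t hash_pair(std::size_t a, std::size_t b) {
            return std::hash<std::string>{}(std::to_string(a) + ":" + std::to_string(b));
        }

        // Reduce the leaf digests level by level; nodes on one level are
        // independent, so each sweep is a parallel loop.
        std::size_t tree_hash(std::vector<std::size_t> level) {
            while (level.size() > 1) {
                if (level.size() % 2) level.push_back(level.back()); // pad odd level
                std::vector<std::size_t> next(level.size() / 2);
                #pragma omp parallel for
                for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)next.size(); ++i)
                    next[i] = hash_pair(level[2 * i], level[2 * i + 1]);
                level = std::move(next);
            }
            return level.empty() ? 0 : level[0];
        }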

  6. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

  7. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

  8. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  9. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  10. Implementing and analyzing the multi-threaded LP-inference

    NASA Astrophysics Data System (ADS)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    Logical production equations provide new possibilities for optimizing backward inference in intelligent production-type systems. The strategy of relevant backward inference aims to minimize the number of queries to an external information source (either a database or an interactive user). The idea of the method is to compute the set of initial preimages and search it for the true preimage. Each stage can be executed independently and in parallel, and the work within a given stage can also be distributed among parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced “pipeline” scheme of parallel computation that increases the degree of parallelism. The authors also provide some details of the LP-structures implementation.

  11. A Parallel Processing Algorithm for Remote Sensing Classification

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers running the Linux operating system. For example, the Medusa cluster at NASA/GSFC provides supercomputing performance, 130 Gflops (Linpack benchmark), at moderate cost ($370K). However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering Earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but which with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier, using the Support Vector Machine (SVM) approach. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) is described, then details specific to the implementation are given, and timing results are reported to show what speedups are possible using parallel computation. The paper closes with a discussion of the results.
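    The decomposition described, independent tasks with no communication, is the simplest parallel pattern. A minimal C++/OpenMP sketch of per-tile classification is shown below; Tile, classify_tile, and the stub classifier are hypothetical stand-ins for the trained SVM, not the paper's code.

        #include <cstddef>
        #include <vector>

        // One spatial block of hyperspectral pixels (contents elided).
        struct Tile { std::vector<float> pixels; };

        // Stand-in for per-tile SVM prediction; a real implementation would
        // call the trained classifier here.
        static int classify_tile(const Tile& t) {
            return t.pixels.empty() ? 0 : 1;
        }

        // Tiles are independent tasks: no communication is needed, so the
        // loop maps directly onto workers (OpenMP threads here for brevity;
        // a cluster version would distribute tiles across nodes).
        std::vector<int> classify_all(const std::vector<Tile>& tiles) {
            std::vector<int> labels(tiles.size());
            #pragma omp parallel for schedule(dynamic)
            for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)tiles.size(); ++i)
                labels[i] = classify_tile(tiles[i]);
            return labels;
        }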

  12. Nanotechnology: A Vast Field For The Creative Mind

    NASA Technical Reports Server (NTRS)

    Benavides, Jeannette

    2004-01-01

    This viewgraph presentation gives examples of possible future uses of nanotechnology, with some emphasis on carbon nanotubes and medical applications. The presentation provides an overview of organizations conducting nanotechnology research in the United States, and suggests a timeline for nanotechnology development.

  13. Comparative analysis of nanotechnology awareness in consumers and experts in South Korea.

    PubMed

    Kim, Yu-Ri; Lee, Eun Jeong; Park, Sung Ha; Kwon, Hyo Jin; An, Seong Soo A; Son, Sang Wook; Seo, Young Rok; Pie, Jae-Eun; Yoon, Myoung; Kim, Ja Hei; Kim, Meyoung-Kon

    2014-01-01

    This study examined the need for public communication about nanotechnologies and nanoparticles by providing a comparative analysis of the differences in risk awareness of nanotechnologies and nanoparticles between consumers and experts. A total of 1,007 consumers and 150 experts participated in this study. A questionnaire was prepared examining their awareness of nanotechnologies and nanomaterials and their view of the necessity for information and education about the latest nanotechnologies and nanomaterials. Our results indicated that the expert group recognized that they knew more than consumers about nanotechnology and that there was a need for relevant education in nanotechnology and nanomaterials among consumers. We found that the consumer group had a more positive attitude toward nanotechnology, even though they did not know much about it. Moreover, the consumer group was inconclusive about the type of information on nanotechnology deemed necessary for the public, as well as the suitable party to be responsible for education and for delivering the information. An education and promotion program targeting consumers should be established to overcome the differences between consumers and experts in their awareness of nanotechnology. Specifically, the establishment of concepts for nanomaterials or nanoproducts is required immediately. With clear standards on nanomaterials, consumers can make informed decisions in selecting nanoproducts in the market.

  14. Comparative analysis of nanotechnology awareness in consumers and experts in South Korea

    PubMed Central

    Kim, Yu-Ri; Lee, Eun Jeong; Park, Sung Ha; Kwon, Hyo Jin; An, Seong Soo A; Son, Sang Wook; Seo, Young Rok; Pie, Jae-Eun; Yoon, Myoung; Kim, Ja Hei; Kim, Meyoung-Kon

    2014-01-01

    Purpose This study examined the need for public communication about nanotechnologies and nanoparticles by providing a comparative analysis of the differences in risk awareness of nanotechnologies and nanoparticles between consumers and experts. Methods A total of 1,007 consumers and 150 experts participated in this study. A questionnaire was prepared examining their awareness of nanotechnologies and nanomaterials and their view of the necessity for information and education about the latest nanotechnologies and nanomaterials. Results Our results indicated that the expert group recognized that they knew more than consumers about nanotechnology and that there was a need for relevant education in nanotechnology and nanomaterials among consumers. We found that the consumer group had a more positive attitude toward nanotechnology, even though they did not know much about it. Moreover, the consumer group was inconclusive about the type of information on nanotechnology deemed necessary for the public, as well as the suitable party to be responsible for education and for delivering the information. Conclusion An education and promotion program targeting consumers should be established to overcome the differences between consumers and experts in their awareness of nanotechnology. Specifically, the establishment of concepts for nanomaterials or nanoproducts is required immediately. With clear standards on nanomaterials, consumers can make informed decisions in selecting nanoproducts in the market. PMID:25565823

  15. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  16. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Up to today, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful in accelerating the convergence of an iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric linear systems which avoids both difficulties. We explicitly compute an approximate inverse of our original matrix. This new preconditioning matrix can be applied most efficiently in iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism: for a problem of size n, the preconditioning matrix can be computed by solving n independent small least-squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
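    The key property is that applying an explicitly stored approximate inverse M of A is just a matrix-vector product, which parallelizes row-wise with no recursion. The dense-storage C++/OpenMP sketch below illustrates that application step only (a real implementation would use sparse storage); it is not the authors' CM-5 code.

        #include <cstddef>
        #include <vector>

        // Apply the stored approximate inverse M to a residual r: z = M * r.
        // Rows are independent, so there is no recursion and no unstructured
        // communication, unlike a triangular solve. z is pre-sized to M.size().
        void apply_preconditioner(const std::vector<std::vector<double>>& M,
                                  const std::vector<double>& r,
                                  std::vector<double>& z) {
            #pragma omp parallel for
            for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)M.size(); ++i) {
                double acc = 0.0;
                for (std::size_t j = 0; j < r.size(); ++j)
                    acc += M[i][j] * r[j];
                z[i] = acc;
            }
        }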

  17. Self-assembled three-dimensional chiral colloidal architecture.

    PubMed

    Ben Zion, Matan Yah; He, Xiaojin; Maass, Corinna C; Sha, Ruojie; Seeman, Nadrian C; Chaikin, Paul M

    2017-11-03

    Although stereochemistry has been a central focus of the molecular sciences since Pasteur, its province has previously been restricted to the nanometric scale. We have programmed the self-assembly of micron-sized colloidal clusters with structural information stemming from a nanometric arrangement. This was done by combining DNA nanotechnology with colloidal science. Using the functional flexibility of DNA origami in conjunction with the structural rigidity of colloidal particles, we demonstrate the parallel self-assembly of three-dimensional microconstructs, evincing highly specific geometry that includes control over position, dihedral angles, and cluster chirality. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  18. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are changing rapidly, and computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs; advanced, modular computational tools are required, such as those that incorporate the technology of massively parallel processors (MPP).

  19. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of the individual disciplines is maintained using a meta-programming approach, so disciplines are integrated without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers, and the scalability and portability of the approach are demonstrated on several parallel computers.

  20. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2 N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid-body method. Recursive doubling is used to reformulate the linear recurrence equations that are required to compute the diagonal elements of the matrix, resulting in O(log2 N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2 N) algorithm is presented in both equation and graphic forms, which clearly show the parallelism inherent in the algorithm.
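    Recursive doubling converts a length-N linear recurrence into about log2 N fully parallel sweeps. The C++/OpenMP sketch below shows the idea for the simplest such recurrence, an inclusive prefix sum; it illustrates the reformulation only and is not the paper's inertia-matrix algorithm.

        #include <cstddef>
        #include <vector>

        // Inclusive prefix sum by recursive doubling: after sweep s, x[i]
        // holds the sum of up to 2^s trailing elements ending at i. Each
        // sweep is fully parallel; the copy keeps the sketch simple at the
        // cost of extra memory traffic.
        void prefix_sum_doubling(std::vector<double>& x) {
            const std::size_t n = x.size();
            for (std::size_t stride = 1; stride < n; stride *= 2) {
                std::vector<double> prev = x;   // read from the previous sweep
                #pragma omp parallel for
                for (std::ptrdiff_t i = (std::ptrdiff_t)stride; i < (std::ptrdiff_t)n; ++i)
                    x[i] = prev[i] + prev[i - stride];
            }
        }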

  1. Nanotechnology: moving from microarrays toward nanoarrays.

    PubMed

    Chen, Hua; Li, Jun

    2007-01-01

    Microarrays are important tools for high-throughput analysis of biomolecules. The use of microarrays for parallel screening of nucleic acid and protein profiles has become an industry standard. Limitations of microarrays include the requirement for relatively large sample volumes, long incubation times, and limits of detection. In addition, traditional microarrays rely on bulky detection instrumentation, and sample amplification and labeling are laborious, which increases analysis cost and delays results. These problems keep microarray techniques from point-of-care and field applications. One strategy for overcoming these problems is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. This chapter introduces the concept and advantages of nanotechnology and then describes current methods and protocols for novel nanoarrays in three areas: (1) label-free nucleic acid analysis using nanoarrays, (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy, and (3) nanoarrays for enzyme-based assays. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.

  2. Structure, stability and behaviour of nucleic acids in ionic liquids

    PubMed Central

    Tateishi-Karimata, Hisae; Sugimoto, Naoki

    2014-01-01

    Nucleic acids have become a powerful tool in nanotechnology because of their conformational polymorphism. However, lack of a medium in which nucleic acid structures exhibit long-term stability has been a bottleneck. Ionic liquids (ILs) are potential solvents in the nanotechnology field. Hydrated ILs, such as choline dihydrogen phosphate (choline dhp) and deep eutectic solvent (DES) prepared from choline chloride and urea, are ‘green’ solvents that ensure long-term stability of biomolecules. An understanding of the behaviour of nucleic acids in hydrated ILs is necessary for developing DNA materials. We here review current knowledge about the structures and stabilities of nucleic acids in choline dhp and DES. Interestingly, in choline dhp, A–T base pairs are more stable than G–C base pairs, the reverse of the situation in buffered NaCl solution. Moreover, DNA triplex formation is markedly stabilized in hydrated ILs compared with aqueous solution. In choline dhp, the stability of Hoogsteen base pairs is comparable to that of Watson–Crick base pairs. Moreover, the parallel form of the G-quadruplex is stabilized in DES compared with aqueous solution. The behaviours of various DNA molecules in ILs detailed here should be useful for designing oligonucleotides for the development of nanomaterials and nanodevices. PMID:25013178

  3. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels: the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model) model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. The author illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. The author then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.

  5. Resource Letter N-1: Nanotechnology

    NASA Astrophysics Data System (ADS)

    Cela, Devin; Dresselhaus, Mildred; Helen Zeng, Tingying; Terrones, Mauricio; Souza Filho, Antonio G.; Ferreira, Odair P.

    2014-01-01

    This Resource Letter provides a guide to the literature on Nanotechnology. Journal articles, books, websites, and other documents are cited on the following topics: attributes of various types of nanomaterials, nanotechnology in the context of different academic fields, and the effects of nanotechnology on society.

  6. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing the pixel value calculations, balancing the load across the processors, and applying the results to the Z buffer efficiently in parallel require special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.
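    A shared-memory analog of the idea, computing candidate pixels in parallel and folding each group into the Z buffer, is sketched below in C++/OpenMP; the data layout and names are assumptions, and the MPP's SIMD sort-based routing is not reproduced.

        #include <cstddef>
        #include <vector>

        struct Pixel { int x, y; float z, color; };

        // Fold one group of candidate pixels into the Z buffer, keeping the
        // nearest depth per pixel. The critical section serializes only the
        // compare-and-store; the MPP version used virtual routing instead.
        void z_buffer_apply(std::vector<float>& depth, std::vector<float>& frame,
                            int width, const std::vector<Pixel>& group) {
            #pragma omp parallel for
            for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)group.size(); ++i) {
                const Pixel& p = group[i];
                const int idx = p.y * width + p.x;
                #pragma omp critical
                if (p.z < depth[idx]) { depth[idx] = p.z; frame[idx] = p.color; }
            }
        }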

  7. Thinking on the application of nanotechnology in the mechanism research on the acupuncture treatment of female climacteric syndrome

    NASA Astrophysics Data System (ADS)

    Xu, Yunxiang; Cai, Jinyuan; Chen, Guizhen; Chen, Pengdian

    2009-08-01

    By analyzing the relationship between nanotechnology and medical science, and in particular acupuncture and moxibustion science, this paper discusses the application of nanotechnological methods to research on the mechanism by which acupuncture treats female climacteric syndrome. Nanotechnology is among the fastest-developing, most promising, and most far-reaching of today's high technologies, and it greatly promotes the development of medical science, including acupuncture and moxibustion science. Nanotechnology will open an unprecedented field for the development of acupuncture and moxibustion science, and breakthroughs can be expected from applying it to research on the mechanism of acupuncture and moxibustion treatment of female climacteric syndrome.

  8. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  9. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process so-called “biological big data” that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature on the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  10. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process so-called "biological big data" that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature on the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  11. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling-rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
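    To make the data parallelism of a regular-domain Poisson solve concrete, the C++/OpenMP sketch below shows one Jacobi sweep of the 2-D discrete Poisson equation; this is a generic illustration only, not the Fast Invariant Imbedding algorithm.

        #include <vector>

        // One Jacobi sweep for -laplacian(u) = f on an n-by-n grid with
        // spacing h (boundary rows/columns held fixed). Every interior point
        // updates independently, which is the parallelism such solves expose.
        void jacobi_sweep(const std::vector<double>& u, std::vector<double>& u_new,
                          const std::vector<double>& f, int n, double h) {
            #pragma omp parallel for
            for (int i = 1; i < n - 1; ++i)
                for (int j = 1; j < n - 1; ++j)
                    u_new[i * n + j] = 0.25 * (u[(i - 1) * n + j] + u[(i + 1) * n + j]
                                             + u[i * n + j - 1] + u[i * n + j + 1]
                                             + h * h * f[i * n + j]);
        }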

  12. Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort in migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTools' ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, the amount of user interaction required, limitations, and portability. Based on these results, a discussion of the feasibility of computer-aided parallelization of aerospace applications is presented, along with suggestions for future work.

  13. Development of an Attitude Scale to Assess K-12 Teachers' Attitudes toward Nanotechnology

    NASA Astrophysics Data System (ADS)

    Lan, Yu-Ling

    2012-05-01

    To maximize the contributions of nanotechnology to society, at least 60 countries have put effort into this field. In Taiwan, a government-funded K-12 Nanotechnology Programme was established to train K-12 teachers with adequate nanotechnology literacy to foster the next generation of Taiwanese people with sufficient knowledge in nanotechnology. In the present study, the Nanotechnology Attitude Scale for K-12 teachers (NAS-T) was developed to assess K-12 teachers' attitudes toward nanotechnology. The NAS-T includes 23 Likert-scale items that can be grouped into three components: importance of nanotechnology, affective tendencies in science teaching, and behavioural tendencies to teach nanotechnology. A sample of 233 K-12 teachers who had participated in the K-12 Nanotechnology Programme was included in the present study to investigate the psychometric properties of the NAS-T. Exploratory factor analysis of this teacher sample suggested that the NAS-T is a three-factor model that explains 64.11% of the total variance. This model was confirmed by confirmatory factor analysis, validating the factor structure of the NAS-T. Cronbach's alpha values of the three NAS-T subscales ranged from 0.89 to 0.95. Moderate to strong correlations among teachers' NAS-T domain scores, self-perception of their own nanoscience knowledge, and their science-teaching efficacy demonstrated good convergent validity of the NAS-T. As a whole, the psychometric properties of the NAS-T indicate that it is an effective instrument for assessing K-12 teachers' attitudes toward nanotechnology and will serve as a valuable tool to evaluate teachers' attitude changes after participating in the K-12 Nanotechnology Programme.

  14. Connectionist Models and Parallelism in High Level Vision.

    DTIC Science & Technology

    1985-01-01

    Feldman, Jerome A.; Grant N00014-82-K-0193... Connectionist Models: Background and Overview... Computer science is just beginning to look seriously at parallel computation: it may turn out that...the chair. The program includes intermediate-level networks that compute more complex joints and ones that compute parallelograms in the image.

  15. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics-card-based computational technologies have paved the way for improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented to accelerate computationally intensive problems in various fields of computational science, including bioinformatics, in which big-data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both the Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations, ranging from 18.5× to 157× speed-up, once the thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions show that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly and make them more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
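    At the core of DFC is a correlation computed inside a window that slides along two regions' time courses; window positions are independent, which is what both the OpenMP and CUDA versions exploit. The C++/OpenMP sketch below is a minimal illustration under assumed names; it is not the paper's implementation and it assumes nonzero variance in every window.

        #include <cmath>
        #include <vector>

        // Pearson correlation of time courses a and b inside every window of
        // length w; each window position is an independent computation.
        std::vector<double> sliding_corr(const std::vector<double>& a,
                                         const std::vector<double>& b, int w) {
            const int nwin = (int)a.size() - w + 1;
            std::vector<double> r(nwin);
            #pragma omp parallel for
            for (int s = 0; s < nwin; ++s) {
                double ma = 0, mb = 0;
                for (int t = 0; t < w; ++t) { ma += a[s + t]; mb += b[s + t]; }
                ma /= w; mb /= w;
                double num = 0, va = 0, vb = 0;
                for (int t = 0; t < w; ++t) {
                    const double da = a[s + t] - ma, db = b[s + t] - mb;
                    num += da * db; va += da * da; vb += db * db;
                }
                r[s] = num / std::sqrt(va * vb);  // assumes nonzero variance
            }
            return r;
        }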

  16. Administering truncated receive functions in a parallel messaging interface

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Administering truncated receive functions in a parallel messaging interface (`PMI`) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
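    A plain C++ stand-in for the truncation semantics just described, receive all of the data, deliver only the requested portion, and discard the rest, is sketched below; drain_from_network and truncated_recv are hypothetical names, not the patent's PMI API.

        #include <algorithm>
        #include <cstddef>
        #include <cstring>
        #include <vector>

        struct Message { std::vector<char> bytes; };

        // Stub transport for the sketch; a real system would dequeue the
        // next message from the network hardware here.
        static Message drain_from_network() {
            return Message{std::vector<char>(64, 'x')};
        }

        // Receive ALL of the incoming data, hand the caller only the prefix
        // it asked for, and silently discard the remainder.
        std::size_t truncated_recv(char* user_buf, std::size_t user_len) {
            Message m = drain_from_network();
            const std::size_t n = std::min(user_len, m.bytes.size());
            std::memcpy(user_buf, m.bytes.data(), n);
            return n;   // bytes beyond n are dropped with the Message
        }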

  17. Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4

    DTIC Science & Technology

    2017-12-20

    by first-principles methods in the software package ACES, using large parallel computers growing to the exascale. SUBJECT TERMS: computer modeling, excited states, optical properties, structure, stability, activation barriers, first-principles methods, parallel computing... Progress with new density functional methods...

  18. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  19. A Survey of Parallel Computing

    DTIC Science & Technology

    1988-07-01

    Evaluating Two Massively Parallel Machines. Communications of the ACM 29, 8 (August), pp. 752-758. Gajski, D.D., Padua, D.A., Kuck...Computer Architecture, edited by Gajski, D.D., Milutinovic, V.M., Siegel, H.J., and Furht, B.P. IEEE Computer Society Press, Washington, D.C., pp. 387-407

  20. Topical perspective on massive threading and parallelism.

    PubMed

    Farber, Robert M

    2011-09-01

    Unquestionably, computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention, as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as in the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts, be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world, is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered, along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism, with some insight into the future. Published by Elsevier Inc.
