Science.gov

Sample records for advanced computer architecture

  1. ATCA for Machines -- Advanced Telecommunications Computing Architecture

    SciTech Connect

    Larsen, R.S. (SLAC)

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  2. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    SciTech Connect

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  3. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, to be upgraded, and to allow on-site modifications.

  4. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.
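
    A minimal sketch of such a 'direct' optimization loop, with a cheap analytic stand-in for the objective (in the study, each evaluation is an OVERFLOW Navier-Stokes solution; the design variables, step size, and target values below are illustrative assumptions, not values from the paper):

      import numpy as np

      def drag(shape):
          # Stand-in objective: the real study evaluates a flow solution
          # (OVERFLOW) per design; here a cheap analytic bowl suffices.
          return np.sum((shape - np.array([0.3, -0.1, 0.05])) ** 2)

      def fd_gradient(f, x, h=1e-6):
          # Central-difference gradient: one pair of evaluations per variable.
          g = np.zeros_like(x)
          for i in range(len(x)):
              e = np.zeros_like(x)
              e[i] = h
              g[i] = (f(x + e) - f(x - e)) / (2 * h)
          return g

      shape = np.zeros(3)            # design variables controlling the surface
      for it in range(50):
          shape -= 0.4 * fd_gradient(drag, shape)
      print(shape)                   # converges toward the optimal design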

  5. A comparison of computer architectures for the NASA demonstration advanced avionics system

    NASA Technical Reports Server (NTRS)

    Seacord, C. L.; Bailey, D. G.; Larson, J. C.

    1979-01-01

    The paper compares computer architectures for the NASA demonstration advanced avionics system. Two computer architectures are described with an unusual approach to fault tolerance: a single spare processor can correct for faults in any of the distributed processors by taking on the role of a failed module. It was shown that the system must be viewed from a functional point of view to properly apply redundancy and achieve fault tolerance and ultrareliability. Data are presented on complexity and mission failure probability which show that the revised version offers equivalent mission reliability at lower cost as measured by hardware and software complexity.
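
    A minimal sketch of the reliability arithmetic behind such comparisons, assuming n identical modules with a constant failure rate and a single universal spare that can stand in for any one failed module (the rates and mission time are invented for illustration, not data from the paper):

      import math

      def mission_reliability(n, lam, t, spares=1):
          # P(mission success) for n active modules, each with constant
          # failure rate lam (per hour) over mission time t, when up to
          # `spares` failed modules can be replaced by universal spares.
          r = math.exp(-lam * t)              # single-module reliability
          q = 1.0 - r
          return sum(math.comb(n, k) * q ** k * r ** (n - k)
                     for k in range(spares + 1))

      # Example: 8 modules, 1e-4 failures/hour, 10-hour mission
      print(mission_reliability(8, 1e-4, 10, spares=0))   # no redundancy
      print(mission_reliability(8, 1e-4, 10, spares=1))   # one universal spare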

  6. CRADA ORNL 91-0046B final report: Assessment of IBM advanced computing architectures

    SciTech Connect

    Geist, G.A.

    1996-02-01

    This was a Cooperative Research and Development Agreement (CRADA) with IBM to assess their advanced computer architectures. Over the course of this project three different architectures were evaluated: the POWER/4 RIOS1-based shared memory multiprocessor, the POWER/2 RIOS2-based high performance workstation, and the J30 PowerPC-based shared memory multiprocessor. In addition to this hardware, several software packages were beta tested for IBM, including the ESSL scientific computing library, the nv video-conferencing package, the Ultimedia multimedia display environment, FORTRAN 90 and C++ compilers, and the AIX 4.1 operating system. Both IBM and ORNL benefited from the research performed in this project and, even though access to the POWER/4 computer was delayed several months, all milestones were met.

  7. New computer architectures

    SciTech Connect

    Tiberghien, J.

    1984-01-01

    This book presents papers on supercomputers. Topics considered include decentralized computer architecture, new programming languages, data flow computers, reduction computers, parallel prefix calculations, structural and behavioral descriptions of digital systems, instruction sets, software generation, personal computing, and computer architecture education.

  8. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and that, in the long term, Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  9. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
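
    A minimal sketch of the critical-path computation on which such an allocation can be based; the task graph, names, and costs are invented for illustration:

      import functools

      # Hypothetical task graph: name -> (cost, successors)
      tasks = {
          'integrate': (4, ['estimate']),
          'sense':     (2, ['estimate', 'guidance']),
          'estimate':  (3, ['guidance']),
          'guidance':  (5, []),
      }

      @functools.lru_cache(maxsize=None)
      def critical_length(node):
          # Longest cost path from `node` to any sink of the DAG.
          cost, succs = tasks[node]
          return cost + max((critical_length(s) for s in succs), default=0)

      # Assign processors to tasks in decreasing critical-path order,
      # so the tasks that gate the deadline are scheduled first.
      order = sorted(tasks, key=critical_length, reverse=True)
      print([(t, critical_length(t)) for t in order])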

  10. Advanced ground station architecture

    NASA Technical Reports Server (NTRS)

    Zillig, David; Benjamin, Ted

    1994-01-01

    This paper describes a new station architecture for NASA's Ground Network (GN). The architecture makes efficient use of emerging technologies to provide dramatic reductions in size, operational complexity, and operational and maintenance costs. The architecture, which is based on recent receiver work sponsored by the Office of Space Communications Advanced Systems Program, allows integration of both GN and Space Network (SN) modes of operation in the same electronics system. It is highly configurable through software and the use of charge-coupled device (CCD) technology to provide a wide range of operating modes. Moreover, it affords modularity of features which are optional depending on the application. The resulting system incorporates advanced RF, digital, and remote control technology capable of introducing significant operational, performance, and cost benefits to a variety of NASA communications and tracking applications.

  11. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on Linux clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  12. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation, performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  13. The Spin Torque Lego - from spin torque nano-devices to advanced computing architectures

    NASA Astrophysics Data System (ADS)

    Grollier, Julie

    2013-03-01

    Spin transfer torque (STT), predicted in 1996 and first observed around 2000, brought spintronic devices to the realm of active elements. A whole class of new devices, based on the combined effects of STT for writing and Giant Magneto-Resistance or Tunnel Magneto-Resistance for reading, has emerged. The second generation of MRAMs, based on spin torque writing (the STT-RAM), is under industrial development and should be on the market in three years. But spin torque devices are not limited to binary memories. We will briefly present how the spin torque effect also makes it possible to implement non-linear nano-oscillators, spin-wave emitters, controlled stochastic devices and microwave nano-detectors. What is extremely interesting is that all these functionalities can be obtained using the same materials, the exact same stack, simply by changing the device geometry and its bias conditions. So these different devices can be seen as Lego bricks, each brick with its own functionality. During this talk, I will show how spin torque can be engineered to build new bricks, such as the Spintronic Memristor, an artificial magnetic nano-synapse. I will then give hints on how to assemble these bricks in order to build novel types of computing architectures, with a special focus on neuromorphic circuits. Financial support by the European Research Council Starting Grant NanoBrain (ERC 2010 Stg 259068) is acknowledged.

  14. Advanced architectures for astrophysical supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.

    2012-01-01

    This thesis explores the substantial benefits offered to astronomy research by advanced 'many-core' computing architectures, which can provide up to ten times more computing power than traditional processors. It begins by analysing the computations that are best suited to massively parallel computing and advocates a powerful, general approach to the use of many-core devices. These concepts are then put into practice to develop a fast data processing pipeline, with which new science outcomes are achieved in the field of pulsar astronomy, including the discovery of a new star. The work demonstrates how technology originally developed for the consumer market can now be used to accelerate the rate of scientific discovery.

  15. Advanced computing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Advanced concepts in hardware, software and algorithms are being pursued for application in next generation space computers and for ground based analysis of space data. The research program focuses on massively parallel computation and neural networks, as well as optical processing and optical networking which are discussed under photonics. Also included are theoretical programs in neural and nonlinear science, and device development for magnetic and ferroelectric memories.

  16. The coupling of fluids, dynamics, and controls on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher

    1995-01-01

    This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-to-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a single Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.
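
    The serialized grid-connectivity step named above caps scaling in exactly the way Amdahl's law predicts; a small illustrative calculation (the serial fractions are assumed, not measurements from the grant):

      def amdahl_speedup(serial_fraction, n):
          # Upper bound on speedup with n workers when a fraction of the
          # work (e.g. grid-connectivity setup) is serialized.
          return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

      for s in (0.01, 0.05, 0.20):
          print(s, round(amdahl_speedup(s, 30), 1))
      # Even a 5% serialized step caps 30 workers at about 12x.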

  17. The Computer and the Architectural Profession.

    ERIC Educational Resources Information Center

    HAVILAND, DAVID S.

    The role of advancing technology in the field of architecture is discussed in this report. Problems in communication and the design process are identified. Advantages and disadvantages of computers are mentioned in relation to man and machine interaction. Present and future implications of computer usage are identified and discussed with respect…

  18. Recursive computer architecture for VLSI

    SciTech Connect

    Treleaven, P.C.; Hopkins, R.P.

    1982-01-01

    A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers each containing a memory, a processor and a communications capability. This future generation of lego-like computer systems is termed fifth generation computers by the Japanese. 30 references.

  19. Advanced image memory architecture

    NASA Astrophysics Data System (ADS)

    Vercillo, Richard; McNeill, Kevin M.

    1994-05-01

    A workstation for radiographic images, known as the Arizona Viewing Console (AVC), was developed at the University of Arizona Health Sciences Center in the Department of Radiology. This workstation has been in use as a research tool to aid us in investigating how a radiologist interacts with a workstation, to determine which image processing features are required to aid the radiologist, to develop user interfaces and to support psychophysical and clinical studies. Results from these studies have shown a need to increase the current image memory's available storage in order to accommodate high resolution images. The current triple-ported image memory can be allocated to store any number of images up to a combined total of 4 million pixels. Over the past couple of years, higher resolution images have become easier to generate with the advent of laser digitizers and computed radiography systems. As part of our research, a larger 32 million pixel image memory for AVC has been designed to replace the existing image memory.

  1. Computing architecture for autonomous microgrids

    SciTech Connect

    Goldsmith, Steven Y.

    2015-09-29

    A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

  2. The new landscape of parallel computer architecture

    NASA Astrophysics Data System (ADS)

    Shalf, John

    2007-07-01

    The past few years have seen a sea change in computer architecture that will impact every facet of our society as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.

  3. Advanced flight computer. Special study

    NASA Technical Reports Server (NTRS)

    Coo, Dennis

    1995-01-01

    This report documents a special study to define a 32-bit radiation hardened, SEU tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.

  4. Savannah River Site computing architecture

    SciTech Connect

    Not Available

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  5. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  6. VLSI Architectures for Computing DFT's

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.
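
    A minimal sketch of a transform over the integers modulo the Fermat prime F4 = 2^16 + 1, showing the exact, round-off-free arithmetic such systems exploit; this naive O(N^2) version is for clarity only and is not the paper's VLSI formulation:

      # Number-theoretic transform over Z_p, p = F4 = 2**16 + 1 (prime);
      # 3 is a primitive root mod p, so roots of unity of any order
      # dividing 2**16 exist and all arithmetic is exact.
      P = 2 ** 16 + 1

      def ntt(a, inverse=False):
          n = len(a)
          assert (P - 1) % n == 0
          w = pow(3, (P - 1) // n, P)         # n-th root of unity mod P
          if inverse:
              w = pow(w, P - 2, P)            # w^-1 by Fermat's little theorem
          out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P
                 for i in range(n)]
          if inverse:
              n_inv = pow(n, P - 2, P)
              out = [x * n_inv % P for x in out]
          return out

      x = [1, 2, 3, 4]
      assert ntt(ntt(x), inverse=True) == x   # exact round trip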

  7. Advanced information processing system for advanced launch system: Avionics architecture synthesis

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.

    1991-01-01

    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS) being developed jointly by NASA and the Department of Defense to launch heavy payloads into low earth orbit at one tenth the cost (per pound of payload) of the current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS for ALS architecture synthesis process starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture is described.

  8. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  9. Application of advanced electronics to a future spacecraft computer design

    NASA Technical Reports Server (NTRS)

    Carney, P. C.

    1980-01-01

    Advancements in hardware and software technology are summarized with specific emphasis on spacecraft computer capabilities. Available state of the art technology is reviewed and candidate architectures are defined.

  10. Brain architecture: a design for natural computation.

    PubMed

    Kaiser, Marcus

    2007-12-15

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture. PMID:17855223

  11. Architectural Advancements in RELAP5-3D

    SciTech Connect

    Dr. George L. Mesina

    2005-11-01

    As both the computer industry and the field of nuclear science and engineering move forward, there is a need to improve the computing tools used in the nuclear industry to keep pace with these changes. By increasing the capability of the codes, the growing modeling needs of nuclear plant analysis will be met and advantage can be taken of more powerful computer languages and architecture. In the past eighteen months, improvements have been made to RELAP5-3D [1] for these reasons. These architectural advances include code restructuring, conversion to Fortran 90, high performance computing upgrades, and rewriting of the RELAP5 Graphical User Interface (RGUI) [2] and XMGR5 [3] in Java. These architectural changes will extend the lifetime of RELAP5-3D, reduce the costs for development and maintenance, and improve its speed and reliability.

  12. Teaching Computer Aided Architectural Design at UCLA.

    ERIC Educational Resources Information Center

    Mitchell, William J.

    This brief overview includes a rationale for the program and describes course goals and objectives, curriculum content, teaching methods and materials, staffing, and problems of integrating computer aided design with traditional architectural curricula at the School of Architecture and Urban Planning at UCLA. A list of texts for use in teaching…

  13. Advanced control architecture for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Maurer, Markus; Dickmanns, Ernst D.

    1997-06-01

    An advanced control architecture for autonomous vehicles is presented. The hierarchical architecture consists of four levels: a vehicle level, a control level, a rule-based level and a knowledge-based level. A special focus is on forms of internal representation, which have to be chosen adequately for each level. The control scheme is applied to VaMP, a Mercedes passenger car which autonomously performs missions on German freeways. VaMP perceives the environment with its sense of vision and conventional sensors. It controls its actuators for locomotion and attention focusing. Modules for perception, cognition and action are discussed.

  14. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  15. Middleware in Modern High Performance Computing System Architectures

    SciTech Connect

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2007-01-01

    A recent trend in modern high performance computing (HPC) system architectures employs 'lean' compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC 'middleware' software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes.

  16. Opportunities in computational mechanics: Advances in parallel computing

    SciTech Connect

    Lesar, R.A.

    1999-02-01

    In this paper, the authors will discuss recent advances in computing power and the prospects for using these new capabilities for studying plasticity and failure. They will first review the new capabilities made available with parallel computing. They will discuss how these machines perform and how well their architecture might work on materials issues. Finally, they will give some estimates on the size of problems possible using these computers.

  17. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  18. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  19. A memory-array architecture for computer vision

    SciTech Connect

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is first to design a computational structure which is well suited for a wide range of vision tasks and then to develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  1. Advancements in metro optical network architectures

    NASA Astrophysics Data System (ADS)

    Paraschis, Loukas

    2005-02-01

    This paper discusses the innovation in network architectures and optical transport that enables metropolitan networks to cost-effectively scale to hundreds of Gb/s of capacity and to hundreds of km of reach, and also to meet the diverse service needs of enterprise and residential applications. A converged metro network, where Ethernet/IP services and traditional TDM traffic operate over an intelligent WDM transport layer, is increasingly becoming the most attractive architecture addressing the primary need of network operators for significantly improved capital and operational network cost. At the same time, this converged network has to leverage advanced technology and introduce intelligence in order to significantly improve the deployment and manageability of WDM transport. The most important system advancements and the associated technology innovations that enhance the cost-effectiveness of metropolitan optical networks are reviewed.

  2. Switching from Computer to Microcomputer Architecture Education

    ERIC Educational Resources Information Center

    Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore

    2010-01-01

    In the last decades, the technological and scientific evolution of the computing discipline has been widely affecting research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switching to…

  3. Panel on future directions in parallel computer architecture

    SciTech Connect

    VanTilborg, A.M. )

    1989-06-01

    One of the program highlights of the 15th Annual International Symposium on Computer Architecture, held May 30 - June 2, 1988 in Honolulu, was a panel session on future directions in parallel computer architecture. The panel was organized and chaired by the author, and was composed of Prof. Jack Dennis (NASA Ames Research Institute for Advanced Computer Science), Prof. H.T. Kung (Carnegie Mellon), and Dr. Burton Smith (Tera Computer Company). The objective of the panel was to identify the likely trajectory of future parallel computer system progress, particularly from the standpoint of marketplace acceptance. Approximately 250 attendees participated in the session, in which each panelist began with a ten minute viewgraph explanation of his views, followed by an open and sometimes lively exchange with the audience and fellow panelists. The session ran for ninety minutes.

  4. The NASA/GSFC Advanced Data Grid: A Prototype for Future Earth Science Ground System Architectures

    NASA Technical Reports Server (NTRS)

    Gasster, Samuel D.; Lee, Craig; Davis, Brooks; Clark, Matt; AuYeung, Mike; Wilson, John R.; Ladwig, Debra M.

    2003-01-01

    Contents include the following: Background and motivation. Grid computing concepts. Advanced data grid (ADG) prototype development. ADG requirements and operations concept. ADG architecture. ADG implementation. ADG test plan. ADG schedule. Summary and status.

  5. Advanced Communications Architecture Demonstration Made Significant Progress

    NASA Technical Reports Server (NTRS)

    Carek, David Andrew

    2004-01-01

    [Figure caption: Simulation for a ground station located at 44.5 deg latitude.] The Advanced Communications Architecture Demonstration (ACAD) is a concept architecture to provide high-rate Ka-band (27-GHz) direct-to-ground delivery of payload data from the International Space Station. This new concept in delivering data from the space station targets scientific experiments that buffer data onboard. The concept design provides a method to augment the current downlink capability through the Tracking Data Relay Satellite System (TDRSS) Ku-band (15-GHz) communications system. The ACAD concept pushes the limits of technology in high-rate data communications for space-qualified systems. Research activities are ongoing in examining the various aspects of high-rate communications systems including: (1) link budget parametric analyses, (2) antenna configuration trade studies, (3) orbital simulations (see the preceding figure), (4) optimization of ground station contact time (see the following graph), (5) processor and storage architecture definition, and (6) protocol evaluations and dependencies.
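
    For item (1), a minimal sketch of a link budget calculation at the ACAD Ka-band frequency; the EIRP, G/T, and slant range below are invented placeholders, not ACAD design values:

      import math

      def fspl_db(distance_m, freq_hz):
          # Free-space path loss in dB.
          c = 3.0e8
          return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

      eirp_dbw = 40.0                  # assumed spacecraft EIRP
      gt_dbk   = 30.0                  # assumed ground station G/T
      slant_m  = 1.0e6                 # assumed ~1000 km slant range
      k_dbw    = -228.6                # Boltzmann's constant, dBW/(K*Hz)
      loss_db  = fspl_db(slant_m, 27e9)
      cn0_dbhz = eirp_dbw + gt_dbk - loss_db - k_dbw
      print(round(loss_db, 1), round(cn0_dbhz, 1))   # ~181.1 dB, ~117.5 dB-Hz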

  6. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  7. Computing on Knights and Kepler Architectures

    NASA Astrophysics Data System (ADS)

    Bortolotti, G.; Caberletti, M.; Crimi, G.; Ferraro, A.; Giacomini, F.; Manzali, M.; Maron, G.; Pivanti, M.; Salomoni, D.; Schifano, S. F.; Tripiccione, R.; Zanella, M.

    2014-06-01

    A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering, and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present in a comparative way our results in porting a Lattice Boltzmann code on two state-of-the-art accelerators: the NVIDIA K20X, and the Intel Xeon-Phi. We describe our implementations, analyze results and compare with a baseline architecture adopting Intel Sandy Bridge CPUs.
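
    A minimal sketch of the kind of kernel being ported: one BGK collide-and-stream step on the standard D2Q9 lattice in NumPy. This is a toy illustration of the data-parallel structure that maps onto such accelerators, not the COKA implementation:

      import numpy as np

      # Standard D2Q9 lattice: weights and discrete velocities
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])

      def collide_and_stream(f, tau=0.8):
          # One BGK time step on a periodic grid; f has shape (9, nx, ny).
          rho = f.sum(axis=0)
          ux = np.einsum('i,ixy->xy', c[:, 0], f) / rho
          uy = np.einsum('i,ixy->xy', c[:, 1], f) / rho
          usq = ux ** 2 + uy ** 2
          for i in range(9):                        # collision
              cu = c[i, 0] * ux + c[i, 1] * uy
              feq = w[i] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)
              f[i] -= (f[i] - feq) / tau
          for i in range(9):                        # streaming
              f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
          return f

      f = np.tile(w[:, None, None], (1, 32, 32))    # uniform fluid at rest
      f = collide_and_stream(f)
      print(f.sum())                                # mass conserved: ~1024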

  8. Portable computer system architecture for the Space Station Freedom program

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Liu, Yuan-Kwei; Fernquist, Alan R.

    1993-01-01

    This paper outlines various mission requirements and technical approaches that support the potential use of portable computers in several defined activities within the Space Station Freedom (SSF) program. Specifically, the use of portable computers as consoles for both spacecraft control and payload applications is presented. Various issues and proposed solutions regarding the incorporation of portable computers within the program are presented. The primary issues presented regard architecture (standard interface for expansion, advanced processors and displays), integration (methods of high-speed data communication, peripheral interfaces, and interconnectivity within various support networks), and evolution (wireless communications and multimedia data interface methods).

  9. Evaluation of Visual Computer Simulator for Computer Architecture Education

    ERIC Educational Resources Information Center

    Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio

    2013-01-01

    This paper presents trial evaluation of a visual computer simulator in 2009-2011, which has been developed to play some roles of both instruction facility and learning tool simultaneously. And it illustrates an example of Computer Architecture education for University students and usage of e-Learning tool for Assembly Programming in order to…

  10. Highly parallel computer architecture for robotic computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Anta K. (Inventor)

    1991-01-01

    In a computer having a large number of single instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.

  11. Implementing a computing architecture with WISDOM

    SciTech Connect

    Zebrowski, J.R.

    1991-01-01

    Over the past two years, the Savannah River Site (SRS) work force has expanded by more than 6000 employees. This large influx of personnel, in conjunction with the limited office space, has resulted in an overcrowding problem on site. To alleviate some of the overcrowding, Westinghouse Savannah River Company (WSRC) has been in the process of leasing space from several office buildings within Aiken, SC. Brookhaven, the latest off-site office building to be leased, is the starting point for a new direction in office automation which will eventually spread throughout SRS. The computing architecture in place at Brookhaven was designed to adhere to the SRS computer architecture guidelines as published by the WSRC Computer Architecture Standards Team (CAST). At the heart of the Brookhaven implementation is a Workstation Integration System for DOS, OS/2 and Macintosh (WISDOM). The key features of the WISDOM system include its utilization of a Local Area Network (LAN), its Graphical User Interface (GUI), its cross-platform capability, its portable user interface, and its installation program. To begin, I will give an overview of the network architecture, then discuss WISDOM in detail, mention some platform integration problems that need to be addressed and conclude with a summary of the user benefits that WISDOM provides.

  12. Computer graphics in architecture and engineering

    NASA Technical Reports Server (NTRS)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to architecture and the building profession, and its relationship to other scientific and technical areas, is discussed. It was explained that, due to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), a comprehensive, economic utilization of computer graphics in this area is not practical and its true potential cannot now be realized due to the present inability of architects and structural, mechanical, and site engineers to rely on a common data base. Future emphasis will therefore have to be placed on a vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for any technological breakthrough in interactive computing.

  13. Efficient tree codes on SIMD computer architectures

    NASA Astrophysics Data System (ADS)

    Olson, Kevin M.

    1996-11-01

    This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the MasPar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
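
    A minimal sketch of the opening-angle acceptance test at the heart of such tree codes, with monopole terms only (the paper's quadrupole corrections and group searches are omitted, and the tiny tree is hand-built for illustration):

      import math
      from dataclasses import dataclass, field

      THETA = 0.5   # opening angle: smaller is more accurate but slower

      @dataclass
      class Cell:
          size: float                 # side length of the cell
          mass: float
          com: tuple                  # centre of mass
          children: list = field(default_factory=list)

      def interactions(cell, point, out):
          # Accept a whole cell as a single monopole when it subtends
          # less than THETA at `point`; otherwise descend to its children.
          d = math.dist(cell.com, point)
          if not cell.children or (d > 0 and cell.size / d < THETA):
              out.append((cell.mass, cell.com))
          else:
              for child in cell.children:
                  interactions(child, point, out)

      leaves = [Cell(1.0, 1.0, (0.0, 0.0)), Cell(1.0, 1.0, (1.0, 0.0))]
      root = Cell(2.0, 2.0, (0.5, 0.0), leaves)
      out = []
      interactions(root, (10.0, 0.0), out)
      print(out)    # far away, the root is accepted as one monopole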

  14. Advanced Active Thermal Control Systems Architecture Study

    NASA Technical Reports Server (NTRS)

    Hanford, Anthony J.; Ewert, Michael K.

    1996-01-01

    The Johnson Space Center (JSC) initiated a dynamic study to determine possible improvements available through advanced technologies (not used on previous or current human vehicles), identify promising development initiatives for advanced active thermal control systems (ATCS's), and help prioritize funding and personnel distribution among many research projects by providing a common basis to compare several diverse technologies. Some technologies included were two-phase thermal control systems, light-weight radiators, phase-change thermal storage, rotary fluid coupler, and heat pumps. JSC designed the study to estimate potential benefits from these various proposed and under-development thermal control technologies for five possible human missions early in the next century. The study compared all the technologies to a baseline mission using mass as a basis. Each baseline mission assumed an internal thermal control system; an external thermal control system; and aluminum, flow-through radiators. Solar vapor compression heat pumps and light-weight radiators showed the greatest promise as general advanced thermal technologies which can be applied across a range of missions. This initial study identified several other promising ATCS technologies which offer mass savings and other savings compared to traditional thermal control systems. Because the study format compares various architectures with a commonly defined baseline, it is versatile and expandable, and is expected to be updated as needed.

  15. Towards optoelectronic architectures for integrated neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Martinenghi, Romain; Baylon Fuentes, Antonio; Jacquot, Maxime; Chembo, Yanne K.; Larger, Laurent

    2014-03-01

    We investigate theoretically and experimentally the computational properties of an optoelectronic neuromorphic processor based on a complex nonlinear dynamics. This neuromorphic approach is based on the new paradigm of reservoir computing, which is intrinsically different from the concept of Turing machines. It essentially consists of expanding the input information to be processed into a higher dimensional phase space, through the nonlinear transient response of a complex dynamics excited by the input information. The computed output is then extracted via a linear separation of the transient trajectory in the complex phase space, performed through a learning phase consisting of the resolution of a regression problem. We here investigate an architecture for photonic neuromorphic computing via these complex nonlinear dynamical transients. A versatile photonic nonlinear transient computer based on a multiple-delay architecture is reported. Its hybrid analogue and digital architecture allows for an easy reconfiguration, and for direct implementation of in-line processing. Its computational efficiency in parameter space is also analyzed, and the computational performance of this system is successfully evaluated on a standard spoken digit recognition task. We then discuss the pathways that can lead to its effective integration.
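
    A minimal software sketch of the reservoir computing scheme described above: a fixed random recurrent network expands the input into a high-dimensional transient, and only a linear readout is trained, by ridge regression. This toy stands in for the paper's optoelectronic delay dynamics; the sizes and the delay-recall task are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 200, 1000                       # reservoir size, time steps
      u = rng.uniform(-1, 1, T)              # input signal
      y = np.roll(u, 5)                      # task: recall input 5 steps back

      W_in = rng.uniform(-0.5, 0.5, N)
      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius < 1

      # Expand the input into the high-dimensional transient response
      x = np.zeros((T, N))
      for t in range(1, T):
          x[t] = np.tanh(W @ x[t - 1] + W_in * u[t])

      # Train the linear readout on the transients (discard warm-up)
      A, b = x[100:], y[100:]
      W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)
      nrmse = np.sqrt(np.mean((A @ W_out - b) ** 2) / np.var(b))
      print("train NRMSE:", nrmse)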

  17. Architectures & requirements for advanced weapon controllers.

    SciTech Connect

    McMurtrey, Brian J.; Klarer, Paul Richard; Bryan, Jon R.

    2004-02-01

    This report describes work done in FY2003 under Advanced and Exploratory Studies funding for Advanced Weapons Controllers. The contemporary requirements and envisioned missions for nuclear weapons are changing from the class of missions originally envisioned during development of the current stockpile. Technology available today in electronics, computing, and software provides capabilities not practical or even possible 20 years ago. This exploratory work looks at how Weapon Electrical Systems can be improved to accommodate new missions and new technologies while maintaining or improving existing standards in nuclear safety and reliability.

  18. Roadmap to the SRS computing architecture

    SciTech Connect

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which explain the necessary changes for IT at the site and describe the milestones that must be completed to finish the migration.

  19. Scalable computer architecture for digital vascular systems

    NASA Astrophysics Data System (ADS)

    Goddard, Iain; Chao, Hui; Skalabrin, Mark

    1998-06-01

    Digital vascular computer systems are used for radiology and fluoroscopy (R/F), angiography, and cardiac applications. In the United States alone, about 26 million procedures of these types are performed annually: about 81% R/F, 11% cardiac, and 8% angiography. Digital vascular systems have a very wide range of performance requirements, especially in terms of data rates. In addition, new features are added over time as they are shown to be clinically efficacious. Application-specific processing modes such as roadmapping, peak opacification, and bolus chasing are particular to some vascular systems. New algorithms continue to be developed and proven, such as Cox and deJager's precise registration methods for masks and live images in digital subtraction angiography. A computer architecture must have high scalability and reconfigurability to meet the needs of this modality. Ideally, the architecture could also serve as the basis for a nonvascular R/F system.

  20. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray X-MP. In addition, multiprocessor implementations of the algorithms are examined on the Cray X-MP and on the MIT static dataflow machine proposed by Dennis.
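
    Kernel (1) is small enough to sketch. The fragment below is illustrative Python, not the benchmark code: it stores only the upper triangle of a symmetric matrix in compressed sparse row form and mirrors each off-diagonal entry, the property that complicates vectorization of this kernel.

        def sym_spmv(n, rowptr, colind, vals, x):
            # y = A x, with A symmetric and only its upper triangle stored.
            y = [0.0] * n
            for i in range(n):
                for k in range(rowptr[i], rowptr[i + 1]):
                    j = colind[k]
                    y[i] += vals[k] * x[j]
                    if j != i:               # mirror the strictly upper entry
                        y[j] += vals[k] * x[i]
            return y

        # 2x2 example: A = [[4, 1], [1, 3]], upper triangle stored row by row.
        print(sym_spmv(2, [0, 2, 3], [0, 1, 1], [4.0, 1.0, 3.0], [1.0, 2.0]))
        # -> [6.0, 7.0]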

  1. Nonlinear hierarchical substructural parallelism and computer architecture

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1989-01-01

    Computer architecture is investigated in conjunction with the algorithmic structures of nonlinear finite-element analysis. To help set the stage for this goal, the development is undertaken by considering the wide-ranging needs associated with the analysis of rolling tires which possess the full range of kinematic, material and boundary condition induced nonlinearity in addition to gross and local cord-matrix material properties.

  2. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n^2). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates, measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. An anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex-5 FPGA. Medical image data from MRI scans are utilized for the experiments.
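
    The defining computation maps directly to code. Below is a brute-force Python reference implementation of the isotropic case, matching the definition above; it is illustrative only, not the FPGA design, and shows the O(n^2) pairwise scan that the FPGA replicates across concurrent processing units.

        import numpy as np

        def semivariogram(img, max_lag):
            # gamma(h) = E[(z_i - z_j)^2] / 2 over all pixel pairs at lag h.
            rows, cols = img.shape
            acc = np.zeros(max_lag + 1)
            cnt = np.zeros(max_lag + 1, dtype=int)
            coords = [(r, c) for r in range(rows) for c in range(cols)]
            for i, (r1, c1) in enumerate(coords):
                for r2, c2 in coords[i + 1:]:        # every pixel pair once
                    h = round(np.hypot(r1 - r2, c1 - c2))
                    if h <= max_lag:
                        acc[h] += (img[r1, c1] - img[r2, c2]) ** 2
                        cnt[h] += 1
            return acc / (2 * np.maximum(cnt, 1))    # half the mean square

        img = np.arange(16.0).reshape(4, 4)          # toy 4x4 "image"
        print(semivariogram(img, 3))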

  3. Serial Back-Plane Technologies in Advanced Avionics Architectures

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta

    2005-01-01

    Current back-plane technologies such as VME, and current personal computer back planes such as PCI, are shared-bus systems that can exhibit nondeterministic latencies. This means a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the back plane to acquire the bus, which significantly degrades system reliability. Additionally, these parallel busses only have bandwidths in the hundreds of megahertz range, and EMI and noise effects worsen as bandwidth increases. To provide scalable, fault-tolerant, advanced computing systems more applicable to today's connected computing environment, and to better meet future requirements for advanced space instruments and vehicles, serial back-plane technologies should be implemented in advanced avionics architectures. Serial back-plane technologies eliminate the problem of one card acquiring the bus and never relinquishing it, or of one minor problem on the backplane bringing the whole system down. Being serial instead of parallel reduces many of the signal integrity issues associated with parallel back planes and thus significantly improves reliability. The increased speeds associated with a serial backplane are an added bonus.

  4. Advanced data management system architectures testbed

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1990-01-01

    The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.

  5. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
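
    The underlying computation is the linear solve q_dot = J(q)^-1 x_dot relating joint rates to end-effector rates. A plain numpy sketch follows; the Jacobian shown is a hypothetical stand-in, not the actual PUMA arm Jacobian, and the systolic array pipelines this same solve in hardware.

        import numpy as np

        def joint_rates(J, x_dot):
            # Solving the linear system is preferred to forming J^-1 explicitly.
            return np.linalg.solve(J, x_dot)

        J = np.array([[1.0, 0.2, 0.0],
                      [0.1, 1.0, 0.3],
                      [0.0, 0.4, 1.0]])      # hypothetical 3x3 Jacobian
        x_dot = np.array([0.1, 0.0, -0.05])  # desired end-effector velocity
        print(joint_rates(J, x_dot))         # joint velocities q_dot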

  6. HiMAT onboard flight computer system architecture and qualification

    NASA Technical Reports Server (NTRS)

    Myers, A. F.; Earls, M. R.; Callizo, L. A.

    1981-01-01

    Two highly maneuverable aircraft technology (HiMAT) remotely piloted research vehicles (RPRV's) are being flight tested at NASA Dryden Flight Research Center, Edwards, California, to demonstrate and evaluate a number of technological advances applicable to future fighter aircraft. Closed-loop primary flight control is performed from a ground-based cockpit utilizing a digital computer and up/down telemetry links. A backup flight control system for emergency operation resides in one of two onboard computers. Other functions of the onboard computer system are uplink processing, downlink processing, engine control, failure detection, and redundancy management. This paper describes the architecture, functions, and flight qualification of the HiMAT onboard flight computer systems.

  7. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  8. Advances in Computational Astrophysics

    SciTech Connect

    Calder, Alan C.; Kouzes, Richard T.

    2009-03-01

    This is the guest editors' introduction to a special issue of Computing in Science and Engineering. I was invited to serve as guest editor along with a colleague from Stony Brook; Alan and I wrote this introduction and edited the four papers published in this special issue.

  9. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    ERIC Educational Resources Information Center

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  10. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  11. The computational structural mechanics testbed architecture. Volume 2: Directives

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1989-01-01

    This is the second of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language (CLAMP), the command language interpreter (CLIP), and the data manager (GAL). Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 2 describes the CLIP directives in detail. It is intended for intermediate and advanced users.

  12. Hierarchical Poly Tree computer architectures defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    This paper will develop an alternative computer architecture called the Poly Tree. Based on the requirements of computational mechanics and the concept of hierarchical substructuring, the paper will explore the development of problem-dependent parallel networks of processors which will enable significant, often superlinear, speed enhancements; provide a logical/efficient framework for linear/nonlinear and transient structural mechanics problems; and provide a logical framework from which to apply model reduction procedures. In addition, the paper will explore optimal processor arrangements which define the overall system granularity. Consideration will also be given to system I/O requirements.

  13. A new data architecture for advancing life cycle assessment

    EPA Science Inventory

    Life cycle assessment (LCA) has a technical architecture that limits data interoperability, transparency, and automated integration of external data. More advanced information technologies offer promise for increasing the ease with which information can be synthesized...

  14. Advanced Ground Systems Maintenance Enterprise Architecture Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. Capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.

  15. Advanced Ground Systems Maintenance Enterprise Architecture Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M. (Compiler)

    2015-01-01

    The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics based diagnostics.

  16. Open architecture controllers for advanced manufacturing

    SciTech Connect

    Gore, R.A.

    1994-03-01

    The application of intelligent control systems to the real world of machining and manufacturing will benefit from the presence of open architecture control systems on the machines or the processes. The ability to modify the control system as the process or product changes can be essential to the success of the application of neural net or fuzzy logic controllers. The effort at Los Alamos to obtain a commercially available open architecture machine tool controller is described.

  17. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault-mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  18. Topological Code Architectures for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Cesare, Christopher Anthony

    This dissertation is concerned with quantum computation using many-body quantum systems encoded in topological codes. The interest in these topological systems has increased in recent years as devices in the lab begin to reach the fidelities required for performing arbitrarily long quantum algorithms. The most well-studied system, Kitaev's toric code, provides both a physical substrate for performing universal fault-tolerant quantum computations and a useful pedagogical tool for explaining the way other topological codes work. In this dissertation, I first review the necessary formalism for quantum information and quantum stabilizer codes, and then I introduce two families of topological codes: Kitaev's toric code and Bombin's color codes. I then present three chapters of original work. First, I explore the distinctness of encoding schemes in the color codes. Second, I introduce a model of quantum computation based on the toric code that uses adiabatic interpolations between static Hamiltonians with gaps constant in the system size. Lastly, I describe novel state distillation protocols that are naturally suited for topological architectures and show that they provide resource savings in terms of the number of required ancilla states when compared to more traditional approaches to quantum gate approximation.

  19. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  20. Architecture and applications of the HEP multiprocessor computer system

    SciTech Connect

    Smith, B.J.

    1981-01-01

    The HEP computer system is a large scale scientific parallel computer employing shared-resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found useful in programming the system are discussed. 3 references.

  1. Cryogenic Control Architecture for Large-Scale Quantum Computing

    NASA Astrophysics Data System (ADS)

    Hornibrook, J. M.; Colless, J. I.; Conway Lamb, I. D.; Pauka, S. J.; Lu, H.; Gossard, A. C.; Watson, J. D.; Gardner, G. C.; Fallahi, S.; Manfra, M. J.; Reilly, D. J.

    2015-02-01

    Solid-state qubits have recently advanced to the level that enables them, in principle, to be scaled up into fault-tolerant quantum computers. As these physical qubits continue to advance, meeting the challenge of realizing a quantum machine will also require the development of new supporting devices and control architectures with complexity far beyond the systems used in today's few-qubit experiments. Here, we report a microarchitecture for controlling and reading out qubits during the execution of a quantum algorithm such as an error-correcting code. We demonstrate the basic principles of this architecture using a cryogenic switch matrix implemented via high-electron-mobility transistors and a new kind of semiconductor device based on gate-switchable capacitance. The switch matrix is used to route microwave waveforms to qubits under the control of a field-programmable gate array, also operating at cryogenic temperatures. Taken together, these results suggest a viable approach for controlling large-scale quantum systems using semiconductor technology.

  2. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  3. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, compared with traditional hosting, cloud computing is at a disadvantage in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized, network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
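
    A minimal version of such a model is easy to sketch. The Python fragment below is an illustrative toy, not the authors' model; the arrival and service rates are invented. It pairs a stochastic demand process with a fixed server pool and counts requests served versus turned away, the basic mechanism behind the quantitative analysis described above.

        import heapq, random

        random.seed(1)
        SERVERS, busy, t_end = 4, 0, 1000.0
        events = [(random.expovariate(0.5), "arrival")]   # (time, kind) heap
        served, dropped = 0, 0

        while events:
            t, kind = heapq.heappop(events)
            if t > t_end:
                break
            if kind == "arrival":
                # Demand model: exponential interarrival times (invented rate).
                heapq.heappush(events, (t + random.expovariate(0.5), "arrival"))
                if busy < SERVERS:
                    busy += 1        # resource model: fixed pool of servers
                    heapq.heappush(events, (t + random.expovariate(0.25), "done"))
                    served += 1
                else:
                    dropped += 1     # resource constraint hit: request rejected
            else:
                busy -= 1            # service completion frees a server

        print(f"served={served} dropped={dropped}")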

  4. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  5. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    PubMed

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and sensitivity to failure: without safety considerations, small integrated hardware can endanger patients' lives. Therefore, some principles are required for constructing wearable systems in healthcare so that the existing concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its high possible fault tolerance due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed. PMID:26364202

  6. Innovative architectures for dense multi-microprocessor computers

    NASA Technical Reports Server (NTRS)

    Donaldson, Thomas; Doty, Karl; Engle, Steven W.; Larson, Robert E.; O'Reilly, John G.

    1988-01-01

    The results of a Phase I Small Business Innovative Research (SBIR) project performed for the NASA Langley Computational Structural Mechanics Group are described. The project resulted in the identification of a family of chordal-ring interconnection architectures with excellent potential to serve as the basis for new multimicroprocessor (MMP) computers. The paper presents examples of how computational algorithms from structural mechanics can be efficiently implemented on the chordal-ring architecture.

  7. Recent advances in computational aerodynamics

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh K.; Desse, Jerry E.

    1991-04-01

    The current state of the art in computational aerodynamics is described. Recent advances in the discretization of surface geometry, grid generation, and flow simulation algorithms have led to flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics is emerging as a crucial enabling technology for the development and design of flight vehicles. Examples illustrating the current capability for the prediction of aircraft, launch vehicle and helicopter flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  8. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady-state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady-state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady-state performance. The effect of such transformations on the resource requirements, or under resource constraints, requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine whether new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady-state execution of a dataflow graph, and a set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools.
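
    The steady-state analysis rests on a simple self-timed execution rule: a task fires when all of its predecessors have produced their tokens. The sketch below is illustrative Python, not NASA's tools; it computes single-iteration critical-path finish times only, ignoring the resource limits and transient effects discussed above.

        def schedule(graph, latency):
            """graph: {node: [predecessors]}, latency: {node: time}.
            Finish time of a node = max(predecessor finish times) + latency."""
            done, finish = set(), {}
            while len(done) < len(graph):
                for n, preds in graph.items():
                    if n not in done and all(p in done for p in preds):
                        start = max((finish[p] for p in preds), default=0.0)
                        finish[n] = start + latency[n]
                        done.add(n)
            return finish

        # Toy signal-processing pipeline expressed as a dataflow graph.
        g = {"src": [], "fft": ["src"], "mag": ["fft"], "log": ["mag"]}
        lat = {"src": 1.0, "fft": 4.0, "mag": 1.0, "log": 1.0}
        print(schedule(g, lat))   # per-node critical-path finish times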

  9. Multithreaded processor architecture for parallel symbolic computation. Technical report

    SciTech Connect

    Fujita, T.

    1987-09-01

    This paper describes the Multilisp Architecture for Symbolic Applications (MASA), which is a multithreaded processor architecture for parallel symbolic computation with various features intended for effective Multilisp program execution. The principal mechanisms exploited for this processor are multiple contexts, interleaved pipeline execution from separate instruction streams, and synchronization based on a bit in each memory cell. The tagged architecture approach is taken for Lisp program execution, and trap conditions are provided for future object manipulation and garbage collection.

  10. Pipeline and parallel architectures for computer communication systems

    SciTech Connect

    Reddi, A.V.

    1983-01-01

    Various existing communication processor systems (CPSS) at different nodes in computer communication systems (CCSS) are reviewed for distributed processing systems. To meet the increasing message load, pipeline and parallel architectures are suggested for CPSS. Finally, pipeline, array, multi- and multiple-processor architectures and their advantages in CPSS for CCSS are presented and analysed, and their performances are compared with the performance of a uniprocessor architecture. 19 references.

  11. Architectural development of an advanced EVA Electronic System

    NASA Technical Reports Server (NTRS)

    Lavelle, Joseph

    1992-01-01

    An advanced electronic system for future EVA missions (including zero gravity, the lunar surface, and the surface of Mars) is under research and development within the Advanced Life Support Division at NASA Ames Research Center. As a first step in the development, an optimum system architecture has been derived from an analysis of the projected requirements for these missions. The open, modular architecture centers around a distributed multiprocessing concept where the major subsystems independently process their own I/O functions and communicate over a common bus. Supervision and coordination of the subsystems is handled by an embedded real-time operating system kernel employing multitasking software techniques. A discussion of how the architecture most efficiently meets the electronic system functional requirements, maximizes flexibility for future development and mission applications, and enhances the reliability and serviceability of the system in these remote, hostile environments is included.

  12. A flexible architecture for advanced process control solutions

    NASA Astrophysics Data System (ADS)

    Faron, Kamyar; Iourovitski, Ilia

    2005-05-01

    Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half, APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single-variable solutions to sophisticated and cutting-edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost-conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution cost. Such cost-prohibitive figures clearly diminish the return on APC investments, and this has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy a wide array of requirements, from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions, and most in-house developed solutions, lack the important attributes of scalability, flexibility, and adaptability, and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned or delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture in satisfying these requirements.

  13. A heterogeneous hierarchical architecture for real-time computing

    SciTech Connect

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems, and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  14. A Simple Physical Optics Algorithm Perfect for Parallel Computing Architecture

    NASA Technical Reports Server (NTRS)

    Imbriale, W. A.; Cwik, T.

    1994-01-01

    A reflector antenna computer program based upon a simple discrete approximation of the radiation integral has proven to be extremely easy to adapt to the parallel computing architecture of the modest number of large-grain computing elements such as are used in the Intel iPSC and Touchstone Delta parallel machines.
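
    The reason the discrete approximation adapts so easily to parallel machines is that the far field is an independent sum over surface patches, so the patches can be divided among processors with a single final reduction. A scalar Python sketch follows; it is illustrative only (real physical optics sums vector surface currents, and all values here are invented).

        import numpy as np

        def far_field(points, areas, currents, k_hat, wavelength):
            # Discrete radiation integral: one independent term per patch,
            # so the sum is embarrassingly parallel across processors.
            k = 2 * np.pi / wavelength
            phase = np.exp(1j * k * points @ k_hat)   # per-patch phase factor
            return np.sum(currents * areas * phase)   # final reduction

        rng = np.random.default_rng(0)
        pts = rng.uniform(-1, 1, (10000, 3))          # patch centers (m)
        print(abs(far_field(pts, np.full(10000, 1e-4),
                            np.ones(10000), np.array([0.0, 0.0, 1.0]), 0.03)))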

  15. Usage of Thin-Client/Server Architecture in Computer Aided Education

    ERIC Educational Resources Information Center

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  16. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  17. Integrated computer control system architectural overview

    SciTech Connect

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  18. Supervisory Control System Architecture for Advanced Small Modular Reactors

    SciTech Connect

    Cetiner, Sacit M; Cole, Daniel L; Fugate, David L; Kisner, Roger A; Melin, Alexander M; Muhlheim, Michael David; Rao, Nageswara S; Wood, Richard Thomas

    2013-08-01

    This technical report was generated as a product of the Supervisory Control for Multi-Modular SMR Plants project within the Instrumentation, Control and Human-Machine Interface technology area under the Advanced Small Modular Reactor (SMR) Research and Development Program of the U.S. Department of Energy. The report documents the definition of strategies, functional elements, and the structural architecture of a supervisory control system for multi-modular advanced SMR (AdvSMR) plants. This research activity advances the state-of-the art by incorporating decision making into the supervisory control system architectural layers through the introduction of a tiered-plant system approach. The report provides a brief history of hierarchical functional architectures and the current state-of-the-art, describes a reference AdvSMR to show the dependencies between systems, presents a hierarchical structure for supervisory control, indicates the importance of understanding trip setpoints, applies a new theoretic approach for comparing architectures, identifies cyber security controls that should be addressed early in system design, and describes ongoing work to develop system requirements and hardware/software configurations.

  19. Expandable computed-tomography architecture for nondestructive inspection

    NASA Astrophysics Data System (ADS)

    Agi, Iskender; Hurst, Paul J.; Current, K. W.

    1993-04-01

    The Radon transform and its inverse, commonly used for computed tomography (CT), are computationally burdensome for single-processor computers. Since projection-based computations are easily executed in parallel, multiprocessor architectures have been proposed for high-speed operation. In this paper, we describe an architecture for a high-speed (30 MHz raster-scan image data rate), high-accuracy (12 bits per pixel) computed-tomography system for use in nondestructive inspection. This architecture reconstructs images from fan- or parallel-beam data using either single-pass or iterative reconstruction techniques. Our architecture uses a number of identical processor modules in a pipeline. Each processor module consists of memory for data storage, a commercially available digital signal processing (DSP) chip for filtering, and our custom IC, which performs 450 million mathematical operations per second (MOPS). This architecture can reconstruct CT images as large as 1024 x 1024 pixels using a variety of image reconstruction algorithms. The details of the implementation and performance of our expandable architecture are discussed.
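
    Projection-based reconstruction pipelines naturally because each view is processed independently and the results are summed. The sketch below is illustrative Python, not the described hardware: it performs unfiltered parallel-beam backprojection only (a real system first ramp-filters each projection, the job of the DSP chip above), but it shows the per-view structure that maps onto the processor modules.

        import numpy as np

        def backproject(sinogram, angles, size):
            img = np.zeros((size, size))
            c = (size - 1) / 2
            ys, xs = np.mgrid[0:size, 0:size] - c
            for proj, a in zip(sinogram, angles):     # one pass per view
                t = xs * np.cos(a) + ys * np.sin(a)   # detector coordinate
                idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
                img += proj[idx]                      # smear view across image
            return img / len(angles)

        size = 64
        angles = np.linspace(0, np.pi, 90, endpoint=False)
        sino = np.ones((90, size))                    # dummy projection data
        print(backproject(sino, angles, size).shape)  # -> (64, 64)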

  20. A computational architecture for social agents

    SciTech Connect

    Bond, A.H.

    1996-12-31

    This article describes a new class of information-processing models for social agents. They are derived from primate brain architecture, the processing in brain regions, the interactions among brain regions, and the social behavior of primates. In another paper, we have reviewed the neuroanatomical connections and functional involvements of cortical regions, and reviewed the evidence for a hierarchical architecture in the primate brain. By examining neuroanatomical evidence for connections among neural areas, we were able to establish anatomical regions and connections. We then examined evidence for specific functional involvements of the different neural areas and found some support for hierarchical functioning, not only for the perception hierarchies but also for the planning and action hierarchy in the frontal lobes.

  1. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A design of a parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian blob detector and Haar response calculation, which are the most computing-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network and dedicated hardware for SURF descriptors, this architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and up to 118 fps in VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it is easy to migrate to a VLSI implementation.
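
    Both accelerated stages reduce to constant-time box sums over an integral image, which is what makes the per-pixel work regular enough for a cellular processor array. A minimal Python sketch of that primitive follows; it is illustrative, not the FPGA design.

        import numpy as np

        def integral_image(img):
            # ii[r, c] = sum of img[0..r, 0..c]
            return np.cumsum(np.cumsum(img, axis=0), axis=1)

        def box_sum(ii, r0, c0, r1, c1):
            # Inclusive-corner box sum in O(1), the building block of both
            # the Hessian box filters and the Haar responses.
            s = ii[r1, c1]
            if r0 > 0: s -= ii[r0 - 1, c1]
            if c0 > 0: s -= ii[r1, c0 - 1]
            if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
            return s

        img = np.arange(25.0).reshape(5, 5)
        ii = integral_image(img)
        print(box_sum(ii, 1, 1, 3, 3), img[1:4, 1:4].sum())   # both 108.0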

  2. Computer Architecture. (Latest Citations from the Aerospace Database)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The bibliography contains citations concerning research and development in the field of computer architecture. Design of computer systems, microcomputer components, and digital networks are among the topics discussed. Multimicroprocessor system performance, software development, and aerospace avionics applications are also included. (Contains 50-250 citations and includes a subject term index and title list.)

  3. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  4. Architecture and applications of the HEP multiprocessor computer system

    SciTech Connect

    Smith, B.J.; Fink, D.J.

    1982-01-01

    The HEP computer system is a large scale scientific parallel computer employing shared resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found to be useful in programming the system are also discussed. 3 references.

  5. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type, comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary, is disclosed. Communication between the working nodes is via one communications network, while communication between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises a plurality of first computing nodes and a first network of message-conducting paths for interconnecting the first computing nodes as a hypercube; the first network provides a path for message transfer between the first computing nodes. It further comprises a first watch dog node and a second network of message-conducting paths for connecting the first computing nodes to the first watch dog node independent of the first network; the second network provides an independent path for test-message and reconfiguration-affecting transfers between the first computing nodes and the first watch dog node. There is additionally a plurality of second computing nodes and a third network of message-conducting paths for interconnecting the second computing nodes as a hypercube; the third network provides a path for message transfer between the second computing nodes. A fourth network of message-conducting paths connects the second computing nodes to the first watch dog node independent of the third network and provides an independent path for test-message and reconfiguration-affecting transfers between the second computing nodes and the first watch dog node. Finally, a first multiplexer disposed between the first watch dog node and the second and fourth networks allows the first watch dog node to selectively communicate with individual computing nodes through the second and fourth networks, as well as a second watch dog node
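
    The hypercube structure itself is compact to express: node addresses are d-bit labels, each node links to the d nodes whose label differs in exactly one bit, and routing distance equals Hamming distance. A small Python sketch follows; it is illustrative of the topology only (the patent's watch dog networks are layered on top of this structure).

        def neighbors(i, d):
            # The d hypercube neighbors of node i: flip one address bit each.
            return [i ^ (1 << k) for k in range(d)]

        def hamming_route(src, dst):
            # Greedy dimension-order route: fix one differing bit per hop.
            path, cur = [src], src
            while cur != dst:
                k = (cur ^ dst).bit_length() - 1   # highest differing bit
                cur ^= 1 << k
                path.append(cur)
            return path

        print(neighbors(5, 4))        # -> [4, 7, 1, 13]
        print(hamming_route(0, 11))   # -> [0, 8, 10, 11]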

  6. A Communication Architecture for an Advanced Extravehicular Mobile Unit

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This document describes the communication architecture for the Power, Avionics and Software (PAS) 1.0 subsystem for the Advanced Extravehicular Mobility Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS subsystem being developed at Glenn Research Center (GRC).

  7. Architecture independent environment for developing engineering software on MIMD computers

    NASA Technical Reports Server (NTRS)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  8. Thrifty: An Exascale Architecture for Energy Proportional Computing

    SciTech Connect

    Torrellas, Josep

    2014-12-23

    The objective of this project is to design key aspects of an exascale architecture called Thrifty that addresses the challenges of power/energy efficiency, resiliency, and performance in exascale systems. The project includes work on computer architecture (Josep Torrellas from University of Illinois), compilation (Daniel Quinlan from Lawrence Livermore National Laboratory), runtime and applications (Laura Carrington from University of California San Diego), and circuits (Wilfred Pinfold from Intel Corporation).

  9. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.

    2011-01-01

    A NASA multi-center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.

  10. Advanced optical network architecture for integrated digital avionics

    NASA Astrophysics Data System (ADS)

    Morgan, D. Reed

    1996-12-01

    For the first time in the history of avionics, the network designer now has a choice in selecting the media that interconnect the sources and sinks of digital data on aircraft. Electrical designs are already giving way to photonics in application areas where the data-rate times distance product is large or where special design requirements such as low weight or EMI considerations are critical. Future digital avionic architectures will increasingly favor the use of photonic interconnects as network data rates of one gigabit/second and higher are needed to support real-time operation of high-speed integrated digital processing. As the cost of optical network building blocks is reduced and as temperature-rugged laser sources mature, metal interconnects will be forced to retreat to applications spanning shorter and shorter distances. Although the trend is already underway, the widespread use of digital optics will first occur at the system level, where gigabit/second, real-time interconnects between sensors, processors, mass memories and displays separated by at least a few meters will be required. The application of photonic interconnects for inter-printed-wiring-board signalling across the backplane will eventually find application for gigabit/second uses, since signal degradation over copper traces occurs before one gigabit/second and 0.5 meters are reached. For the foreseeable future, however, metal interconnects will continue to be used to interconnect devices on printed wiring boards, since 5 gigabit/second signals can be sent over metal up to around 15 centimeters. Current-day applications of optical interconnects at the system level are described, and a projection of how advanced optical interconnect technology will be driven by the use of high-speed integrated digital processing on future aircraft is presented. The recommended advanced network for application in the 2010 time frame is a fiber-based system with a signalling speed of around 2

  11. LINCS: Livermore's network architecture. [Octopus computing network

    SciTech Connect

    Fletcher, J.G.

    1982-01-01

    Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system, such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols, and why we are dissatisfied with the directions that current protocol standards are taking.

  12. Computational architecture for integrated controls and structures design

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Park, K. C.

    1989-01-01

    To facilitate the development of control structure interaction (CSI) design methodology, a computational architecture for interdisciplinary design of active structures is presented. The emphasis of the computational procedure is to exploit existing sparse matrix structural analysis techniques, in-core data transfer with control synthesis programs, and versatility in the optimization methodology to avoid unnecessary structural or control calculations. The architecture is designed such that all required structure, control and optimization analyses are performed within one program. Hence, the optimization strategy is not unduly constrained by cold starts of existing structural analysis and control synthesis packages.

  13. Layered Architectures for Quantum Computers and Quantum Repeaters

    NASA Astrophysics Data System (ADS)

    Jones, Nathan C.

    This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.

  14. Hybrid VLSI/QCA Architecture for Computing FFTs

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  15. Architecture and grid application of cluster computing system

    NASA Astrophysics Data System (ADS)

    Lv, Yi; Yu, Shuiqin; Mao, Youju

    2004-11-01

    Recently, people have paid more attention to grid technology. It can not only connect all kinds of resources in the network, but also place them in a transparent computing environment in which customers can realize meta-computing and share computing resources. Traditional parallel computing systems, such as SMP (symmetric multiprocessor) and MPP (massively parallel processor) systems, use multiple processors to raise computing speed in a closely coupled way, so the flexibility and scalability of the system are limited; as a result, such systems cannot meet the requirements of grid technology. In this paper, the architecture of a cluster computing system applied in grid nodes is introduced. It mainly includes the following aspects. First, the network architecture of a cluster computing system in grid nodes is analyzed and designed. Second, how to realize distributed computing (including coordinated computing and shared computing) in a cluster computing system in grid nodes to construct virtual node computers is discussed. Last, communication among grid nodes is analyzed, that is, how to present a single system image so that all service requests from customers can be met by dispatching them to the grid nodes.

  16. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
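
    The analog-style mapping described above, in which each integrator or derivative construct is assigned to its own processing element, can be illustrated with a toy fixed-step simulation. This is a minimal sketch of the concept only; the example system and step size are hypothetical, and plain Python stands in for the microprogrammed hardware:

        # Toy illustration of the analog-style mapping: each state derivative
        # is an independent "processing element" evaluated from the previous
        # state vector, so all derivatives could be computed in parallel.
        # Hypothetical example (mass-spring-damper), not the ACSL microcode.

        def derivatives(state, k=4.0, c=0.5, m=1.0):
            x, v = state
            return [v,                      # PE 1: dx/dt = v
                    -(k * x + c * v) / m]   # PE 2: dv/dt = -(kx + cv)/m

        state, dt = [1.0, 0.0], 0.01
        for step in range(1000):
            d = derivatives(state)                            # parallel in hardware
            state = [s + dt * ds for s, ds in zip(state, d)]  # integrator update
        print("x(t=10) ~", round(state[0], 4))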

  17. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    SciTech Connect

    Schuller, Ivan K.; Stevens, Rick; Pino, Robinson; Pechan, Michael

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: (1) the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; (2) the new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; (3) the device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and (4) comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  18. Addressing fundamental architectural challenges of an activity-based intelligence and advanced analytics (ABIAA) system

    NASA Astrophysics Data System (ADS)

    Yager, Kevin; Albert, Thomas; Brower, Bernard V.; Pellechia, Matthew F.

    2015-06-01

    The domain of Geospatial Intelligence Analysis is rapidly shifting toward a new paradigm of Activity Based Intelligence (ABI) and information-based Tipping and Cueing. General requirements for an advanced ABIAA system present significant challenges in architectural design, computing resources, data volumes, workflow efficiency, data mining and analysis algorithms, and database structures. These sophisticated ABI software systems must include advanced algorithms that automatically flag activities of interest in less time and within larger data volumes than can be processed by human analysts. In doing this, they must also maintain the geospatial accuracy necessary for cross-correlation of multi-intelligence data sources. Historically, serial architectural workflows have been employed in ABIAA system design for tasking, collection, processing, exploitation, and dissemination. These simpler architectures may produce implementations that solve short term requirements; however, they have serious limitations that preclude them from being used effectively in an automated ABIAA system with multiple data sources. This paper discusses modern ABIAA architectural considerations providing an overview of an advanced ABIAA system and comparisons to legacy systems. It concludes with a recommended strategy and incremental approach to the research, development, and construction of a fully automated ABIAA system.

  19. Characterization of UMT2013 Performance on Advanced Architectures

    SciTech Connect

    Howell, Louis

    2014-12-31

    This paper presents part of a larger effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. The focus here is on UMT2013, a proxy implementation of deterministic transport for unstructured meshes. I present weak and strong MPI scaling results and studies of OpenMP efficiency on the Sequoia BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Preliminary tests that exploit NVRAM as extended memory on an Ivy Bridge machine designed for “Big Data” applications are also included.

  20. Advanced Exploration Systems Water Architecture Study Interim Results

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.

    2013-01-01

    The mission of the Advanced Exploration System (AES) Water Recovery Project (WRP) is to develop advanced water recovery systems that enable NASA human exploration missions beyond low Earth orbit (LEO). The primary objective of the AES WRP is to develop water recovery technologies critical to near-term missions beyond LEO. The secondary objective is to continue to advance mid-readiness-level technologies to support future NASA missions. An effort is being undertaken to establish the architecture for the AES Water Recovery System (WRS) that meets both near- and long-term objectives. The resultant architecture will be used to guide future technical planning, establish a baseline development roadmap for technology infusion, and establish baseline assumptions for integrated ground and on-orbit Environmental Control and Life Support Systems definition. This study is being performed in three phases. Phase I established the scope of the study through definition of the mission requirements and constraints, as well as identifying all possible WRS configurations that meet the mission requirements. Phase II focused on the near-term space exploration objectives by establishing an International Space Station-derived reference schematic for long-duration (>180 day) in-space habitation. Phase III will focus on the long-term space exploration objectives, trading the viable WRS configurations identified in Phase I to identify the ideal exploration WRS. The results of Phases I and II are discussed in this paper.

  1. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    ERIC Educational Resources Information Center

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  2. In-Memory Computing Architectures for Sparse Distributed Memory.

    PubMed

    Kang, Mingu; Shanbhag, Naresh R

    2016-08-01

    This paper presents an energy-efficient and high-throughput architecture for Sparse Distributed Memory (SDM), a computational model of the human brain [1]. The proposed SDM architecture is based on the recently proposed in-memory computing kernel for machine learning applications called Compute Memory (CM) [2], [3]. CM achieves energy and throughput efficiencies by deeply embedding computation into the memory array. SDM-specific techniques such as hierarchical binary decision (HBD) are employed to reduce the delay and energy further. The CM-based SDM (CM-SDM) is a mixed-signal circuit, and hence circuit-aware behavioral, energy, and delay models in a 65 nm CMOS process are developed in order to predict system performance of SDM architectures in the auto- and hetero-associative modes. The delay and energy models indicate that CM-SDM, in general, can achieve up to 25 × and 12 × delay and energy reduction, respectively, over conventional SDM. When classifying 16 × 16 binary images with high noise levels (input bad pixel ratios: 15%-25%) into nine classes, all SDM architectures are able to generate output bad pixel ratios (Bo) ≤ 2%. The CM-SDM exhibits negligible loss in accuracy, i.e., its Bo degradation is within 0.4% as compared to that of the conventional SDM. PMID:27305686
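
    For readers unfamiliar with the underlying model, a minimal software sketch of Kanerva-style SDM in auto-associative mode follows. All sizes and the activation radius are illustrative choices, not the paper's parameters; the CM-SDM performs these same steps inside the memory array:

        import numpy as np

        # Minimal Kanerva-style SDM, auto-associative mode. All sizes are
        # illustrative; the paper's CM-SDM implements this kind of
        # computation in-memory.
        rng = np.random.default_rng(0)
        N, M, RADIUS = 256, 2000, 112   # word length, hard locations, activation radius

        addresses = rng.integers(0, 2, size=(M, N))  # fixed random hard-location addresses
        counters = np.zeros((M, N), dtype=int)       # up/down counters per location

        def write(word):
            active = (addresses != word).sum(axis=1) <= RADIUS  # Hamming-distance test
            counters[active] += 2 * word - 1                    # +1 for bit 1, -1 for bit 0

        def read(cue):
            active = (addresses != cue).sum(axis=1) <= RADIUS
            return (counters[active].sum(axis=0) >= 0).astype(int)  # majority vote

        word = rng.integers(0, 2, size=N)
        write(word)
        noisy = word.copy()
        flip = rng.choice(N, size=40, replace=False)  # ~15% bad input pixels
        noisy[flip] ^= 1
        print("recovered bits:", int((read(noisy) == word).sum()), "/", N)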

  3. SpaceWire-Based Control System Architecture for the Lightweight Advanced Robotic Arm Demonstrator (LARAD)

    NASA Astrophysics Data System (ADS)

    Rucinski, Marek; Coates, Adam; Montano, Giuseppe; Allouis, Elie; Jameux, David

    2015-09-01

    The Lightweight Advanced Robotic Arm Demonstrator (LARAD) is a state-of-the-art, two-meter long robotic arm for planetary surface exploration currently being developed by a UK consortium led by Airbus Defence and Space Ltd under contract to the UK Space Agency (CREST-2 programme). LARAD has a modular design, which allows for experimentation with different electronics and control software. The control system architecture includes the on-board computer, control software and firmware, and the communication infrastructure (e.g. data links, switches) connecting on-board computer(s), sensors, actuators and the end-effector. The purpose of the control system is to operate the arm according to pre-defined performance requirements, monitoring its behaviour in real-time and performing safing/recovery actions in case of faults. This paper reports on the results of a recent study about the feasibility of the development and integration of a novel control system architecture for LARAD fully based on the SpaceWire protocol. The current control system architecture is based on the combination of two communication protocols, Ethernet and CAN. The new SpaceWire-based control system will allow for improved monitoring and telecommanding performance thanks to higher communication data rate, allowing for the adoption of advanced control schemes, potentially based on multiple vision sensors, and for the handling of sophisticated end-effectors that require fine control, such as science payloads or robotic hands.

  4. Regulation of plant root system architecture: implications for crop advancement.

    PubMed

    Rogers, Eric D; Benfey, Philip N

    2015-04-01

    Root system architecture (RSA) plays a major role in plant fitness, crop performance, and grain yield yet only recently has this role been appreciated. RSA describes the spatial arrangement of root tissue within the soil and is therefore crucial to nutrient and water uptake. Recent studies have identified many of the genetic and environmental factors influencing root growth that contribute to RSA. Some of the identified genes have the potential to limit crop loss caused by environmental extremes and are currently being used to confer drought tolerance. It is hypothesized that manipulating these and other genes that influence RSA will be pivotal for future crop advancements worldwide. PMID:25448235

  5. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data as proved by our prior works, while the potential of such advanced

  6. Analysis of an Atom-Optical Architecture for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Devitt, Simon J.; Stephens, Ashley M.; Munro, William J.; Nemoto, Kae

    Quantum technology based on photons has emerged as one of the most promising platforms for quantum information processing, having already been used in proof-of-principle demonstrations of quantum communication and quantum computation. However, the scalability of this technology depends on the successful integration of experimentally feasible devices in an architecture that tolerates realistic errors and imperfections. Here, we analyse an atom-optical architecture for quantum computation designed to meet the requirements of scalability. The architecture is based on a modular atom-cavity device that provides an effective photon-photon interaction, allowing for the rapid, deterministic preparation of a large class of entangled states. We begin our analysis at the physical level, where we outline the experimental cavity quantum electrodynamics requirements of the basic device. Then, we describe how a scalable network of these devices can be used to prepare a three-dimensional topological cluster state, sufficient for universal fault-tolerant quantum computation. We conclude at the application level, where we estimate the system-level requirements of the architecture executing an algorithm compiled for compatibility with the topological cluster state.

  7. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high, therefore a simulation tool is designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
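
    The AABB technique cited above reduces each simulated laser beam to a ray-box intersection test. A minimal slab-method sketch follows; the geometry and values are illustrative, not taken from the paper:

        # Slab-method ray/AABB intersection, the core test behind an
        # AABB-based LRF simulator. Returns the hit distance or None.
        def ray_aabb(origin, direction, box_min, box_max):
            t_near, t_far = 0.0, float("inf")
            for o, d, lo, hi in zip(origin, direction, box_min, box_max):
                if abs(d) < 1e-12:                 # ray parallel to this slab
                    if o < lo or o > hi:
                        return None
                    continue
                t1, t2 = (lo - o) / d, (hi - o) / d
                if t1 > t2:
                    t1, t2 = t2, t1
                t_near, t_far = max(t_near, t1), min(t_far, t2)
                if t_near > t_far:                 # slabs no longer overlap
                    return None
            return t_near                          # range reported by the simulated LRF

        # One simulated beam against a unit box 5 m down the x axis:
        print(ray_aabb((0, 0, 0), (1, 0, 0), (5, -0.5, -0.5), (6, 0.5, 0.5)))  # -> 5.0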

  8. High-speed parallel-processing networks for advanced architectures

    SciTech Connect

    Morgan, D.R.

    1988-06-01

    This paper describes various parallel-processing architecture networks that are candidates for eventual airborne use. An attempt at projecting which type of network is suitable or optimum for specific metafunction or stand-alone applications is made. However, specific algorithms will need to be developed and bench marks executed before firm conclusions can be drawn. Also, a conceptual projection of how these processors can be built in small, flyable units through the use of wafer-scale integration is offered. The use of the PAVE PILLAR system architecture to provide system level support for these tightly coupled networks is described. The author concludes that: (1) extremely high processing speeds implemented in flyable hardware is possible through parallel-processing networks if development programs are pursued; (2) dramatic speed enhancements through parallel processing requires an excellent match between the algorithm and computer-network architecture; (3) matching several high speed parallel oriented algorithms across the aircraft system to a limited set of hardware modules may be the most cost-effective approach to achieving speed enhancements; and (4) software-development tools and improved operating systems will need to be developed to support efficient parallel-processor use.

  9. Reviews of computing technology: A review of compound document architectures

    SciTech Connect

    Hudson, B.J.

    1991-10-01

    This review of computing technology will define, describe, and give examples of various approaches to document management through the use of compound document architectures. Experts agree that only 10% of business information exists in machine readable form, but much of what is stored is not in useful form. As a result, the average business document is copied over a dozen times during its life and duplicate copies are stored in numerous locations. The goal of compound document architectures is to provide an information support environment where rapid access to the correct information in the proper format is simplified. A compound document architecture provides structure to seemingly unstructured electronic documents, and standardizes the methods for interchange and access of entire or partial documents by authors and users.

  10. Advances in Orion's On-Orbit Guidance and Targeting System Architecture

    NASA Technical Reports Server (NTRS)

    Scarritt, Sara K.; Fill, Thomas; Robinson, Shane

    2015-01-01

    NASA's manned spaceflight programs have a rich history of advancing onboard guidance and targeting technology. In order to support future missions, the guidance and targeting architecture for the Orion Multi-Purpose Crew Vehicle must be able to operate in complete autonomy, without any support from the ground. Orion's guidance and targeting system must be sufficiently flexible to easily adapt to a wide array of undecided future missions, yet also not cause an undue computational burden on the flight computer. This presents a unique design challenge from the perspective of both algorithm development and system architecture construction. The present work shows how Orion's guidance and targeting system addresses these challenges. On the algorithm side, the system advances the state-of-the-art by: (1) steering burns with a simple closed-loop guidance strategy based on Shuttle heritage, and (2) planning maneuvers with a cutting-edge two-level targeting routine. These algorithms are then placed into an architecture designed to leverage the advantages of each and ensure that they function in concert with one another. The resulting system is characterized by modularity and simplicity. As such, it is adaptable to the on-orbit phases of any future mission that Orion may attempt.

  11. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have had to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit co-operates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the proposed technique using an NVIDIA GT560Ti graphics card compared with a sequential C implementation on a 3.4 GHz Intel Core i7-3770.
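
    The per-pixel phase computation that dominates the GPU stage is embarrassingly parallel. A minimal NumPy sketch of the standard four-step phase-shifting formula follows; the array sizes are illustrative, and NumPy vectorization stands in for the paper's three-layer CUDA kernels:

        import numpy as np

        # Standard four-step phase shifting: I_k = A + B*cos(phi + k*pi/2),
        # k = 0..3, so phi = atan2(I3 - I1, I0 - I2) at every pixel.
        def wrapped_phase(i0, i1, i2, i3):
            return np.arctan2(i3 - i1, i0 - i2)

        # Synthetic fringe images over a 480x640 sensor (illustrative sizes):
        h, w = 480, 640
        phi_true = np.linspace(0, 4 * np.pi, h * w).reshape(h, w)
        frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

        phi = wrapped_phase(*frames)
        err = np.angle(np.exp(1j * (phi - phi_true)))   # compare modulo 2*pi
        print("max wrapped-phase error:", float(np.abs(err).max()))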

  12. QNIX: A Linear Optical Architecture for Quantum Computing

    NASA Astrophysics Data System (ADS)

    Gimeno-Segovia, Mercedes; Shadbolt, Peter J.; Rudolph, Terry G.; Browne, Dan E.; Mendoza, Gabriel; Russell, Nicholas J.; Silverstone, Joshua W.; Santamato, Alberto; Carolan, Jacques; O'Brien, Jeremy

    2015-03-01

    There is currently a great deal of effort to develop a large-scale quantum computer, and one of the most promising platforms to do so is integrated linear optics. We present a proposal for a dynamical scheme for an integrated linear optics implementation of a one-way quantum computer. We go beyond the purely theoretical work and address practical issues in order to create a physically realistic design. We describe every step of cluster state construction and processing, showing the outstanding issues left to be addressed and our contributions to the different stages of the dynamical process. These include optimised interferometers for the generation of GHZ states, a universal and scalable architecture which requires entangled sources of no more than 3 photons with no active feed-forward, and loss-tolerant and fault-tolerant strategies specifically tailored to our proposed architecture. Our work demonstrates that building a linear optical quantum computer may be less challenging than previously thought, and brings large-scale switch-free linear optical architectures for quantum computing much closer to experimental realisation.

  13. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and they have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance, secure microprocessor and OS system. We are interested in having cyber security, information technology (IT), and SCADA control professionals review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
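
    The idea of attaching Unix-style permission bits to each cache bank can be caricatured in a few lines. The bit layout and API below are hypothetical illustrations of the concept in the abstract, not the patented OSFA encoding:

        # Toy model: Unix-style permission bits attached to cache banks.
        # Bit layout and API are hypothetical, not the OSFA's actual design.
        R, W, X = 4, 2, 1   # read / write / execute, as in Unix file modes

        class CacheBank:
            def __init__(self, bank_id, perms):
                self.bank_id, self.perms, self.lines = bank_id, perms, {}

            def access(self, addr, requested):
                # Hardware check: all requested mode bits must be granted.
                if self.perms & requested != requested:
                    raise PermissionError(
                        f"bank {self.bank_id}: mode {requested:o} denied "
                        f"(perms {self.perms:o})")
                if requested & R:
                    return self.lines.get(addr)
                self.lines[addr] = 0          # toy write of a zeroed line

        code_bank = CacheBank(0, R | X)   # instructions: readable/executable only
        data_bank = CacheBank(1, R | W)   # data: readable and writable

        data_bank.access(0x40, W)         # allowed
        try:
            code_bank.access(0x00, W)     # hardware blocks writes to code
        except PermissionError as e:
            print(e)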

  14. Advancing manufacturing through computational chemistry

    SciTech Connect

    Noid, D.W.; Sumpter, B.G.; Tuzun, R.E.

    1995-12-31

    The capabilities of nanotechnology and computational chemistry are reaching a point of convergence. New computer hardware and novel computational methods have created opportunities to test proposed nanometer-scale devices, investigate molecular manufacturing and model and predict properties of new materials. Experimental methods are also beginning to provide new capabilities that make the possibility of manufacturing various devices with atomic precision tangible. In this paper, we will discuss some of the novel computational methods we have used in molecular dynamics simulations of polymer processes, neural network predictions of new materials, and simulations of proposed nano-bearings and fluid dynamics in nano-sized devices.

  15. How the Common Component Architecture Advances Computational Science

    SciTech Connect

    Kumfert, G; Bernholdt, D; Epperly, T; Kohl, J; McInnes, L C; Parker, S; Ray, J

    2006-06-19

    Computational chemists are using Common Component Architecture (CCA) technology to increase the parallel scalability of their application ten-fold. Combustion researchers are publishing science faster because the CCA manages software complexity for them. Both the solver and meshing communities in SciDAC are converging on community interface standards as a direct response to the novel level of interoperability that CCA presents. Yet, there is much more to do before component technology becomes mainstream computational science. This paper highlights the impact that the CCA has made on scientific applications, conveys some lessons learned from five years of the SciDAC program, and previews where applications could go with the additional capabilities that the CCA has planned for SciDAC 2.

  16. Nanotube devices based crossbar architecture: toward neuromorphic computing.

    PubMed

    Zhao, W S; Agnus, G; Derycke, V; Filoramo, A; Bourgoin, J-P; Gamrat, C

    2010-04-30

    Nanoscale devices such as carbon nanotube- and nanowire-based transistors, memristors and molecular devices are expected to play an important role in the development of new computing architectures. While their size represents a decisive advantage in terms of integration density, it also raises the critical question of how to efficiently address large numbers of densely integrated nanodevices without the need for complex multi-layer interconnection topologies similar to those used in CMOS technology. Two-terminal programmable devices in crossbar geometry seem particularly attractive, but suffer from severe addressing difficulties due to cross-talk, which implies complex programming procedures. Three-terminal devices can be easily addressed individually, but with limited gain in terms of interconnect integration. We show how optically gated carbon nanotube devices enable efficient individual addressing when arranged in a crossbar geometry with shared gate electrodes. This topology is particularly well suited for parallel programming or learning in the context of neuromorphic computing architectures. PMID:20368686

  17. FFT Computation with Systolic Arrays, A New Architecture

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The use of the Cooley-Tukey algorithm for computing the 1-d FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly-connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires a smaller number of processors and a smaller number of memory cells than other recent implementations, as well as having all the advantages of systolic arrays. For the implementation of the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need of a serial-to-parallel conversion device. No control or data stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024 point DFT with a fixed point processor, and CMOS processor implementation has started.
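
    The factorization that maps onto a linearly connected systolic array is visible in an iterative radix-2 decimation-in-frequency FFT, in which each of the log2(N) stages corresponds to one pipeline processor. The sketch below is a plain software rendering of that stage structure, not the systolic hardware itself:

        import cmath

        # Iterative radix-2 decimation-in-frequency FFT. Each of the log2(N)
        # stages corresponds to one processor of a linearly connected systolic
        # array; the bit-reversal at the end reorders the word-serial output.
        def fft_dif(x):
            n = len(x)
            x = list(x)
            span = n // 2
            while span >= 1:                     # one "stage" per systolic processor
                for start in range(0, n, 2 * span):
                    for i in range(span):
                        a, b = x[start + i], x[start + i + span]
                        w = cmath.exp(-2j * cmath.pi * i / (2 * span))
                        x[start + i], x[start + i + span] = a + b, (a - b) * w
                span //= 2
            # bit-reversed output order -> natural order
            order = [int(format(i, f'0{n.bit_length() - 1}b')[::-1], 2) for i in range(n)]
            return [x[j] for j in order]

        print([round(abs(v), 3) for v in fft_dif([1, 1, 1, 1, 0, 0, 0, 0])])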

  18. A fully programmable computing architecture for medical ultrasound machines.

    PubMed

    Schneider, Fabio Kurt; Agarwal, Anup; Yoo, Yang Mo; Fukuoka, Tetsuya; Kim, Yongmin

    2010-03-01

    Application-specific ICs have been traditionally used to support the high computational and data rate requirements in medical ultrasound systems, particularly in receive beamforming. Utilizing the previously developed efficient front-end algorithms, in this paper, we present a simple programmable computing architecture, consisting of a field-programmable gate array (FPGA) and a digital signal processor (DSP), to support core ultrasound signal processing. It was found that 97.3% and 51.8% of the FPGA and DSP resources are, respectively, needed to support all the front-end and back-end processing for B-mode imaging with 64 channels and 120 scanlines per frame at 30 frames/s. These results indicate that this programmable architecture can meet the requirements of low- and medium-level ultrasound machines while providing a flexible platform for supporting the development and deployment of new algorithms and emerging clinical applications. PMID:19546045

  19. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  20. Integration of nanoscale memristor synapses in neuromorphic computing architectures

    NASA Astrophysics Data System (ADS)

    Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis

    2013-09-01

    Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption features or their ability to carry out robust and efficient computation using massively parallel arrays of limited precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low power, but also variable and unreliable solid-state devices that can potentially extend the offerings of prevailing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.

  1. Efficient Universal Computing Architectures for Decoding Neural Activity

    PubMed Central

    Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul

    2012-01-01

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient

  2. Multiplexing electro-optic architectures for advanced aircraft integrated flight control systems

    NASA Technical Reports Server (NTRS)

    Seal, D. W.

    1989-01-01

    This report describes the results of a 10-month program sponsored by NASA. The objective of this program was to evaluate various optical sensor modulation technologies and to design an optimal Electro-Optic Architecture (EOA) for servicing remote clusters of sensors and actuators in advanced aircraft flight control systems. The EOAs supply optical power to remote sensors and actuators, process the modulated optical signals returned from the sensors, and produce conditioned electrical signals acceptable for use by a digital flight control computer or Vehicle Management System (VMS) computer. This study was part of a multi-year initiative under the Fiber Optic Control System Integration (FOCSI) program to design, develop, and test a totally integrated fiber optic flight/propulsion control system for application to advanced aircraft. Unlike earlier FOCSI studies, this program concentrated on the design of the EOA interface rather than the optical transducer technology itself.

  3. Hardware architecture for full analytical Fraunhofer computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Pang, Zhi-Yong; Xu, Zong-Xi; Xiong, Yi; Chen, Biao; Dai, Hui-Min; Jiang, Shao-Ji; Dong, Jian-Wen

    2015-09-01

    Hardware architecture of parallel computation is proposed for generating Fraunhofer computer-generated holograms (CGHs). A pipeline-based integrated circuit architecture is realized by employing the modified Fraunhofer analytical formalism, which is large scale and enables all components to be concurrently operated. The architecture of the CGH contains five modules to calculate initial parameters, amplitude, amplitude compensation, phases, and phase compensation, respectively. The precalculator of amplitude is fully adopted considering the "reusable design" concept. Each complex operation type (such as square arithmetic) is reused only once by means of a multichannel selector. The implemented hardware calculates an 800×600 pixels hologram in parallel using 39,319 logic elements, 21,074 registers, and 12,651 memory bits in an Altera field-programmable gate array environment with stable operation at 50 MHz. Experimental results demonstrate that the quality of the images reconstructed from the hardware-generated hologram can be comparable to that of a software implementation. Moreover, the calculation speed is approximately 100 times faster than that of a personal computer with an Intel i5-3230M 2.6 GHz CPU for a triangular object.

  4. Advanced nanoelectronic architectures for THz-based biological agent detection

    NASA Astrophysics Data System (ADS)

    Woolard, Dwight L.; Jensen, James O.

    2009-02-01

    The U.S. Army Research Office (ARO) and the U.S. Army Edgewood Chemical Biological Center (ECBC) jointly lead and support novel research programs that are advancing the state-of-the-art in nanoelectronic engineering in application areas that have relevance to national defense and security. One fundamental research area that is presently being emphasized by ARO and ECBC is the exploratory investigation of new bio-molecular architectural concepts that can be used to achieve rapid, reagent-less detection and discrimination of biological warfare (BW) agents, through the control of multi-photon and multi-wavelength processes at the nanoscale. This paper will overview an ARO/ECBC led multidisciplinary research program presently under the support of the U.S. Defense Threat Reduction Agency (DTRA) that seeks to develop new devices and nanoelectronic architectures that are effective for extracting THz signatures from target bio-molecules. Here, emphasis will be placed on the new nanosensor concepts and THz/Optical measurement methodologies for spectral-based sequencing/identification of genetic molecules.

  5. Quantum chromodynamics with advanced computing

    SciTech Connect

    Kronfeld, Andreas S.; /Fermilab

    2008-07-01

    We survey results in lattice quantum chromodynamics from groups in the USQCD Collaboration. The main focus is on physics, but many aspects of the discussion are aimed at an audience of computational physicists.

  6. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985). Final Report

    SciTech Connect

    Denning, P.J.

    1986-04-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  7. Aerodynamic Analyses Requiring Advanced Computers, Part 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers are presented which deal with results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include: viscous flows, boundary layer equations, turbulence modeling and Navier-Stokes equations, and internal flows.

  8. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  9. Aerodynamic Analyses Requiring Advanced Computers, Part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.

  10. Architectural requirements for the Red Storm computing system.

    SciTech Connect

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute-node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  11. Parallel algorithms and architecture for computation of manipulator forward dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), O(n²), and O(n³) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log²n) and O(n⁴), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n³) serial algorithms. Parallel computation of the O(n³) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.
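
    The O(n³) route that the authors parallelize (form the joint-space inertia matrix, factor it, then solve triangular systems) can be sketched in a few lines of NumPy for a toy system. The matrices here are random stand-ins, not a real manipulator model:

        import numpy as np

        # O(n^3)-style forward dynamics: solve M(q) qdd = tau - c(q, qd).
        # M is symmetric positive definite, so a Cholesky factorization plus
        # two triangular solves mirrors the decomposition/triangular-solve
        # steps parallelized in the paper. Numbers are illustrative only.
        n = 6
        rng = np.random.default_rng(1)
        A = rng.standard_normal((n, n))
        M = A @ A.T + n * np.eye(n)      # stand-in joint-space inertia matrix
        tau = rng.standard_normal(n)     # applied joint torques
        c = rng.standard_normal(n)       # bias (Coriolis + gravity) forces

        L = np.linalg.cholesky(M)        # M = L L^T
        y = np.linalg.solve(L, tau - c)  # forward substitution (lower triangular)
        qdd = np.linalg.solve(L.T, y)    # back substitution (upper triangular)

        print("max residual:", float(np.abs(M @ qdd - (tau - c)).max()))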

  12. Performance evaluation of the SX-6 vector architecture for scientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  13. Design of a fault tolerant airborne digital computer. Volume 1: Architecture

    NASA Technical Reports Server (NTRS)

    Wensley, J. H.; Levitt, K. N.; Green, M. W.; Goldberg, J.; Neumann, P. G.

    1973-01-01

    This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) the capacity is to be matched to the aircraft environment; (2) the reliability is to be selectively matched to the criticality and deadline requirements of each of the computations; (3) the system is to be readily expandable and contractible; and (4) the design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.

  14. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  15. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  16. Advanced space communications architecture study. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    Horstein, Michael; Hadinger, Peter J.

    1987-01-01

    The technical feasibility and economic viability of satellite system architectures that are suitable for customer premise service (CPS) communications are investigated. System evaluation is performed at 30/20 GHz (Ka-band); however, the system architectures examined are equally applicable to 14/11 GHz (Ku-band). Emphasis is placed on systems that permit low-cost user terminals. Frequency division multiple access (FDMA) is used on the uplink, with typically 10,000 simultaneous accesses per satellite, each of 64 kbps. Bulk demodulators onboard the satellite, in combination with a baseband multiplexer, convert the many narrowband uplink signals into a small number of wideband data streams for downlink transmission. Single-hop network interconnectivity is accomplished via downlink scanning beams. Each satellite is estimated to weigh 5600 lb and consume 6850W of power; the corresponding payload totals are 1000 lb and 5000 W. Nonrecurring satellite cost is estimated at $110 million, with the first-unit cost at $113 million. In large quantities, the user terminal cost estimate is $25,000. For an assumed traffic profile, the required system revenue has been computed as a function of the internal rate of return (IRR) on invested capital. The equivalent user charge per-minute of 64-kbps channel service has also been determined.

  17. Advanced laptop and small personal computer technology

    NASA Technical Reports Server (NTRS)

    Johnson, Roger L.

    1991-01-01

    Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computers and mobile workstation technology are covered: background, applications, high end products, technology trends, requirements for the Control Center application, and recommendations for the future.

  18. Advanced Biomedical Computing Center (ABCC)

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick, Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to engage in collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  19. Distributed sequence alignment applications for the public computing architecture.

    PubMed

    Pellicer, S; Chen, G; Chan, K C C; Pan, Y

    2008-03-01

    The public computing architecture shows promise as a platform for solving fundamental problems in bioinformatics such as global gene sequence alignment and data mining with tools such as the basic local alignment search tool (BLAST). Our implementation of these two problems on the Berkeley open infrastructure for network computing (BOINC) platform demonstrates a runtime reduction factor of 1.15 for sequence alignment and 16.76 for BLAST. While the runtime reduction factor of the global gene sequence alignment application is modest, this value is based on a theoretical sequential runtime extrapolated from the calculation of a smaller problem. Because this runtime is extrapolated from running the calculation in memory, the theoretical sequential runtime would require 37.3 GB of memory on a single system. With this in mind, the BOINC implementation not only offers the reduced runtime, but also the aggregation of the available memory of all participant nodes. If an actual sequential run of the problem were compared, a more drastic reduction in the runtime would be seen due to an additional secondary storage I/O overhead for a practical system. Despite the limitations of the public computing architecture, most notably in communication overhead, it represents a practical platform for grid- and cluster-scale bioinformatics computations today and shows great potential for future implementations. PMID:18334454
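
    As a rough illustration of the work-unit decomposition that such a public-computing port relies on, the sketch below carves a sequence database into independent chunks and reduces per-chunk best hits to a global best. The chunking scheme, the match-counting scorer, and all names are hypothetical stand-ins, not the authors' implementation.

      # Carve a database search into independent work units, BOINC-style.
      from itertools import islice

      def work_units(database, chunk_size):
          """Yield lists of sequences sized for one volunteer node."""
          it = iter(database)
          while chunk := list(islice(it, chunk_size)):
              yield chunk

      def align_score(a, b):
          # Stand-in for a real scorer (e.g. Smith-Waterman); counts matches.
          return sum(x == y for x, y in zip(a, b))

      def process_unit(query, unit):
          # Each node returns its best hit; the server keeps the global best.
          return max(((align_score(query, s), s) for s in unit), default=(0, None))

      database = ["ACGTAC", "ACGTTT", "TTGACA", "ACGTAG"]
      best = max(process_unit("ACGTAA", u) for u in work_units(database, 2))
      print(best)   # best (score, sequence) across all work units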

  20. Communication-efficient parallel architectures and algorithms for image computations

    SciTech Connect

    Alnuweiri, H.M.

    1989-01-01

    The main purpose of this dissertation is the design of efficient parallel techniques for image computations which require global operations on image pixels, as well as the development of parallel architectures with special communication features which can support global data movement efficiently. The class of image problems considered in this dissertation involves global operations on image pixels, and irregular (data-dependent) data movement operations. Such problems include histogramming, component labeling, proximity computations, computing the Hough Transform, computing convexity of regions and related properties such as computing the diameter and a smallest area enclosing rectangle for each region. Images with multiple figures and multiple labeled-sets of pixels are also considered. Efficient solutions to such problems involve integer sorting, graph theoretic techniques, and techniques from computational geometry. Although such solutions are not computationally intensive (they all require O(n²) operations to be performed on an n × n image), they require global communications. The emphasis here is on developing parallel techniques for data movement, reduction, and distribution, which lead to processor-time optimal solutions for such problems on the proposed organizations. The proposed parallel architectures are based on a memory array which can be viewed as an arrangement of memory modules in a k-dimensional space such that the modules are connected to buses placed parallel to the orthogonal axes of the space, and each bus is connected to one processor or a group of processors. It will be shown that such organizations are communication-efficient and are thus highly suited to the image problems considered here, and also to several other classes of problems. The proposed organizations have p processors and O(n²) words of memory to process n × n images.
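
    For concreteness, histogramming, one of the global operations named above, touches every pixel of the n × n image and reduces into one shared result; the serial baseline below is the O(n²) reference computation that the proposed bus-connected organizations parallelize. A minimal sketch, not the dissertation's code.

      # Serial O(n^2) histogram of an n x n gray-level image.
      import numpy as np

      def histogram(image, levels=256):
          hist = np.zeros(levels, dtype=np.int64)
          for v in image.ravel():      # one global pass over all n*n pixels
              hist[v] += 1
          return hist

      img = np.random.randint(0, 256, size=(64, 64))
      assert histogram(img).sum() == img.size   # every pixel counted once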

  1. The computational structural mechanics testbed architecture. Volume 2: The interface

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the third of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 3 describes the CLIP-Processor interface and related topics. It is intended only for processor developers.

  2. Integrated command, control, communications and computation system functional architecture

    NASA Technical Reports Server (NTRS)

    Cooley, C. G.; Gilbert, L. E.

    1981-01-01

    The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features which summarize the major system capabilities; (3) the operational activity threads which illustrate the interrelationship between the system elements; and (4) the interfaces which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.

  3. The computational structural mechanics testbed architecture. Volume 1: The language

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the first of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP, and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 1 presents the basic elements of the CLAMP language and is intended for all users.

  4. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1986-01-01

    Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.

  5. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by convention

  6. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
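
    To give a concrete flavor of the migration decisions a superscheduler makes, the sketch below sends each job to the site with the smallest estimated wait, charging remote sites a fixed migration cost. The cost model and all names here are illustrative assumptions; the paper's three migration algorithms are not reproduced.

      # Toy superscheduler: pick the site minimizing estimated wait.
      from dataclasses import dataclass

      @dataclass
      class Site:
          name: str
          queued_work: float = 0.0   # summed runtime estimates of queued jobs
          speed: float = 1.0         # relative processing speed

          def estimated_wait(self) -> float:
              return self.queued_work / self.speed

      def schedule(job_runtime, local, remotes, migration_cost):
          """Migrate only when a remote site wins despite the transfer cost."""
          best = min(remotes, key=Site.estimated_wait, default=local)
          if best.estimated_wait() + migration_cost < local.estimated_wait():
              chosen = best
          else:
              chosen = local
          chosen.queued_work += job_runtime
          return chosen

      local = Site("local", 50.0)
      remotes = [Site("remote-a", 10.0, 2.0), Site("remote-b", 30.0)]
      print(schedule(5.0, local, remotes, migration_cost=2.0).name)   # remote-a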

  7. Evaluation of leading scalar and vector architectures for scientific computations

    SciTech Connect

    Simon, Horst D.; Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Ethier, Stephane; Shalf, John

    2004-04-20

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This project examines the performance of the cacheless vector Earth Simulator (ES) and compares it to the superscalar cache-based IBM Power3 system. Results demonstrate that the ES is significantly faster than the Power3 architecture, highlighting the tremendous potential advantage of the ES for numerical simulation. However, vectorization of a particle-in-cell application (GTC) greatly increased the memory footprint, preventing loop-level parallelism and limiting scalability potential.

  8. The Tradeoffs of Fused Memory Hierarchies in Heterogeneous Computing Architectures

    SciTech Connect

    Spafford, Kyle L; Meredith, Jeremy S; Lee, Seyong; Li, Dong; Roth, Philip C; Vetter, Jeffrey S

    2012-01-01

    With the rise of general purpose computing on graphics processing units (GPGPU), the influence from consumer markets can now be seen across the spectrum of computer architectures. In fact, many of the high-ranking Top500 HPC systems now include these accelerators. Traditionally, GPUs have connected to the CPU via the PCIe bus, which has proved to be a significant bottleneck for scalable scientific applications. Now, a trend toward tighter integration between CPU and GPU has removed this bottleneck and unified the memory hierarchy for both CPU and GPU cores. We examine the impact of this trend for high performance scientific computing by investigating AMD's new Fusion Accelerated Processing Unit (APU) as a testbed. In particular, we evaluate the tradeoffs in performance, power consumption, and programmability when comparing this unified memory hierarchy with similar, but discrete GPUs.

  9. Advanced information processing system: Inter-computer communication services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.

    1991-01-01

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

  10. An evolutionary method for synthesizing technological planning and architectural advance

    NASA Astrophysics Data System (ADS)

    Cole, Bjorn Forstrom

    the appropriate technological antecedents are accounted for in developing the projection. The third chapter of the thesis compiles a series of observations and philosophical considerations into a series of research questions. Some research questions are then answered with further thought, observation, and reading, leading to conjectures on the problem. The remainder require some form of experimentation, and so are used to formulate hypotheses. Falsifiability conditions are then generated from those hypotheses, and used to guide the development of experiments to be performed, in this case on a computer, under various conditions of use of a genetic algorithm. The fourth chapter of the thesis walks through the formulation of a method to attack the problem of strategically choosing an architecture. This method is designed to find the optimum architecture under multiple conditions, which is required for the ability to play the "what if" games typically undertaken in strategic situations. The chapter walks through a graph-based representation of architecture, provides the rationale for choosing a given technology forecasting technique, and lays out the implementation of the optimization algorithm, named Sindri, within a commercial analysis code, Pacelab. The fifth chapter of the thesis then tests the Sindri code. The first test applied is a series of standardized combinatorial spaces, which are meant to be analogous to test problems traditionally posed to optimizers (e.g., Rosenbrock's valley function). The results from this test assess the value of various operators used to transform the architecture graph in the course of conducting a genetic search. Finally, this method is employed on a test case involving the transition of a miniature helicopter from glow engine to battery propulsion, and finally to a design where the battery functions as both structure and power source. The final two chapters develop conclusions based on the body of work conducted within this thesis and

  11. Biomorphic Multi-Agent Architecture for Persistent Computing

    NASA Technical Reports Server (NTRS)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  12. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.
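
    The provides/uses port pattern at the heart of the CCA can be suggested with a toy analogue: one component registers a port implementation with framework services, another looks it up by name and calls through the shared interface, in-process and without serialization. This Python sketch is only a loose analogue under assumed names; real CCA components program against the gov.cca interfaces rather than any API shown here.

      # Toy provides/uses port wiring, loosely in the spirit of the CCA.
      class Services:
          """Registry the framework hands to each component."""
          def __init__(self):
              self._ports = {}

          def add_provides_port(self, name, port):
              self._ports[name] = port

          def get_port(self, name):       # a "uses" connection
              return self._ports[name]

      class IntegratorPort:               # the contract both sides agree on
          def integrate(self, f, a, b, n): ...

      class MidpointIntegrator(IntegratorPort):
          def integrate(self, f, a, b, n):
              h = (b - a) / n
              return h * sum(f(a + (i + 0.5) * h) for i in range(n))

      svc = Services()
      svc.add_provides_port("IntegratorPort", MidpointIntegrator())
      user = svc.get_port("IntegratorPort")
      print(user.integrate(lambda x: x * x, 0.0, 1.0, 1000))   # ~0.333333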

  13. An Architectural Design System Based on Computer Graphics.

    ERIC Educational Resources Information Center

    MacDonald, Stephen L.; Wehrli, Robert

    The recent developments in computer hardware and software are presented to inform architects of this design tool. Technical advancements in equipment include--(1) cathode ray tube displays, (2) light pens, (3) print-out and photo copying attachments, (4) controls for comparison and selection of images, (5) chording keyboards, (6) plotters, and (7)…

  14. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

  15. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively more difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical
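
    A hedged sketch of the multiple-shooting structure mentioned above: the trajectory is split into segments that can be integrated independently (the part a GPU parallelizes across), and a root-finder drives the segment-boundary defects to zero. The dynamics below are a stand-in harmonic oscillator, not a vehicle model, and the segment count is illustrative.

      # Multiple shooting: independent segment integrations + defect constraints.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import fsolve

      def dynamics(t, y):                  # stand-in dynamics: y'' = -y
          return [y[1], -y[0]]

      T, SEGMENTS = 2.0, 4
      knots = np.linspace(0.0, T, SEGMENTS + 1)

      def residuals(z):
          states = z.reshape(SEGMENTS + 1, 2)
          res = [states[0] - np.array([1.0, 0.0])]      # initial condition
          for i in range(SEGMENTS):                     # independent segments
              sol = solve_ivp(dynamics, (knots[i], knots[i + 1]), states[i])
              res.append(sol.y[:, -1] - states[i + 1])  # continuity defect
          return np.concatenate(res)

      states = fsolve(residuals, np.tile([1.0, 0.0], SEGMENTS + 1))
      print(states.reshape(-1, 2)[-1], "vs exact", [np.cos(T), -np.sin(T)])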

  16. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  17. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  18. A two-dimensional advanced systolic array and its arithmetic architecture and design

    SciTech Connect

    Jun, M.S.

    1989-01-01

    The rapid advances in very large scale integration (VLSI) technology have created a flurry of research in designing future computer architectures. Many methods have been developed for parallel processing of algorithms by directly mapping them onto parallel architectures. We present new methodologies for design of systolic arrays and asynchronous arrays that implement recursive algorithms efficiently. Using the new methods, we develop a systolic array with very simple local interconnection for matrix multiplication which achieves optimal performance without using any undesirable properties such as preloading input data or global broadcasting. We prove the correctness of the matrix multiplication algorithms on the systolic array with space-time parameters. The implementations of the algorithms can be easily proved and can be systolically expanded. We also develop a multi-purpose built-in logic for asynchronous self-test (BLAST) modules in processing elements. An asynchronous array for matrix multiplication which can speed up the total computation time significantly is also presented. To demonstrate the power of the proposed systolic array, the array will be applied to the shortest path problem by using the partitioned mapping approach which will be the key to extend the computational capacity of VLSI architectures with fixed size. The utilization of partitioning algorithms can overcome difficulties in the management of a large-size graph. To achieve the highest possible computation speed of the systolic array, we develop a prefix carry-lookahead adder/subtractor which achieves the maximal possible parallelism. The new carry-lookahead design leads to high-speed adders/subtractors with a regular layout. The time complexity is 2log₂n - 1 while the Brent-Kung scheme has 4log₂n.
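
    The parallel-prefix idea behind such a carry-lookahead design can be modeled in a few lines: per-bit generate/propagate pairs are combined in log₂n rounds (Kogge-Stone style), after which every carry is known at once instead of rippling through n positions. The pure-Python model below illustrates the technique only; it is not the dissertation's circuit.

      # Prefix carry-lookahead addition via a Kogge-Stone-style scan.
      def prefix_cla_add(a: int, b: int, n: int = 16) -> int:
          g = [(a >> i) & (b >> i) & 1 for i in range(n)]       # generate
          p = [((a >> i) ^ (b >> i)) & 1 for i in range(n)]     # propagate
          d = 1
          while d < n:   # log2(n) combining rounds
              g = [g[i] | (p[i] & g[i - d]) if i >= d else g[i] for i in range(n)]
              p = [p[i] & p[i - d] if i >= d else p[i] for i in range(n)]
              d *= 2
          carry = [0] + g[:n - 1]                               # carry into bit i
          s = 0
          for i in range(n):
              s |= (((a >> i) ^ (b >> i) ^ carry[i]) & 1) << i
          return s

      assert all(prefix_cla_add(x, y) == (x + y) & 0xFFFF
                 for x, y in [(1, 1), (0x00FF, 1), (0xFFFF, 1), (1234, 4321)])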

  19. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies that are aimed at enabling NASA's ability to explore beyond low earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project shifts its focus to developing low-power, high efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability gives avionic architectures the ability to develop FPGA-based, radiation-tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for

  20. Modern hardware architectures accelerate porous media flow computations

    NASA Astrophysics Data System (ADS)

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, particularly porosity and permeability, which determine the transport characteristics of the medium, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow a vast range of petrophysical properties to be obtained. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by sticking together 2D slices of scanned rock and applying a proper absorption cut-off point. Proper segmentation of the pore representation in 3D is important for solving the permeability of porous media. This factor is now commonly determined by means of Computational Fluid Dynamics (CFD), a popular method for analyzing problems related to fluid flows that takes advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-core, many-core, and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and new built-in features. The high level of parallel scalability offers both a decrease in time-to-solution and greater accuracy, top factors in reservoir engineering. This paper presents research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach total porosity is calculated by means of general-purpose computing on multiple GPUs. The application sticks together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of the pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity, and nitrogen permeability tests. CFD simulations are then performed on a large-scale high-performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used Lattice Boltzmann
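
    At its core, the total-porosity step described above reduces to thresholding the reconstructed volume at the absorption cut-off and counting pore voxels; the marching-tetrahedra surface extraction is built on top of that segmentation. A minimal NumPy sketch, with an illustrative cutoff and random data standing in for stacked CT slices:

      # Total porosity = fraction of voxels below the absorption cut-off.
      import numpy as np

      def total_porosity(volume, cutoff):
          pores = volume < cutoff          # boolean pore mask
          return pores.sum() / volume.size

      rng = np.random.default_rng(0)
      ct = rng.random((128, 128, 128))     # stand-in for stacked 2D CT slices
      print(f"porosity = {total_porosity(ct, cutoff=0.2):.3f}")   # ~0.200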

  1. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable/high throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. It contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements and analytical models.

  2. Advanced Computed-Tomography Inspection System

    NASA Technical Reports Server (NTRS)

    Harris, Lowell D.; Gupta, Nand K.; Smith, Charles R.; Bernardi, Richard T.; Moore, John F.; Hediger, Lisa

    1993-01-01

    Advanced Computed Tomography Inspection System (ACTIS) is computed-tomography x-ray apparatus revealing internal structures of objects in wide range of sizes and materials. Three x-ray sources and adjustable scan geometry give system unprecedented versatility. Gantry contains translation and rotation mechanisms scanning x-ray beam through object inspected. Distance between source and detector towers varied to suit object. System used in such diverse applications as development of new materials, refinement of manufacturing processes, and inspection of components.

  3. Advances and Challenges in Computational Plasma Science

    SciTech Connect

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  4. The computational structural mechanics testbed architecture. Volume 5: The Input-Output Manager DMGASP

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1989-01-01

    This is the fifth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language (CLAMP), the command language interpreter (CLIP), and the data manager (GAL). Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 5 describes the low-level data management component of the NICE software. It is intended only for advanced programmers involved in maintenance of the software.

  5. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  6. Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization

    NASA Technical Reports Server (NTRS)

    Baker, Robert L.

    1993-01-01

    The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant or fault tolerance characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost reduction and performance capabilities and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture and qualitative assessments of testability, maintainability, and fault tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried out under NASA Langley Research Contract NAS1-17964, Task Assignment 28.

  7. Moon-Based Advanced Reusable Transportation Architecture: The MARTA Project

    NASA Astrophysics Data System (ADS)

    Alexander, R.; Bechtel, R.; Chen, T.; Cormier, T.; Kalaver, S.; Kirtas, M.; Lewe, J.-H.; Marcus, L.; Marshall, D.; Medlin, M.; McIntire, J.; Nelson, D.; Remolina, D.; Scott, A.; Weglian, J.; Olds, J.

    2000-01-01

    The Moon-based Advanced Reusable Transportation Architecture (MARTA) Project conducted an in-depth investigation of possible Low Earth Orbit (LEO) to lunar surface transportation systems capable of sending both astronauts and large masses of cargo to the Moon and back. This investigation was conducted from the perspective of a private company operating the transportation system for a profit. The goal of this company was to provide an Internal Rate of Return (IRR) of 25% to its shareholders. The technical aspect of the study began with a wide open design space that included nuclear rockets and tether systems as possible propulsion systems. Based on technical, political, and business considerations, the architecture was quickly narrowed down to a traditional chemical rocket using liquid oxygen and liquid hydrogen. However, three additional technologies were identified for further investigation: aerobraking, in-situ resource utilization (ISRU), and a mass driver on the lunar surface. These three technologies were identified because they reduce the mass of propellant used. Operational costs are the largest expense, with propellant cost the largest contributor. ISRU, the production of materials using resources on the Moon, was considered because an Earth to Orbit (ETO) launch cost of $1600 per kilogram made taking propellant from the Earth's surface an expensive proposition. The use of an aerobrake to circularize the orbit of a vehicle coming from the Moon towards Earth eliminated 3,100 meters per second of velocity change (Delta V), almost 30% of the 11,200 m/s required for one complete round trip. The use of a mass driver on the lunar surface, in conjunction with an ISRU production facility, would reduce the amount of propellant required by eliminating the use of propellant to carry additional propellant from the lunar surface to Low Lunar Orbit (LLO). However, developing and operating such a system required further study to identify if it was cost effective. The

  8. Advances in computational fluid dynamics solvers for modern computing environments

    NASA Astrophysics Data System (ADS)

    Hertenstein, Daniel; Humphrey, John R.; Paolini, Aaron L.; Kelmelis, Eric J.

    2013-05-01

    EM Photonics has been investigating the application of massively multicore processors to a key problem area: Computational Fluid Dynamics (CFD). While the capabilities of CFD solvers have continually increased and improved to support features such as moving bodies and adjoint-based mesh adaptation, the software architecture has often lagged behind. This has led to poor scaling as core counts reach the tens of thousands. In the modern High Performance Computing (HPC) world, clusters with hundreds of thousands of cores are becoming the standard. In addition, accelerator devices such as NVIDIA GPUs and Intel Xeon Phi are being installed in many new systems. It is important for CFD solvers to take advantage of the new hardware as the computations involved are well suited for the massively multicore architecture. In our work, we demonstrate that new features in NVIDIA GPUs are able to empower existing CFD solvers by example using AVUS, a CFD solver developed by the Air Force Research Laboratory (AFRL) and the Volcanic Ash Advisory Center (VAAC). The effort has resulted in increased performance and scalability without sacrificing accuracy. There are many well-known codes in the CFD space that can benefit from this work, such as FUN3D, OVERFLOW, and TetrUSS. Such codes are widely used in the commercial, government, and defense sectors.
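
    Solvers of this kind spend most of their time in stencil sweeps, which is what makes them a good fit for massively multicore hardware: each grid point updates from a few fixed neighbors, independently of every other point. The NumPy sketch below shows the data-parallel form of one 2D Jacobi relaxation sweep; on a GPU the same whole-array expression maps to a single kernel launch. It is a generic illustration, not AVUS code.

      # One data-parallel Jacobi relaxation sweep on a 2D grid.
      import numpy as np

      def jacobi_sweep(u):
          v = u.copy()
          v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
          return v

      u = np.zeros((64, 64))
      u[0, :] = 1.0                        # fixed boundary condition
      for _ in range(500):
          u = jacobi_sweep(u)
      print(u[32, 32])                     # interior value after 500 sweeps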

  9. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  10. Advanced networks and computing in healthcare.

    PubMed

    Ackerman, Michael; Locatis, Craig

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  11. Advanced Algebra and Trigonometry: Supplemental Computer Units.

    ERIC Educational Resources Information Center

    Dotseth, Karen

    A set of computer-oriented, supplemental activities is offered which can be used with a course in advanced algebra and trigonometry. The activities involve use of the BASIC programming language; it is assumed that the teacher is familiar with programming in BASIC. Students will learn some BASIC; however, the intent is not to develop proficient…

  12. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  13. An architecture for quantum computation with magnetically trapped Holmium atoms

    NASA Astrophysics Data System (ADS)

    Saffman, Mark; Hostetter, James; Booth, Donald; Collett, Jeffrey

    2016-05-01

    Outstanding challenges for scalable neutral atom quantum computation include correction of atom loss due to collisions with untrapped background gas, reduction of crosstalk during state preparation and measurement due to scattering of near resonant light, and the need to improve quantum gate fidelity. We present a scalable architecture based on loading single Holmium atoms into an array of Ioffe-Pritchard traps. The traps are formed by grids of superconducting wires giving a trap array with 40 μm period, suitable for entanglement via long range Rydberg gates. The states |F=5, M=5⟩ and |F=7, M=7⟩ provide a magic trapping condition at a low field of 3.5 G for long coherence time qubit encoding. The F = 11 level will be used for state preparation and measurement. The availability of different states for encoding, gate operations, and measurement spectroscopically isolates the different operations and will prevent crosstalk to neighboring qubits. Operation in a cryogenic environment with ultra low pressure will increase atom lifetime and Rydberg gate fidelity by reduction of blackbody induced Rydberg decay. We will present a complete description of the architecture including estimates of achievable performance metrics. Work supported by NSF award PHY-1404357.

  14. Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool

    NASA Astrophysics Data System (ADS)

    Ahsan, Muhammad

    The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. The detailed investigation of performance variation with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling the quantum circuit on a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify system components which crucially define the overall performance. Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while success probability is uniformly determined by the fidelity of physical quantum operations, the execution time is a function of system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit block execution. When these blocks are used to construct meaningful arithmetic circuits such as quantum adders, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources to establish long-distance communication channels become major performance-limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit comprising the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with problem size describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology
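
    The finding that success probability is uniformly set by physical gate fidelity has a simple first-order model: if each of N gates succeeds independently with fidelity f, the whole circuit succeeds with probability f^N. A toy computation with illustrative numbers:

      # First-order circuit success probability from per-gate fidelity.
      f, n_gates = 0.999, 5000
      print(f"success probability ~ {f ** n_gates:.3f}")   # ~0.007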

  15. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  16. Examining the architecture of cellular computing through a comparative study with a computer.

    PubMed

    Wang, Degeng; Gribskov, Michael

    2005-06-22

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in a parallel manner. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179

  17. Final Report: Super Instruction Architecture for Scalable Parallel Computations

    SciTech Connect

    Sanders, Beverly Ann; Bartlett, Rodney; Deumens, Erik

    2013-12-23

    The most advanced methods for reliable and accurate computation of the electronic structure of molecular and nano systems are the coupled-cluster techniques. These high-accuracy methods help us to understand, for example, how biological enzymes operate and contribute to the design of new organic explosives. The ACES III software provides a modern, high-performance implementation of these methods optimized for high performance parallel computer systems, ranging from small clusters typical of individual research groups, through larger clusters available in campus and regional computer centers, all the way to high-end petascale systems at national labs, including exploiting GPUs if available. This project enhanced the ACES III software package and used it to study interesting scientific problems.

  18. Advanced Energy Conversion Technologies and Architectures for Earth and Beyond

    NASA Technical Reports Server (NTRS)

    Howell, Joe T.; Fikes, John C.; Phillips, Dane J.; Laycock, Rustin L.; O'Neill, Mark; Henley, Mark W.; Fork, Richard L.

    2006-01-01

    Research, development and studies of novel space-based solar power systems, technologies and architectures for Earth and beyond are needed to reduce the cost of clean electrical power for terrestrial use and to provide a stepping stone for providing an abundance of power in space, e.g., manufacturing facilities, tourist facilities, delivery of power between objects in space, and between space and surface sites. The architectures, technologies and systems needed for space-to-Earth applications may also be used for in-space applications. Advances in key technologies, i.e., power generation, power management and distribution, power beaming and conversion of beamed power, are needed to achieve the objectives of both terrestrial and extraterrestrial applications. There is a need to produce "proof-of-concept" validation of critical WPT technologies for both near-term and far-term applications. Investments may be harvested in near-term beam-safe demonstrations of commercial WPT applications. Receiving sites (users) include ground-based stations for terrestrial electrical power, orbital sites to provide power for satellites and other platforms, future space elevator systems, space vehicle propulsion, and space surface sites. Space surface receiving sites of particular interest include the areas of permanent shadow near the moon's North and South poles, where WPT technologies could enable access to ice and other useful resources for human exploration. This paper discusses work addressing a promising approach to solar power generation and beamed power conversion. The approach is based on a unique high-power solar concentrator array called Stretched Lens Array (SLA) applied to both solar power generation and beamed power conversion. Since both versions (solar and laser) of SLA use many identical components (only the photovoltaic cells need to be different), economies of manufacturing and scale may be realized by using SLA on both ends of the laser power beaming

  19. Advances in Computational Capabilities for Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Kumar, Ajay; Gnoffo, Peter A.; Moss, James N.; Drummond, J. Philip

    1997-01-01

    The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980's to the present day. The current status of the code development issues such as surface and field grid generation, algorithms, physical and chemical modeling, and validation is provided. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance accuracy, reliability, efficiency, and robustness of computational codes are discussed.

  20. Advances in Architectural Elements For Future Missions to Titan

    NASA Astrophysics Data System (ADS)

    Reh, Kim; Coustenis, Athena; Lunine, Jonathan; Matson, Dennis; Lebreton, Jean-Pierre; Vargas, Andre; Beauchamp, Pat; Spilker, Tom; Strange, Nathan; Elliott, John

    2010-05-01

    The future exploration of Titan is of high priority for the solar system exploration community, as recommended by the 2003 National Research Council (NRC) Decadal Survey [1] and ESA's Cosmic Vision Program themes. Recent Cassini-Huygens discoveries continue to emphasize that Titan is a complex world with many Earth-like features. Titan has a dense nitrogen atmosphere, an active climate and meteorological cycles where conditions are such that the working fluid, methane, plays the role that water does on Earth. Titan's surface, with lakes and seas, broad river valleys, sand dunes and mountains, was formed by processes like those that have shaped the Earth. Supporting this panoply of Earth-like processes is an ice crust that floats atop what might be a liquid water ocean. Furthermore, Titan is rich in organic compounds—more so than anywhere in the solar system, except Earth. The Titan Saturn System Mission (TSSM) concept that followed the 2007 TandEM ESA CV proposal [2] and the 2007 Titan Explorer NASA Flagship study [3] was examined [4,5] and prioritized by NASA and ESA in February 2009 as a mission to follow the Europa Jupiter System Mission. The TSSM study, like others before it, again concluded that an orbiter, a montgolfiere hot-air balloon and a surface package (e.g. lake lander, Geosaucer (instrumented heat shield), …) are very high priority elements for any future mission to Titan. Such missions could be conceived as Flagship/Cosmic Vision L-Class or as individual smaller missions that could possibly fit into NASA New Frontiers or ESA Cosmic Vision M-Class budgets. As a result of a multitude of Titan mission studies, a clear blueprint has been laid out for the work needed to reduce the risks inherent in such missions, and the areas where advances would be beneficial for elements critical to future Titan missions have been identified. The purpose of this paper is to provide a brief overview of the flagship mission architecture and

  1. Earth Science Computational Architecture for Multi-disciplinary Investigations

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Blom, R.; Gurrola, E.; Katz, D.; Lyzenga, G.; Norton, C.

    2005-12-01

    Understanding the processes underlying Earth's deformation and mass transport requires a non-traditional, integrated, interdisciplinary approach that depends on multiple space- and ground-based data sets, modeling, and computational tools. Currently, details of geophysical data acquisition, analysis, and modeling largely limit research to discipline domain experts. Interdisciplinary research requires a new computational architecture that is optimized to perform complex data processing of multiple solid Earth science data types in a user-friendly environment. A web-based computational framework is being developed and integrated with applications for automatic interferometric radar processing, and models for high-resolution deformation & gravity, forward models of viscoelastic mass loading over short wavelengths & complex time histories, forward-inverse codes for characterizing surface loading-response over time scales of days to tens of thousands of years, and inversion of combined space magnetic & gravity fields to constrain deep crustal and mantle properties. This framework combines an adaptation of the QuakeSim distributed services methodology with the Pyre framework for multiphysics development. The system uses a three-tier architecture, with a middle-tier server that manages user projects, available resources, and security. This ensures scalability to very large networks of collaborators. Users log into a web page and have a personal project area, persistently maintained between connections, for each application. Upon selection of an application and host from a list of available entities, inputs may be uploaded or constructed from web forms and available data archives, including gravity, GPS and imaging radar data. The user is notified of job completion and directed to results posted via URLs. Interdisciplinary work is supported through easy availability of all applications via common browsers, application tutorials and reference guides, and worked examples with

  2. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) to government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves processes such as capturing, preserving, manipulating, and delivering information. E-Governance is meant to transform governance so that it is transparent, reliable, participatory, and accountable from the citizen's point of view. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, fosters innovation, and identifies optimal ways to deliver services to citizens and to implement them in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as an essential element of the service-oriented computing model, providing a dynamically extensible structural design in which every department or branch can introduce innovative services. The heart of the paper is a conceptual model that enables e-government communication among business, citizens, government, and autonomous bodies.

  3. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  4. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Rassau, Alex; Lachowicz, Stefan; Lee, Mike Myung-Ok; Eshraghian, Kamran

    2006-12-01

    This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  5. Program partitioning and scheduling for NUMA computer architectures

    SciTech Connect

    Wolski, R.M.

    1994-03-01

    To effect the parallel execution of a program on a multiprocessor, each of the program's constituent computations must be assigned to a processing resource within the multiprocessor. The problem of making this assignment so that execution time is minimized (known as the mapping problem) has been shown to be NP-complete. However, heuristics based on the performance characteristics of the target multiprocessor can yield execution times that approach the minimum possible. The mapping problem can be divided into the problem of partitioning the computations into sequential threads, and the problem of scheduling those threads on the processors of the target system. This dissertation presents a logical framework and a set of heuristics that operate within the framework for the automatic partitioning and scheduling of programs at compile-time. The framework is based on the memory-node execution model, which correctly captures the interaction between computations, processors, and the communication resources within a multiprocessor. The CP and HEF heuristics manipulate the features of the memory-node model to produce efficient program mappings. The effectiveness of the partitioning and scheduling techniques is investigated for Non-uniform Memory Access (NUMA) architecture types. To test the versatility of the approach, results are presented both for processors implementing strict execution semantics, and for non-strict load/store semantics popular with RISC systems. The partitioner and scheduler are also used to investigate the possible advantages of multithreading (using either hardware or software), and the effectiveness of massively parallel systems, within a scientific programming context.
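
    The dissertation's CP and HEF heuristics are not spelled out in this abstract, but the general flavor of critical-path list scheduling can be sketched in a few lines of Python (the names and the greedy placement rule below are illustrative, not the dissertation's algorithms):

        # Critical-path list scheduling over a task DAG (illustrative sketch).
        def critical_path_lengths(tasks, succ, cost):
            """Longest cost-weighted path from each task to a DAG exit."""
            memo = {}
            def cp(t):
                if t not in memo:
                    memo[t] = cost[t] + max((cp(s) for s in succ.get(t, [])), default=0)
                return memo[t]
            return {t: cp(t) for t in tasks}

        def list_schedule(tasks, succ, cost, n_procs):
            prio = critical_path_lengths(tasks, succ, cost)
            pred = {t: [] for t in tasks}
            for t, ss in succ.items():
                for s in ss:
                    pred[s].append(t)
            free = [0.0] * n_procs          # next-free time per processor
            finish, placement = {}, {}
            # Descending CP priority is a topological order for positive costs.
            for t in sorted(tasks, key=lambda t: -prio[t]):
                ready = max((finish[p] for p in pred[t]), default=0.0)
                i = min(range(n_procs), key=lambda i: free[i])
                finish[t] = max(free[i], ready) + cost[t]
                free[i] = finish[t]
                placement[t] = i
            return placement

        succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
        cost = {"a": 2, "b": 3, "c": 1, "d": 2}
        print(list_schedule(list(cost), succ, cost, n_procs=2))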

  6. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
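
    The core of the triplex-to-duplex-to-simplex idea can be sketched as a majority voter that drops disagreeing lanes (a deliberately minimal Python illustration; the actual ARCS redundancy management, including transient-fault recovery, is far more involved):

        from collections import Counter

        def vote(lanes):
            """Majority-vote lane outputs; report lanes that disagree."""
            value, _ = Counter(lanes.values()).most_common(1)[0]
            faulty = [lane for lane, out in lanes.items() if out != value]
            return value, faulty

        lanes = {"A": 42, "B": 42, "C": 41}   # lane C exhibits a fault
        value, faulty = vote(lanes)
        for lane in faulty:                   # reconfigure: triplex -> duplex
            del lanes[lane]
        # In duplex, real systems fall back on comparison plus self-test;
        # in simplex, the lone remaining lane is trusted.
        print(value, "remaining lanes:", sorted(lanes))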

  7. An advanced mixed user domestic satellite system architecture

    NASA Technical Reports Server (NTRS)

    Raymond, H. G.; Holmes, W. M., Jr.

    1980-01-01

    A domestic satellite system architecture that can efficiently and economically accommodate a wide variety of disparate user classes is described and a baseline system configuration identified. With such a technique, both the efficiency of TDMA operation and the operational terminal flexibility of FDMA can be simultaneously achieved.

  8. Recent advances in computational actinoid chemistry.

    PubMed

    Wang, Dongqi; van Gunsteren, Wilfred F; Chai, Zhifang

    2012-09-01

    We briefly review advances in computational actinoid (An) chemistry during the past ten years in regard to two issues: the geometrical and electronic structures, and reactions. The former addresses the An-O, An-C, and M-An (M is a metal atom including An) bonds in actinoid molecular systems, including actinoid oxo and oxide species, actinoid-carbenoid, dinuclear and diatomic systems, and the latter the hydration and ligand exchange, the disproportionation, the oxidation, the reduction of uranyl, hydroamination, and the photolysis of uranium azide. Concerning their relevance to the electronic structures and reactions of actinoids and their importance in the development of an advanced nuclear fuel cycle, we also mention work on actinoid carbides and nitrides, which have been proposed as candidates for the next generation of nuclear fuels, and the oxidation of PuO(x), which is important for understanding the speciation of actinoids in the environment, followed by a brief discussion of the urgent need for heavier involvement of computational actinoid chemistry in developing advanced reprocessing protocols for spent nuclear fuel. The paper is concluded with an outlook. PMID:22777520

  9. Specification, Design, and Analysis of Advanced HUMS Architectures

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using a scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas: (a) to improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) to improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) to evaluate the software architecture. 2) We have defined a new architectural language called HADL, or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) selection of solutions from a large space of designs; (b) synthesis of designs. The automation is not a pure Artificial Intelligence (AI) approach, though it uses a knowledge-based system that captures a specific HUMS domain. The process uses a database of solutions as an aid to solve problems rather than creating a new design in the literal sense. Since searching is adopted as the main technique, the challenges involved are: (a) to minimize the effort in searching the database, where a very large number of possibilities exist; (b) to develop representations that conveniently depict design knowledge evolved over many years; (c) to capture the required information that aid the
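
    Because HADL is XML-based, a stock parser is all that is needed to consume a specification. The element and attribute names in this sketch are hypothetical, since the abstract does not give the HADL schema:

        import xml.etree.ElementTree as ET

        # Hypothetical HADL fragment (the real schema is not shown here).
        hadl_doc = """
        <architecture name="hums-demo">
          <component id="sensor-mgr" type="acquisition"/>
          <component id="diagnoser" type="reasoning"/>
          <connector from="sensor-mgr" to="diagnoser"/>
        </architecture>
        """

        root = ET.fromstring(hadl_doc)
        for comp in root.findall("component"):
            print(comp.get("id"), "->", comp.get("type"))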

  10. Architectures of Kepler Planet Systems with Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.; Ford, Eric B.

    2015-12-01

    The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
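
    The simplest form of ABC described here fits in a few lines of Python. This toy replaces the paper's physically motivated forward model (full transit geometry, detection and vetting biases) with a one-parameter stand-in, purely to show the accept/reject mechanics:

        import random

        def forward_model(theta, n=200):
            # Toy stand-in for the population model: a synthetic "dataset"
            # whose mean is controlled by the parameter theta.
            return [random.expovariate(1.0 / theta) for _ in range(n)]

        def distance(sim, obs):
            return abs(sum(sim) / len(sim) - sum(obs) / len(obs))

        observed = forward_model(2.0)            # pretend this is the real sample
        posterior = []
        while len(posterior) < 500:
            theta = random.uniform(0.1, 5.0)     # draw from the prior
            if distance(forward_model(theta), observed) < 0.1:  # tolerance
                posterior.append(theta)          # accept
        print("approx. posterior mean:", sum(posterior) / len(posterior))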

  11. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
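
    The "consistent interface" idea can be sketched as a thin facade over heterogeneous memory devices; wear-leveling and power management hide behind a single store() call. Class and method names here are illustrative, not from the paper:

        class StorageNode:
            def __init__(self, name, capacity):
                self.name, self.capacity, self.writes = name, capacity, 0

            def store(self, key, blob):
                self.writes += 1                 # crude proxy for wear
                print(f"{self.name}: stored {key} ({len(blob)} bytes)")

        class CloudStore:
            """One interface over many nodes, e.g. on a SpaceWire network."""
            def __init__(self, nodes):
                self.nodes = nodes

            def store(self, key, blob):
                node = min(self.nodes, key=lambda n: n.writes)  # wear-leveling
                node.store(key, blob)

        store = CloudStore([StorageNode("mem-a", 1 << 20), StorageNode("mem-b", 1 << 20)])
        for i in range(4):
            store.store(f"frame-{i}", b"\x00" * 512)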

  12. PHENIX On-Line Distributed Computing System Architecture

    SciTech Connect

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-05-22

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors, which are further subdivided into 29 units ("granules") that can be operated independently, including simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level-1 trigger decision. Zero suppression and calibration are done after the level-1 accept in custom-built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. First, it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes, and archiving it at a data rate of 20 MB/sec. Second, it is responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration of several thousand custom-built hardware modules. The software must furthermore support the independent operation of the above mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as the communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We will give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS, and report on the status of the current prototypes.
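
    A run-control object implemented as a finite state machine reduces to a transition table; this minimal Python sketch (states and events invented for illustration) omits the CORBA layer that distributes such objects across the VME processors and workstations:

        TRANSITIONS = {
            ("idle", "configure"): "configured",
            ("configured", "start"): "running",
            ("running", "stop"): "configured",
            ("configured", "reset"): "idle",
        }

        class Granule:
            def __init__(self, name):
                self.name, self.state = name, "idle"

            def handle(self, event):
                # Unknown (state, event) pairs leave the state unchanged.
                self.state = TRANSITIONS.get((self.state, event), self.state)
                print(f"{self.name}: {event} -> {self.state}")

        g = Granule("drift-chamber")
        for event in ["configure", "start", "stop"]:
            g.handle(event)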

  13. 75 FR 43518 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, DOE. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing Advisory..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  14. 78 FR 6087 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  15. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Advanced Scientific Computing Advisory Committee Charter Renewal AGENCY: Department of Energy, Office of... Administration, notice is hereby given that the Advanced Scientific Computing Advisory Committee will be renewed... concerning the Advanced Scientific Computing program in response only to charges from the Director of...

  16. 76 FR 9765 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  17. 78 FR 41046 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... hereby given that the Advanced Scientific Computing Advisory Committee will be renewed for a two-year... (DOE), on the Advanced Scientific Computing Research Program managed by the Office of...

  18. 75 FR 9887 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  19. 78 FR 56871 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  20. Computational Design of Advanced Nuclear Fuels

    SciTech Connect

    Savrasov, Sergey; Kotliar, Gabriel; Haule, Kristjan

    2014-06-03

    The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides, carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding the complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics, as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer-based simulations and avoid costly experiments.
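
    For reference, single-site dynamical mean field theory closes on a self-consistency condition of the generic form below (the textbook statement, not this project's specific implementation): the local lattice Green's function, computed with a momentum-independent self-energy, must equal that of an auxiliary impurity model,

        G_{\mathrm{loc}}(i\omega_n) = \sum_{\mathbf{k}} \left[ i\omega_n + \mu - \epsilon_{\mathbf{k}} - \Sigma(i\omega_n) \right]^{-1} = G_{\mathrm{imp}}(i\omega_n).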

  1. Space Power Architectures for NASA Missions: The Applicability and Benefits of Advanced Power and Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.

    2001-01-01

    The relative importance of electrical power systems as compared with other spacecraft bus systems is examined. The quantified benefits of advanced space power architectures for NASA Earth Science, Space Science, and Human Exploration and Development of Space (HEDS) missions are then presented. Advanced space power technologies highlighted include high specific power solar arrays, regenerative fuel cells, Stirling radioisotope power sources, flywheel energy storage and attitude control, lithium ion polymer energy storage, and advanced power management and distribution.

  2. Advanced satellite system architecture for VSATs with ISDN compatibility

    NASA Technical Reports Server (NTRS)

    Jorasch, Ronald E.; Price, Kent M.

    1988-01-01

    This paper describes a future communications satellite system architecture concept which allows the use of Very Small Aperture Terminals (VSATs) of 1.2 m to 1.8 m diameter and which provides access according to the international Integrated Services Digital Network (ISDN) standard. This satellite system design could make dial-up integrated voice and data service available nationwide and perhaps worldwide. The paper gives a conceptual system design based on year-1995 technology for the communications satellite, the earth terminal, and the ground-based master control station and interface to the terrestrial ISDN network.

  3. General purpose architecture for intelligent computer-aided training

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen (Inventor); Wang, Lui (Inventor); Baffes, Paul T. (Inventor); Hua, Grace C. (Inventor)

    1994-01-01

    An intelligent computer-aided training system having a general modular architecture is provided for use in a wide variety of training tasks and environments. It is comprised of a user interface which permits the trainee to access the same information available in the task environment and serves as a means for the trainee to assert actions to the system; a domain expert which is sufficiently intelligent to use the same information available to the trainee and carry out the task assigned to the trainee; a training session manager for examining the assertions made by the domain expert and by the trainee, evaluating the trainee's assertions, and providing guidance appropriate to the trainee's acquired skill level; a trainee model which contains a history of the trainee's interactions with the system together with summary evaluative data; an intelligent training scenario generator for designing increasingly complex training exercises based on the current skill level contained in the trainee model and on any weaknesses or deficiencies that the trainee has exhibited in previous interactions; and a blackboard that provides a common fact base for communication between the other components of the system. Preferably, the domain expert contains a list of 'mal-rules' that typify errors usually made by novice trainees. Also preferably, the training session manager comprises an intelligent error detection means and an intelligent error handling means. The present invention utilizes a rule-based language having a control structure whereby a specific message passing protocol is utilized with respect to tasks which are procedural or step-by-step in structure. The rules can be activated by the trainee in any order to reach the solution by any valid or correct path.
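
    The mal-rule idea amounts to matching a trainee's assertion against a catalog of known novice errors before falling back on generic guidance. A minimal Python sketch, with entirely hypothetical rule contents:

        MAL_RULES = {
            "vent_before_isolation": "Novices often vent the line before isolating it.",
            "skip_checklist_step_3": "Step 3 of the checklist is commonly skipped.",
        }

        def evaluate(assertion, expert_action):
            if assertion == expert_action:
                return "correct"
            if assertion in MAL_RULES:          # targeted guidance
                return "known error: " + MAL_RULES[assertion]
            return "unrecognized action; give generic guidance"

        print(evaluate("vent_before_isolation", "isolate_then_vent"))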

  4. Electro-optic architecture for servicing sensors and actuators in advanced aircraft propulsion systems

    NASA Technical Reports Server (NTRS)

    Poppel, G. L.; Glasheen, W. M.

    1989-01-01

    A detailed design of a fiber optic propulsion control system, integrating favored sensors and electro-optics architecture is presented. Layouts, schematics, and sensor lists describe an advanced fighter engine system model. Components and attributes of candidate fiber optic sensors are identified, and evaluation criteria are used in a trade study resulting in favored sensors for each measurand. System architectural ground rules were applied to accomplish an electro-optics architecture for the favored sensors. A key result was a considerable reduction in signal conductors. Drawings, schematics, specifications, and printed circuit board layouts describe the detailed system design, including application of a planar optical waveguide interface.

  5. Advanced space communications architecture study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Horstein, Michael; Hadinger, Peter J.

    1987-01-01

    The technical feasibility and economic viability of satellite system architectures that are suitable for Customer Premise Service (CPS) communications is investigated. System evaluation is performed at 30/20 GHz (Ka-band); however, the system architectures examined are equally applicable to 14/11 GHz (Ku-band). Emphasis is placed on systems that permit low-cost user terminals. Frequency Division Multiple Access (FDMA) is used on the uplink, with typically 10,000 simultaneous accesses per satellite, each of 64 kbps. Bulk demodulators onboard the satellite, in combination with a baseband multiplexer, convert the many narrowband uplink signals into a small number of wideband data streams for downlink transmission. Single-hop network interconnectivity is accomplished through use of downlink scanning beams. Each satellite is estimated to weigh 5600 lb and consume 6850 W of power; the corresponding payload totals are 1000 lb and 5000 W. Nonrecurring satellite cost is estimated at $110 million, with the first unit cost at $113 million. In large quantities, the user terminal cost estimate is $25,000.
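
    The quoted access figures fix the aggregate uplink rate each satellite must handle; a quick check:

        # 10,000 simultaneous 64-kbps FDMA accesses per satellite:
        accesses, rate_kbps = 10_000, 64
        print(accesses * rate_kbps / 1e3, "Mbps aggregate uplink")  # 640.0 Mbps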

  6. Application of complex macromolecular architectures for advanced microelectronic materials.

    PubMed

    Hedrick, James L; Magbitang, Teddie; Connor, Eric F; Glauser, Thierry; Volksen, Willi; Hawker, Craig J; Lee, Victor Y; Miller, Robert D

    2002-08-01

    The distinctive features of well-defined, three-dimensional macromolecules with topologies designed to enhance solubility and amplify end-group functionality facilitated nanophase morphologies in mixtures with organosilicates and ultimately nanoporous organosilicate networks. Novel macromolecular architectures including dendritic and star-shaped polymers and organic nanoparticles were prepared by a modular approach from several libraries of building blocks, including various generations of dendritic initiators and dendrons, selectively placed to amplify functionality and/or arm number, coupled with living polymerization techniques. Mixtures of an organosilicate and the macromolecular template were deposited and cured, and phase separation of the organic component organized the vitrifying organosilicate into nanostructures. Removal of the sacrificial macromolecular template (also denoted as porogen) by thermolysis yielded the desired nanoporous organosilicate, and the size scale of phase separation was strongly dependent on the chain topology. These materials were designed for use as interlayer, ultra-low dielectric insulators for on-chip applications, with dielectric constant values as low as 1.5. The porogen design and chemistry, and the role of polymer architecture in hybrid and pore morphology, are emphasized. PMID:12203311

  7. Advanced Intestinal Cancers often Maintain a Multi-Ancestral Architecture

    PubMed Central

    Zahm, Christopher D.; Szulczewski, Joseph M.; Leystra, Alyssa A.; Paul Olson, Terrah J.; Clipson, Linda; Albrecht, Dawn M.; Middlebrooks, Malisa; Thliveris, Andrew T.; Matkowskyj, Kristina A.; Washington, Mary Kay; Newton, Michael A.; Eliceiri, Kevin W.; Halberg, Richard B.

    2016-01-01

    A widely accepted paradigm in the field of cancer biology is that solid tumors are uni-ancestral, being derived from a single founder and its descendants. However, data have been steadily accruing that indicate early tumors in mice and humans can have a multi-ancestral origin, in which an initiated primogenitor facilitates the transformation of neighboring co-genitors. We developed a new mouse model that permits the determination of the clonal architecture of intestinal tumors in vivo and ex vivo, validated this model, and then used it to assess the clonal architecture of adenomas, intramucosal carcinomas, and invasive adenocarcinomas of the intestine. The percentage of multi-ancestral tumors did not significantly change as tumors progressed from adenomas with low-grade dysplasia [40/65 (62%)], to adenomas with high-grade dysplasia [21/37 (57%)], to intramucosal carcinomas [10/23 (43%)], to invasive adenocarcinomas [13/19 (68%)], indicating that the clone arising from the primogenitor continues to coexist with clones arising from co-genitors. Moreover, neoplastic cells from distinct clones within a multi-ancestral adenocarcinoma have even been observed to simultaneously invade into the underlying musculature [2/15 (13%)]. Thus, intratumoral heterogeneity arising early in tumor formation persists throughout tumorigenesis. PMID:26919712

  8. MIDEX Advanced Modular and Distributed Spacecraft Avionics Architecture

    NASA Technical Reports Server (NTRS)

    Ruffa, John A.; Castell, Karen; Flatley, Thomas; Lin, Michael

    1998-01-01

    MIDEX (Medium Class Explorer) is the newest line in NASA's Explorer spacecraft development program. As part of the MIDEX charter, the MIDEX spacecraft development team has developed a new modular, distributed, and scaleable spacecraft architecture that pioneers new spaceflight technologies and implementation approaches, all designed to reduce overall spacecraft cost while increasing overall functional capability. This resultant "plug and play" system dramatically decreases the complexity and duration of spacecraft integration and test, providing a basic framework that supports spacecraft modularity and scalability for missions of varying size and complexity. Together, these subsystems form a modular, flexible avionics suite that can be modified and expanded to support low-end and very high-end mission requirements with a minimum of redesign, as well as allowing a smooth, continuous infusion of new technologies as they are developed without redesigning the system. This overall approach has the net benefit of allowing a greater portion of the overall mission budget to be allocated to mission science instead of the spacecraft bus. The MIDEX scaleable architecture is currently being manufactured and tested for use on the Microwave Anisotropy Probe (MAP), an in-house program at GSFC.

  9. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Nam, H.; Stoitsov, M.; Nazarewicz, W.; Bulgac, A.; Hagen, G.; Kortelainen, M.; Maris, P.; Pei, J. C.; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2012-12-20

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. Finally, we illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  10. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation has become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  11. Advanced Scientific Computing Research Network Requirements

    SciTech Connect

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  12. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    ERIC Educational Resources Information Center

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  13. Optical design and characterization of an advanced computational imaging system

    NASA Astrophysics Data System (ADS)

    Shepard, R. Hamilton; Fernandez-Cull, Christy; Raskar, Ramesh; Shi, Boxin; Barsi, Christopher; Zhao, Hang

    2014-09-01

    We describe an advanced computational imaging system with an optical architecture that enables simultaneous and dynamic pupil-plane and image-plane coding accommodating several task-specific applications. We assess the optical requirement trades associated with custom and commercial-off-the-shelf (COTS) optics and converge on the development of two low-cost and robust COTS testbeds. The first is a coded-aperture programmable pixel imager employing a digital micromirror device (DMD) for image plane per-pixel oversampling and spatial super-resolution experiments. The second is a simultaneous pupil-encoded and time-encoded imager employing a DMD for pupil apodization or a deformable mirror for wavefront coding experiments. These two testbeds are built to leverage two MIT Lincoln Laboratory focal plane arrays - an orthogonal transfer CCD with non-uniform pixel sampling and on-chip dithering and a digital readout integrated circuit (DROIC) with advanced on-chip per-pixel processing capabilities. This paper discusses the derivation of optical component requirements, optical design metrics, and performance analyses for the two testbeds built.
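
    The image-plane per-pixel coding idea reduces to modulating the scene with a binary mask (as a DMD would) on each exposure; this toy Python sketch uses arbitrary sizes and random masks purely to show the data flow:

        import random

        H = W = 4
        scene = [[random.random() for _ in range(W)] for _ in range(H)]
        frames = []
        for _ in range(3):                      # three coded exposures
            mask = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]
            coded = [[scene[r][c] * mask[r][c] for c in range(W)] for r in range(H)]
            frames.append((mask, coded))
        print(len(frames), "coded frames captured")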

  14. 76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  15. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims, these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about $70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  16. 75 FR 57742 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Scientific Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  17. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    ERIC Educational Resources Information Center

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  18. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... final report, Advanced Networking update Status from Computer Science COV Early Career technical talks Summary of Applied Math and Computer Science Workshops ASCR's new SBIR awards Data-intensive Science... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science....

  19. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 1: Army fault tolerant architecture overview

    NASA Technical Reports Server (NTRS)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles may be characterized by high computational throughput and input/output bandwidth, hard real-time response, high reliability and availability, and maintainability, testability, and producibility requirements. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprised of a conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development. Given here is an introduction to the AFTA program, its objectives, and key elements of its technical approach. A format is designed for representing mission requirements in a manner suitable for first order AFTA sizing and analysis, followed by a discussion of the current state of mission requirements acquisition for the targeted Army missions. An overview is given of AFTA's architectural theory of operation.

  20. FY04 Advanced Life Support Architecture and Technology Studies: Mid-Year Presentation

    NASA Technical Reports Server (NTRS)

    Lange, Kevin; Anderson, Molly; Duffield, Bruce; Hanford, Tony; Jeng, Frank

    2004-01-01

    Long-Term Objective: Identify optimal advanced life support system designs that meet existing and projected requirements for future human spaceflight missions. a) Include failure-tolerance, reliability, and safe-haven requirements. b) Compare designs based on multiple criteria including equivalent system mass (ESM), technology readiness level (TRL), simplicity, commonality, etc. c) Develop and evaluate new, more optimal, architecture concepts and technology applications.
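
    Equivalent system mass, the first comparison criterion named above, folds volume, power, cooling, and crew time into a single mass-like figure of merit. In its commonly used form (symbols follow the usual ALS convention; the presentation's exact formulation is not shown here):

        ESM = M + V \cdot V_{eq} + P \cdot P_{eq} + C \cdot C_{eq} + CT \cdot D \cdot CT_{eq}

    where M is hardware mass, V volume, P power, C cooling, CT crew time per unit time, D mission duration, and the eq-subscripted factors are mission-specific mass-equivalence coefficients.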

  1. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    PubMed

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. PMID:26878722
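
    For contrast with QPF, the classical perceptron it generalizes can be stated in a few lines of Python (the standard textbook update rule; nothing quantum is simulated here, consistent with the abstract's caveat):

        def train_perceptron(samples, epochs=10, lr=1.0):
            n = len(samples[0][0])
            w, b = [0.0] * n, 0.0
            for _ in range(epochs):
                for x, y in samples:        # labels y in {-1, +1}
                    if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                        b += lr * y
            return w, b

        # Linearly separable toy data (logical AND with -1/+1 labels).
        data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
        print(train_perceptron(data))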

  2. An Advanced Electrospinning Method of Fabricating Nanofibrous Patterned Architectures with Controlled Deposition and Desired Alignment

    NASA Astrophysics Data System (ADS)

    Rasel, Sheikh Md

    containing 0, 5, 10, and 20 wt % of fillers. Morphological analyses carried out by digital optical microscope, scanning electron microscopy, x-ray computed tomography, and Fourier transform infrared spectroscopy confirmed the presence and uniform dispersion of fillers in the composites. In addition, the improvement of mechanical properties with increased filler content further emphasized the adhesion between matrix and reinforcement. PVA with 20 wt % wollastonite composite exhibited the highest tensile strength (11.99 MPa) and tensile modulus (198 MPa) as compared to pure PVA (3.92 MPa and 83 MPa, respectively). Moreover, the thermal tests demonstrated that there is no major deviation in thermal stability due to the addition of wollastonite to PVA scaffolds. An almost identical trend was observed in PVA/wood flour nanocomposites, where tensile strength improved by 228% for 20 wt % of reinforcement. The PVA/wollastonite and PVA/wood flour fibrous nanocomposites, which possess higher mechanical properties, might be potentially suitable for many advanced applications such as filtration, tissue engineering, and food processing. We believe this study will contribute to further scientific understanding of the patterning mechanism of electrospun nanofibers and allow for a variety of designs of specific patterned nanofibrous architectures with desired functional properties. Therefore, this improved scheme of electrospinning can have significant impact in a broad range of applications including tissue engineering scaffolds, filtration, and nanoelectronics.
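
    The improvement percentages quoted above follow the usual (new - baseline) / baseline convention, which can be checked against the wollastonite numbers given:

        baseline, reinforced = 3.92, 11.99   # MPa: pure PVA vs. 20 wt % wollastonite
        print(round((reinforced - baseline) / baseline * 100), "% improvement")  # ~206 %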

  3. An efficient FPGA architecture for integer Nth root computation

    NASA Astrophysics Data System (ADS)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation for the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N, using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of the corresponding software implementation by one order of magnitude. The architecture's performance varies from several thousand to seven million root operations per second.
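
    A software reference model for the floor of the integer Nth root clarifies what the hardware computes. The bit-by-bit search below mirrors the digit-recurrence style of such designs, though it uses Python's ** operator for the trial power, which the paper's adder/subtractor-only architecture specifically avoids:

        def nth_root(x, n):
            """Largest r with r**n <= x, found one result bit at a time."""
            assert x >= 0 and n >= 1
            r = 0
            for shift in range(x.bit_length() // n, -1, -1):
                candidate = r | (1 << shift)   # tentatively set this bit
                if candidate ** n <= x:
                    r = candidate              # keep the bit
            return r

        print(nth_root(1000, 3))               # 10
        assert nth_root(2**64 - 1, 5) ** 5 <= 2**64 - 1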

  4. Proceedings: Workshop on advanced mathematics and computer science for power systems analysis

    SciTech Connect

    Esselman, W.H.; Iveson, R.H.

    1991-08-01

    The Mathematics and Computer Workshop on Power System Analysis was held February 21--22, 1989, in Palo Alto, California. The workshop was the first in a series sponsored by EPRI's Office of Exploratory Research as part of its effort to develop ways in which recent advances in mathematics and computer science can be applied to the problems of the electric utility industry. The purpose of this workshop was to identify research objectives in the field of advanced computational algorithms needed for the application of advanced parallel processing architecture to problems of power system control and operation. Approximately 35 participants heard six presentations on power flow problems, transient stability, power system control, electromagnetic transients, user-machine interfaces, and database management. In the discussions that followed, participants identified five areas warranting further investigation: system load flow analysis, transient power and voltage analysis, structural instability and bifurcation, control systems design, and proximity to instability. 63 refs.

  5. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    ... Update from Committee of Visitors for Computer Science activities Facilities update including early science efforts ] Early Career technical talks Recompetition results for Scientific Discovery through.../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy....

  6. Developing an Advanced Environment for Collaborative Computing

    NASA Technical Reports Server (NTRS)

    Becerra-Fernandez, Irma; Stewart, Helen; DelAlto, Martha; DelAlto, Martha; Knight, Chris

    1999-01-01

    Knowledge management, in general, tries to organize and make available important know-how, whenever and wherever it is needed. Today, organizations rely on decision-makers to produce "mission critical" decisions that are based on inputs from multiple domains. The ideal decision-maker has a profound understanding of specific domains that influence the decision-making process, coupled with the experience that allows them to act quickly and decisively on the information. In addition, learning companies benefit by not repeating costly mistakes and by reducing time-to-market in Research & Development projects. Group decision-making tools can help companies make better decisions by capturing the knowledge from groups of experts. Furthermore, companies that capture their customers' preferences can improve their customer service, which translates to larger profits. Collaborative computing therefore provides a common communication space, improves sharing of knowledge, provides a mechanism for real-time feedback on the tasks being performed, helps to optimize processes, and results in a centralized knowledge warehouse. This paper presents the research directions of a project which seeks to augment an advanced collaborative web-based environment called Postdoc with workflow capabilities. Postdoc is a "government-off-the-shelf" document management software package developed at NASA-Ames Research Center (ARC).

  7. Advanced Information Processing System (AIPS)-based fault tolerant avionics architecture for launch vehicles

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.

    1990-01-01

    An avionics architecture for the advanced launch system (ALS) that uses validated hardware and software building blocks developed under the advanced information processing system program is presented. The AIPS for ALS architecture defined here is preliminary, and its reliability requirements can be met by AIPS hardware and software building blocks built using the state-of-the-art technology available in the 1992-93 time frame. The level of detail in the architecture definition reflects the level of detail available in the ALS requirements. As the avionics requirements are refined, the architecture can also be refined and defined in greater detail with the help of analysis and simulation tools. A useful methodology is demonstrated for investigating the impact of the avionics suite on the recurring cost of the ALS. It is shown that allowing the vehicle to launch with selected detected failures can potentially reduce the recurring launch costs. A comparative analysis shows that validated fault-tolerant avionics built out of Class B parts can result in lower life-cycle cost than simplex avionics built out of Class S parts or other redundant architectures.

  8. Advanced Information Processing System (AIPS)-based fault tolerant avionics architecture for launch vehicles

    NASA Astrophysics Data System (ADS)

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.

    An avionics architecture for the advanced launch system (ALS) that uses validated hardware and software building blocks developed under the advanced information processing system program is presented. The AIPS for ALS architecture defined here is preliminary, and its reliability requirements can be met by AIPS hardware and software building blocks built using the state-of-the-art technology available in the 1992-93 time frame. The level of detail in the architecture definition reflects the level of detail available in the ALS requirements. As the avionics requirements are refined, the architecture can also be refined and defined in greater detail with the help of analysis and simulation tools. A useful methodology is demonstrated for investigating the impact of the avionics suite on the recurring cost of the ALS. It is shown that allowing the vehicle to launch with selected detected failures can potentially reduce the recurring launch costs. A comparative analysis shows that validated fault-tolerant avionics built out of Class B parts can result in lower life-cycle cost than simplex avionics built out of Class S parts or other redundant architectures.

  9. Memory intensive functional architecture for distributed computer control systems

    SciTech Connect

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture, which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation: a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  10. Architectural issues in fault-tolerant, secure computing systems

    SciTech Connect

    Joseph, M.K.

    1988-01-01

    This dissertation explores several facets of the applicability of fault-tolerance techniques to secure computer design: (1) how fault-tolerance techniques can be used on unsolved problems in computer security (e.g., computer viruses and denial of service); (2) how fault-tolerance techniques can be used to support classical computer-security mechanisms in the presence of accidental and deliberate faults; and (3) the problems involved in designing a fault-tolerant, secure computer system (e.g., how computer security can degrade along with both the computational and fault-tolerance capabilities of a computer system). The approach taken in this research is almost as important as its results. It differs from current computer-security research in that a design paradigm for fault-tolerant computer design is used. This led to an extensive fault and error classification of many typical security threats. Throughout this work, a fault-tolerance perspective is taken. However, the author did not ignore basic computer-security technology. For some problems he investigated how to support and extend basic security mechanisms (e.g., the trusted computing base) instead of trying to achieve the same result with purely fault-tolerance techniques.

  11. Reference Architecture for High Dependability On-Board Computers

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Esper, Alexandre; Zandin, Johan; Barbosa, Ricardo; Monteleone, Claudio

    2014-08-01

    The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g., a computer equipment unit or a set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed, and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers, with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting, and assessing the dependability of on-board computer hardware and software throughout their life cycle. The study also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete life cycle, including an assessment of the feasibility aspects of the dependability assurance process and of how a computer-aided environment can contribute to on-board computer dependability assurance.

  12. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models, and waste management.

  13. A computationally efficient particle-simulation method suited to vector-computer architectures

    SciTech Connect

    McDonald, J.D.

    1990-01-01

    Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of these methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and restructures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as single- and multiple-species shock wave profiles, are presented. Additionally, a large-scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.
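
    For readers unfamiliar with DSMC, the collision-selection step that the thesis vectorizes looks, in its scalar textbook form, like the acceptance-rejection loop below (Bird's no-time-counter scheme with assumed hard-sphere physics; parameter names are ours, and this is not the thesis's reformulation):

        import numpy as np
        rng = np.random.default_rng(0)

        def collide_cell(v, f_num, dt, vol, sigma, cr_max):
            """One DSMC collision step in a cell; v is an (N, 3) velocity array."""
            n = len(v)
            # Candidate pairs per cell per step (no-time-counter estimate).
            n_cand = int(0.5 * n * (n - 1) * f_num * sigma * cr_max * dt / vol)
            for _ in range(n_cand):
                i, j = rng.choice(n, size=2, replace=False)
                cr = np.linalg.norm(v[i] - v[j])
                if rng.random() < cr / cr_max:        # acceptance-rejection test
                    vcm = 0.5 * (v[i] + v[j])         # center-of-mass velocity
                    cos_t = 2.0 * rng.random() - 1.0  # isotropic post-collision
                    sin_t = np.sqrt(1.0 - cos_t**2)   # direction (hard sphere)
                    phi = 2.0 * np.pi * rng.random()
                    rel = cr * np.array([sin_t * np.cos(phi),
                                         sin_t * np.sin(phi), cos_t])
                    v[i], v[j] = vcm + 0.5 * rel, vcm - 0.5 * rel
            return v

    The data-dependent branch inside the loop is exactly the kind of structure that defeats vectorization, which is why the collision selection step required reformulation.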

  14. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1995-04-01

    Advanced mathematical techniques and computer simulation play a major role in providing enhanced understanding of conventional and advanced materials processing operations. Development and application of mathematical models and computer simulation techniques can provide a quantitative understanding of materials processes and will minimize the need for expensive and time-consuming trial-and-error-based product development. As computer simulations and materials databases grow in complexity, high-performance computing and simulation are expected to play a key role in supporting the improvements required in advanced materials synthesis and processing by lessening the dependence on expensive prototyping and re-tooling. Many of these numerical models are highly compute-intensive. It is not unusual for an analysis to require several hours of computational time on current supercomputers despite the simplicity of the models being studied. For example, to accurately simulate the heat transfer in a 1-m{sup 3} block using a simple computational method requires 10{sup 12} arithmetic operations per second of simulated time. For a computer to do the simulation in real time would require a sustained computation rate 1000 times faster than that achievable by current supercomputers. Massively parallel computer systems, which combine several thousand processors able to operate concurrently on a problem, are expected to provide orders-of-magnitude increases in performance. This paper briefly describes advanced computational research in materials processing at ORNL. Continued development of computational techniques and algorithms utilizing massively parallel computers will allow the simulation of conventional and advanced materials processes in sufficient generality.

  15. Advanced flight computers for planetary exploration

    NASA Astrophysics Data System (ADS)

    Stephenson, R. Rhoads

    Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

  16. Advanced flight computers for planetary exploration

    NASA Technical Reports Server (NTRS)

    Stephenson, R. Rhoads

    1988-01-01

    Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

  17. On-Board Computing Subsystem for MIRAX: Architectural and Interface Aspects

    SciTech Connect

    Santiago, Valdivino

    2006-06-09

    This paper presents proposals for the architecture of, and the interfaces among, the different types of processing units of the MIRAX on-board computing subsystem. The MIRAX satellite payload is composed of dedicated computers, two hard X-ray cameras, and one soft X-ray camera (the WFC flight spare unit from the BeppoSAX satellite). The architectures for the on-board computing subsystem take into account hardware and software solutions for the event preprocessing for the CdZnTe detectors. Hardware and software interface approaches are shown, and requirements for on-board memory storage and telemetry are also addressed.

  18. Architectural concepts and redundancy techniques in fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1974-01-01

    This paper presents a description of redundancy techniques employed in the design of fault-tolerant computers, and a discussion of the effects of functional requirements, technology constraints, and cost considerations which enter into the choice of these techniques. The STAR computer, developed at the Jet Propulsion Laboratory for long-duration planetary spacecraft missions, is discussed along with several later fault-tolerant computer designs. The class of computers described in this paper employs dynamic redundancy, i.e., the machine is divided into a set of submodules, each with standby spares; a special hard core monitor unit detects and diagnoses faults, and effects automated recovery by replacing failed parts.

  19. Opportunities for X-ray Science in Future Computing Architectures

    SciTech Connect

    Foster, Ian

    2011-02-09

    The world of computing continues to evolve rapidly. In just the past 10 years, we have seen the emergence of petascale supercomputing, cloud computing that provides on-demand computing and storage with considerable economies of scale, software-as-a-service methods that permit outsourcing of complex processes, and grid computing that enables federation of resources across institutional boundaries. These trends show no sign of slowing down. The next 10 years will surely see exascale systems, new cloud offerings, and terabit networks. This talk reviews several of these developments and discusses their potential implications for x-ray science and x-ray facilities.

  20. Spin-qubit inspired architectures for superconducting quantum computing

    NASA Astrophysics Data System (ADS)

    Shim, Yun-Pil; Tahan, Charles

    2015-03-01

    In recent years, the superconducting qubit community has achieved single and two-qubit benchmarked gate fidelities approaching 99.9%, fast readout with novel superconducting amplifiers, distributed entanglement, and other milestones on the road to fault-tolerant quantum information processing. Obviously, this is a field that could use some help from the semiconductor qubit community! Here we present theoretical work on superconducting qubit systems inspired by our experience with semiconductor qubits. We discuss initialization, single- and two-qubit gate operations, and measurement schemes for an encoded qubit in a two-dimensional architecture. Our results motivate new ways of designing or operating superconducting quantum information processors.

  1. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... Workshop on Mathematics for the Analysis, Simulation, and Optimization of Complex Systems Report from ASCR..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of Energy... Department of Energy on scientific priorities within the field of advanced scientific computing...

  2. A Survey and Evaluation of Simulators Suitable for Teaching Courses in Computer Architecture and Organization

    ERIC Educational Resources Information Center

    Nikolic, B.; Radivojevic, Z.; Djordjevic, J.; Milutinovic, V.

    2009-01-01

    Courses in Computer Architecture and Organization are regularly included in Computer Engineering curricula. These courses are usually organized in such a way that students obtain not only a purely theoretical experience, but also a practical understanding of the topics lectured. This practical work is usually done in a laboratory using simulators…

  3. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    ERIC Educational Resources Information Center

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  4. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture

    ERIC Educational Resources Information Center

    Kellett, C. M.

    2012-01-01

    This paper describes a course in programmable logic design and computer architecture as it is taught at the University of Newcastle, Australia. The course is designed around a major design project and has two supplemental assessment tasks that are also described. The context of the Computer Engineering degree program within which the course is…

  5. A learnable parallel processing architecture towards unity of memory and computing

    PubMed Central

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area. PMID:26271243
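
    The core trick - arithmetic happening where the data is stored - can be seen in a few lines. In a resistive crossbar, voltages applied to the rows produce column currents that sum according to Ohm's and Kirchhoff's laws, so a matrix-vector product falls out of the device physics in one step. The numbers below are invented for illustration:

        import numpy as np

        # Conductances of the resistive devices, one per row-column crosspoint.
        G = np.array([[1.0, 0.2],
                      [0.3, 0.8],
                      [0.5, 0.1]])        # siemens; 3 rows x 2 columns
        v = np.array([0.10, 0.20, 0.05])  # input voltages on the 3 rows

        # Each column wire sums I_j = sum_k G[k, j] * v[k] -- computed "in memory".
        i_out = G.T @ v
        print(i_out)                      # column currents, in amperes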

  6. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  7. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area. PMID:26271243

  8. Some Recent Advances in Computer Graphics.

    ERIC Educational Resources Information Center

    Whitted, Turner

    1982-01-01

    General principles of computer graphics are reviewed, including discussions of display hardware, geometric modeling, algorithms, and applications in science, computer-aided design, flight training, communications, business, art, and entertainment. (JN)

  9. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2-D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometer-based wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other all-to-all communication problems and computationally complex scientific problems.
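
    The all-to-all requirement follows from the standard row-column decomposition of the 2-D FFT: each node can transform the rows it owns locally, but the column pass needs a global transpose. A single-process NumPy illustration of the decomposition (our sketch, not the flight implementation):

        import numpy as np

        def fft2_row_column(a):
            rows = np.fft.fft(a, axis=1)    # pass 1: 1-D FFTs along rows (local)
            t = rows.T                      # transpose: the all-to-all step on a cluster
            return np.fft.fft(t, axis=1).T  # pass 2: 1-D FFTs along former columns

        a = np.random.rand(8, 8)
        assert np.allclose(fft2_row_column(a), np.fft.fft2(a))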

  10. Advanced Design and Implementation of a Control Architecture for Long Range Autonomous Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.

    1999-01-01

    An advanced design and implementation of a control architecture for long-range autonomous planetary rovers is presented using hierarchical top-down task decomposition, with the common structure of each design level based on feedback control theory. Graphical programming is put forward as a common, intuitive design language when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The whole design of the control architecture rests on the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool that includes the mentioned control capabilities is presented. Message queues are used for intercommunication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on the JPL micro-rover Rocky 7, controlling several robotic devices simultaneously. This paper validates the synergy between artificial intelligence and classic control concepts in an advanced control architecture for long-range autonomous planetary rovers.

  11. Computing Advances in the Teaching of Chemistry.

    ERIC Educational Resources Information Center

    Baskett, W. P.; Matthews, G. P.

    1984-01-01

    Discusses three trends in computer-oriented chemistry instruction: (1) availability of interfaces to integrate computers with experiments; (2) impact of the development of higher resolution graphics and greater memory capacity; and (3) role of videodisc technology on computer assisted instruction. Includes program listings for auto-titration and…

  12. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  13. NASA's Advanced Multimission Operations System: A Case Study in Formalizing Software Architecture Evolution

    NASA Technical Reports Server (NTRS)

    Barnes, Jeffrey M.

    2011-01-01

    All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.

  14. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  15. Arranging computer architectures to create higher-performance controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1988-01-01

    Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.

  16. Preliminary design and implementation of the baseline digital baseband architecture for advanced deep space transponders

    NASA Technical Reports Server (NTRS)

    Nguyen, T. M.; Yeh, H.-G.

    1993-01-01

    The baseline design and implementation of the digital baseband architecture for advanced deep space transponders is investigated and identified. Trade studies on the selection of the number of bits for the analog-to-digital converter (ADC) and on optimum sampling schemes are presented. In addition, the proposed optimum sampling scheme is analyzed in detail. Possible implementations of the digital baseband (or digital front end) and of the digital phase-locked loop (DPLL) for carrier tracking are also described.
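
    As background on the last item: a DPLL closes a loop between a phase detector and a local oscillator, nudging the local phase toward the incoming carrier's phase on each sample. A deliberately simplified first-order loop at complex baseband (a toy of ours, not the transponder design) shows the mechanics:

        import numpy as np

        fs, fc, kp = 10_000.0, 1_000.0, 0.1   # sample rate, carrier, loop gain
        n = np.arange(2000)
        rx = np.exp(1j * (2 * np.pi * fc * n / fs + 0.7))  # unknown phase: 0.7 rad
        phase = 0.0
        for k in n:
            nco = np.exp(1j * (2 * np.pi * fc * k / fs + phase))  # local oscillator
            err = np.angle(rx[k] * np.conj(nco))  # phase detector: ~(0.7 - phase)
            phase += kp * err                     # first-order loop filter
        # phase has converged to ~0.7 rad after a few dozen samples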

  17. Computer-Aided Design of Organic Host Architectures for Selective Chemosensors

    SciTech Connect

    Hay, Benjamin; Bryantsev, Vyacheslav S.

    2009-01-01

    Selective organic hosts provide the foundation for the development of many types of sensors. The deliberate design of host molecules with predetermined selectivity, however, remains a challenge in supramolecular chemistry. To address this issue, we have developed a de novo structure-based design approach for the unbiased construction of complementary host architectures. This chapter summarizes recent progress, including improvements to HostDesigner, a computer software program specifically tailored to discover host architectures for small guest molecules. HostDesigner is capable of generating and evaluating millions of candidate structures in minutes on a desktop personal computer, allowing a user to rapidly identify three-dimensional architectures that are structurally organized for binding a targeted guest species. The efficacy of this computational methodology is illustrated with a search for cation hosts containing aliphatic ether oxygen groups and anion hosts containing urea groups.

  18. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and by using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, i.e., 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
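
    The HGW algorithm earns its constant cost per sample by cutting the signal into blocks of length k and precomputing block-local running maxima from the left (g) and from the right (h); any length-k window then spans at most one block boundary, so its maximum is the max of one h value and one g value. A compact software reference (plain NumPy, without the paper's hardware optimizations):

        import numpy as np

        def max_filter_1d_hgw(x, k):
            """Maxima of all length-k windows, ~3 comparisons per sample."""
            n = len(x)
            pad = (-n) % k                       # pad to a whole number of blocks
            x = np.concatenate([np.asarray(x, float), np.full(pad, -np.inf)])
            g, h = x.copy(), x.copy()
            for i in range(1, len(x)):           # g: running max within each block
                if i % k:
                    g[i] = max(g[i], g[i - 1])
            for i in range(len(x) - 2, -1, -1):  # h: reverse running max per block
                if (i + 1) % k:
                    h[i] = max(h[i], h[i + 1])
            return np.array([max(h[j], g[j + k - 1]) for j in range(n - k + 1)])

        assert list(max_filter_1d_hgw([1, 3, 2, 5, 0, 4], 3)) == [3, 5, 5, 5]

    A 2-D k × k filter is then two passes of this 1-D filter, one along rows and one along columns, which is the kernel decomposition the abstract mentions.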

  19. Resource efficient hardware architecture for fast computation of running max/min filters.

    PubMed

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k(2) - 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and by using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, i.e., 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456

  20. A Survey of Architectural Techniques for Near-Threshold Computing

DOE PAGES Beta

    Mittal, Sparsh

    2015-12-28

    Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing many-fold improvements in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate, and performance loss. Our paper surveys several recent techniques which aim to offset these challenges and fully leverage the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.

  1. A Survey of Architectural Techniques for Near-Threshold Computing

    SciTech Connect

    Mittal, Sparsh

    2015-12-28

    Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing many-fold improvements in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate, and performance loss. Our paper surveys several recent techniques which aim to offset these challenges and fully leverage the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.

  2. Advancing crime scene computer forensics techniques

    NASA Astrophysics Data System (ADS)

    Hosmer, Chet; Feldman, John; Giordano, Joe

    1999-02-01

    Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing 'traditional crimes' such as embezzlement, theft, extortion, and murder. This paper focuses on reviewing the current state of the art in the data recovery and evidence construction tools used in both the field and the laboratory for prosecution purposes.

  3. A language comparison for scientific computing on MIMD architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.

    1989-01-01

    Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES, and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer Flex/32.
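
    For reference, the serial kernel being parallelized is short. A dense (unbanded, for brevity) Choleski factorization sketch in Python shows the column-by-column data dependencies that each of the three languages must express across processors:

        import numpy as np

        def cholesky_lower(a):
            """Lower-triangular L with L @ L.T == a, for symmetric positive definite a."""
            n = a.shape[0]
            l = np.zeros_like(a, dtype=float)
            for j in range(n):                # columns must complete in order...
                l[j, j] = np.sqrt(a[j, j] - l[j, :j] @ l[j, :j])
                for i in range(j + 1, n):     # ...but the rows below are parallel
                    l[i, j] = (a[i, j] - l[i, :j] @ l[j, :j]) / l[j, j]
            return l

        a = np.array([[4.0, 2.0], [2.0, 3.0]])
        assert np.allclose(cholesky_lower(a) @ cholesky_lower(a).T, a)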

  4. A multitasking finite state architecture for computer control of an electric powertrain

    SciTech Connect

    Burba, J.C.

    1984-01-01

    Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
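
    A toy rendering of the idea (invented states and tasks, not the paper's powertrain strategy): the machine's current state selects which tasks the time-sliced scheduler runs, and events drive the state transitions:

        # State table: tasks enabled in each state, plus event-driven transitions.
        STATES = {
            "idle":  {"tasks": ["monitor"],           "on": {"key_on": "drive"}},
            "drive": {"tasks": ["monitor", "torque"], "on": {"fault": "limp"}},
            "limp":  {"tasks": ["monitor"],           "on": {"reset": "idle"}},
        }

        def time_slice(state, event, run_task):
            """Run one scheduler slice, then apply any pending event."""
            for task in STATES[state]["tasks"]:
                run_task(task)
            return STATES[state]["on"].get(event, state)

        state = "idle"
        for ev in ["key_on", None, "fault", "reset"]:
            state = time_slice(state, ev, lambda t: None)
        assert state == "idle"   # idle -> drive -> drive -> limp -> idle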

  5. COMPUTER ARCHITECTURE FOR RESEARCH IN METEOROLOGY AND ATMOSPHERIC CHEMISTRY

    EPA Science Inventory

    The study examines the feasibility of constructing a peripheral hardware module that could be attached to a mini or midsized computer to accelerate the execution of large air pollution models, such as the EPA's Regional Oxidant Model (ROM). Crucial information necessary to design...

  6. CSP: A Multifaceted Hybrid Architecture for Space Computing

    NASA Technical Reports Server (NTRS)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  7. A fast algorithm for parallel computation of multibody dynamics on MIMD parallel architectures

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory; Bagherzadeh, Nader

    1993-01-01

    In this paper the implementation of a parallel O(LogN) algorithm for computation of rigid multibody dynamics on a Hypercube MIMD parallel architecture is presented. To our knowledge, this is the first algorithm that achieves the time lower bound of O(LogN) by using an optimal number of O(N) processors. However, in addition to its theoretical significance, the algorithm is also highly efficient for practical implementation on commercially available MIMD parallel architectures due to its highly coarse grain size and simple communication and synchronization requirements. We present a multilevel parallel computation strategy for implementation of the algorithm on a Hypercube. This strategy allows the exploitation of parallelism at several computational levels as well as maximum overlapping of computation and communication to increase the performance of parallel computation.
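
    The O(log N) depth comes from tree-structured combining: with a processor per pair, N partial results collapse in ceil(log2 N) rounds. The toy reduction below shows only that shape (the paper's recursive multibody dynamics algorithm is far more involved):

        import math

        def tree_reduce(values, op=lambda a, b: a + b):
            """Pairwise-combine a list; each while-iteration is one parallel round."""
            x, rounds = list(values), 0
            while len(x) > 1:
                x = [op(x[i], x[i + 1]) if i + 1 < len(x) else x[i]
                     for i in range(0, len(x), 2)]
                rounds += 1
            return x[0], rounds

        total, rounds = tree_reduce(range(10))
        assert total == 45 and rounds == math.ceil(math.log2(10))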

  8. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of reliability analysis for distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: data processing, storage, transmission, and protection. The paper presents a scheme of distributed computer system functioning, represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another, depending on the prevailing conditions. In the reliability analysis we consider reliability indicators such as the probability of the system's transition into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows us to obtain relations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of the random variables or the number of elements in the system.
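
    One standard way to turn such a state graph into numbers (a general illustration, not necessarily the authors' formulation) is to treat it as a continuous-time Markov chain: the transition intensities fill a generator matrix Q whose rows sum to zero, and the long-run state probabilities solve pi Q = 0 with pi summing to 1. A three-state toy with invented intensities (per hour):

        import numpy as np

        #              work     stop    accident
        Q = np.array([[-0.020,  0.015,  0.005],   # work: leaves at 0.02/h total
                      [ 0.500, -0.500,  0.000],   # stop: restored at 0.5/h
                      [ 0.100,  0.000, -0.100]])  # accident: repaired at 0.1/h

        # Solve pi @ Q = 0 together with the normalization sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi = np.linalg.lstsq(A, b, rcond=None)[0]
        print(pi)   # long-run probabilities of the work, stop, and accident states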

  9. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  10. Transonic wing analysis using advanced computational methods

    NASA Technical Reports Server (NTRS)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    This paper discusses the application of three-dimensional computational transonic flow methods to several different types of transport wing designs. The purpose of these applications is to evaluate the basic accuracy and limitations associated with such numerical methods. The use of such computational methods for practical engineering problems can only be justified after favorable evaluations are completed. The paper summarizes a study of both the small-disturbance and the full potential technique for computing three-dimensional transonic flows. Computed three-dimensional results are compared to both experimental measurements and theoretical results. Comparisons are made not only of pressure distributions but also of lift and drag forces. Transonic drag rise characteristics are compared. Three-dimensional pressure distributions and aerodynamic forces, computed from the full potential solution, compare reasonably well with experimental results for a wide range of configurations and flow conditions.

  11. Application of advanced computational technology to propulsion CFD

    NASA Astrophysics Data System (ADS)

    Szuch, John R.

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid dynamics (ICFM) to a state of practical application for aerospace propulsion system design. This paper presents an overview of efforts underway at NASA Lewis to advance and apply computational technology to ICFM. These efforts include the use of modern, software engineering principles for code development, the development of an AI-based user-interface for large codes, the establishment of a high-performance, data communications network to link ICFM researchers and facilities, and the application of parallel processing to speed up computationally intensive and/or time-critical ICFM problems. A multistage compressor flow physics program is cited as an example of efforts to use advanced computational technology to enhance a current NASA Lewis ICFM research program.

  12. Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2012-01-01

    A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, or synchronous/asynchronous, and could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, bio-inspired models of visual-pathway and olfactory-receptor processing are combined as processing components to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual and odorant data.

  13. Algorithmic and architectural optimizations for computationally efficient particle filtering.

    PubMed

    Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama

    2008-05-01

    In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler that is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
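
    For orientation, a bootstrap particle filter is a propagate-weight-resample loop; the multinomial resampling on the last line of the loop below is the serial bottleneck that the paper replaces with an Independent Metropolis-Hastings sampler. The model and constants here are invented for illustration:

        import numpy as np
        rng = np.random.default_rng(1)

        def bootstrap_pf(y, n_particles=500, q=1.0, r=1.0):
            """Track x_t = x_{t-1} + N(0, q) from observations y_t = x_t + N(0, r)."""
            x = rng.normal(0.0, 1.0, n_particles)
            estimates = []
            for yt in y:
                x = x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
                w = np.exp(-0.5 * (yt - x) ** 2 / r)              # weight
                w /= w.sum()
                estimates.append(w @ x)                           # posterior mean
                x = rng.choice(x, size=n_particles, p=w)          # resample
            return np.array(estimates)

        truth = np.cumsum(rng.normal(0.0, 1.0, 50))
        y = truth + rng.normal(0.0, 1.0, 50)
        est = bootstrap_pf(y)   # est tracks truth to within the observation noise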

  14. Advanced Scientific Computing Environment Team new scientific database management task

    SciTech Connect

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and with NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database management system(s) permitting the use of relational, hierarchical, object-oriented, GIS, and other databases. Reaching this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include conversion from the existing JOSHUA system to a new J80 system that complies with modern language standards; development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms; development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  15. Computing Algorithms for Nuffield Advanced Physics.

    ERIC Educational Resources Information Center

    Summers, M. K.

    1978-01-01

    Defines all recurrence relations used in the Nuffield course to solve first- and second-order differential equations and describes a typical algorithm for computer generation of solutions. (Author/GA)
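
    Recurrences of that kind reduce a differential equation to repeated arithmetic that a computer (or a student) can iterate. For a second-order equation such as simple harmonic motion x'' = -(k/m)x, a typical classroom recurrence (Euler-Cromer form; the constants here are our own, not from the course materials) is:

        def shm(x0=1.0, v0=0.0, k=1.0, m=1.0, dt=0.01, steps=1000):
            """Iterate v[n+1] = v[n] - (k/m)*x[n]*dt, then x[n+1] = x[n] + v[n+1]*dt."""
            x, v, xs = x0, v0, []
            for _ in range(steps):
                v += -(k / m) * x * dt
                x += v * dt
                xs.append(x)
            return xs

        # 1000 steps of dt = 0.01 cover about 1.6 periods of the unit oscillator.
        print(shm()[-1])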

  16. Advanced Crew Personal Support Computer (CPSC) task

    NASA Technical Reports Server (NTRS)

    Muratore, Debra

    1991-01-01

    The topics are presented in viewgraph form and include: background; objectives of the task; benefits to the Space Station Freedom (SSF) Program; technical approach; baseline integration; and growth and evolution options. The objectives are to: (1) introduce new computer technology into the SSF Program; (2) augment core computer capabilities to meet additional mission requirements; (3) minimize risk in upgrading technology; and (4) provide a low-cost way to enhance crew and ground operations support.

  17. Frontiers of research in advanced computations

    SciTech Connect

    1996-07-01

    The principal mission of the Institute for Scientific Computing Research is to foster interactions among LLNL researchers, universities, and industry on selected topics in scientific computing. In the area of computational physics, the Institute has developed a new algorithm, GaPH, to help scientists understand the chemistry of turbulent and driven plasmas or gases at far less cost than other methods. New low-frequency electromagnetic models better describe the plasma etching and deposition characteristics of a computer chip in the making. A new method for modeling realistic curved boundaries within an orthogonal mesh is resulting in a better understanding of the physics associated with such boundaries and much quicker solutions. All these capabilities are being developed for massively parallel implementation, which is an ongoing focus of Institute researchers. Other groups within the Institute are developing novel computational methods to address a range of other problems. Examples include feature detection and motion recognition by computer, improved monitoring of blood oxygen levels, and entirely new models of human joint mechanics and prosthetic devices.

  18. Advances in computing, and their impact on scientific computing.

    PubMed

    Giles, Mike

    2002-01-01

    This paper begins by discussing the developments and trends in computer hardware, starting with the basic components (microprocessors, memory, disks, system interconnect, networking and visualization) before looking at complete systems (death of vector supercomputing, slow demise of large shared-memory systems, rapid growth in very large clusters of PCs). It then considers the software side, the relative maturity of shared-memory (OpenMP) and distributed-memory (MPI) programming environments, and new developments in 'grid computing'. Finally, it touches on the increasing importance of software packages in scientific computing, and the increased importance and difficulty of introducing good software engineering practices into very large academic software development projects. PMID:12539947

  19. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator (RRCS): a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine-grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  20. Information management architecture for an integrated computing environment for the Environmental Restoration Program. Environmental Restoration Program, Volume 3, Interim technical architecture

    SciTech Connect

    Not Available

    1994-09-01

    This third volume of the Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program--the Interim Technical Architecture (TA) (referred to throughout the remainder of this document as the ER TA)--represents a key milestone in establishing a coordinated information management environment in which information initiatives can be pursued with the confidence that redundancy and inconsistencies will be held to a minimum. This architecture is intended to be used as a reference by anyone whose responsibilities include the acquisition or development of information technology for use by the ER Program. The interim ER TA provides technical guidance at three levels. At the highest level, the technical architecture provides an overall computing philosophy or direction. At this level, the guidance does not address specific technologies or products but addresses more general concepts, such as the use of open systems, modular architectures, graphical user interfaces, and architecture-based development. At the next level, the technical architecture provides specific information technology recommendations regarding a wide variety of specific technologies. These technologies include computing hardware, operating systems, communications software, database management software, application development software, and personal productivity software, among others. These recommendations range from the adoption of specific industry or Martin Marietta Energy Systems, Inc. (Energy Systems) standards to the specification of individual products. At the third level, the architecture provides guidance regarding implementation strategies for the recommended technologies that can be applied to individual projects and to the ER Program as a whole.

  1. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes, such as standby redundancy and repair, or renewal processes, such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
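
    To make the modeling task concrete, here is a minimal hand-built example of the kind of Markov reliability model that ARM is meant to generate automatically (our sketch, not ARM output): one active unit with a standby spare and repair, with the state probabilities integrated by forward Euler.

    import numpy as np

    # States: 0 = both units good, 1 = one unit failed (repair under way),
    #         2 = system failed (absorbing). Rates are assumed, per hour.
    lam, mu = 1e-3, 1e-1                      # failure and repair rates

    Q = np.array([[-lam,          lam,  0.0],
                  [  mu, -(mu + lam),  lam],
                  [ 0.0,          0.0,  0.0]])   # generator matrix; rows sum to zero

    p = np.array([1.0, 0.0, 0.0])             # initially both units good
    dt, hours = 0.1, 10_000.0
    for _ in range(int(hours / dt)):          # forward-Euler integration of dp/dt = p @ Q
        p = p + dt * (p @ Q)

    print("probability of surviving to", hours, "h:", 1.0 - p[2])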

  2. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    SciTech Connect

    Not Available

    1990-12-01

    The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to the rapid improvement of climate models needed to address this issue. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes; that fully utilizes the hardware and software capabilities of new computer architectures; that probes the limits of climate predictability; and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  3. p88110: A Graphical Simulator for Computer Architecture and Organization Courses

    ERIC Educational Resources Information Center

    Garcia, M. I.; Rodriguez, S.; Perez, A.; Garcia, A.

    2009-01-01

    Studying fundamental Computer Architecture and Organization topics requires a significant amount of practical work if students are to acquire a good grasp of the theoretical concepts presented in classroom lectures or textbooks. The use of simulators is commonly adopted in order to reach this objective. However, as most of the available…

  4. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    ERIC Educational Resources Information Center

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self-explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to analysis of covariance (ANCOVA). Results indicate that the SE-plus-diagram…

  5. Recent advances in optical computing in Japan

    NASA Astrophysics Data System (ADS)

    Ishihara, Satoshi

    The results of recent Japanese research in optical and hybrid computer systems and components are summarized and illustrated with drawings and diagrams, and the organizational structure of the research efforts is outlined. Topics addressed include optical logic devices, spatial light modulators, two-dimensional lasers, optical bistable devices, device theory, optically controlled array processing, an optical bus for a multiprocessor system, real-time multiple-matrix-product processing, optical numerical processing, optical parallel-array logic systems, optical associative memory, and neural-network computation. Consideration is given to the roles of the Optical Computer Group of the Japan Society of Applied Physics, industry, and government (through the universities and Ministry of Education and through the Ministry of International Trade and Industry).

  6. Advanced coupled-micro-resonator architectures for dispersion and spectral engineering applications

    NASA Astrophysics Data System (ADS)

    Van, Vien

    2009-02-01

    We report recent progress in the design and fabrication of coupled optical micro-resonators and their applications in realizing compact OEIC devices for optical spectral engineering. By leveraging synthesis techniques for analog and digital electrical circuits, advanced coupled-microring device architectures can be realized with complexity and functionality approaching that of state-of-the-art microwave filters. In addition, the traveling-wave nature of microring resonators can be exploited to realize novel devices not possible with standing-wave resonators. Applications of coupled micro-resonator devices in realizing complex optical transfer functions for amplitude, phase, and group delay engineering will be presented. Progress in the practical implementation of these devices in the Silicon-on-Insulator OEIC platform will be highlighted, along with the challenges and potential for constructing very high order optical filters using coupled-microring architectures.

  7. Neural computing architectures: The design of brain-like machines

    SciTech Connect

    Aleksander, I.

    1989-01-01

    Theoretical and applications aspects of neural-network (NN) computers are discussed in chapters contributed by European experts. Topics addressed include speech recognition based on topology-preserving neural maps, neural-map applications, backpropagation in nonfeedforward NNs, a parallel-distributed-processing learning approach to natural language, the learning capabilities of Boolean NNs, the logic of connectionist systems, and a probabilistic-logic NN for associative learning. Consideration is given to N-tuple sampling and genetic algorithms for speech recognition; the dynamic behavior of Boolean NNs; statistical mechanics and NNs; digital NNs, matched filters, and optical implementations; heteroassociative NNs using cabling vs link-disabling local modification rules; and the generation of movement trajectories in primates and robots. Also provided is an overview of parallel distributed processing.

  8. Emotions are emergent processes: they require a dynamic computational architecture

    PubMed Central

    Scherer, Klaus R.

    2009-01-01

    Emotion is a cultural and psychobiological adaptation mechanism which allows each individual to react flexibly and dynamically to environmental contingencies. From this claim flows a description of the elements theoretically needed to construct a virtual agent with the ability to display human-like emotions and to respond appropriately to human emotional expression. This article offers a brief survey of the desirable features of emotion theories that make them ideal blueprints for agent models. In particular, the component process model of emotion is described, a theory which postulates emotion-antecedent appraisal on different levels of processing that drive response system patterning predictions. In conclusion, investing seriously in emergent computational modelling of emotion using a nonlinear dynamic systems approach is suggested. PMID:19884141

  9. Advances in Computer-Supported Learning

    ERIC Educational Resources Information Center

    Neto, Francisco; Brasileiro, Francisco

    2007-01-01

    The Internet and growth of computer networks have eliminated geographic barriers, creating an environment where education can be brought to a student no matter where that student may be. The success of distance learning programs and the availability of many Web-supported applications and multimedia resources have increased the effectiveness of…

  10. Space data systems: Advanced flight computers

    NASA Technical Reports Server (NTRS)

    Benz, Harry F.

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: technology challenges; state-of-the-art assessment; program description; relationship to external programs; and cooperation and coordination effort.

  11. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration's (NNSA's) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners, upon whom the ASC Program relies for today's and tomorrow's high performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  12. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence easier to understand, debug, describe, etc. All objects in this architecture are "network-enabled," which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an "API," or application programmers interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.
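
    As a rough illustration of the programming style described (the class names below are hypothetical stand-ins, not the actual API), a one-dimensional Poisson problem -u'' = f with Dirichlet boundary conditions might be stated almost as it is written mathematically:

    import numpy as np

    class Interval:
        """A 1-D geometry: n grid points on [a, b]."""
        def __init__(self, a, b, n):
            self.x = np.linspace(a, b, n)

    class Dirichlet:
        """Fixed values of the solution at the two endpoints."""
        def __init__(self, left, right):
            self.left, self.right = left, right

    class Poisson1D:
        """Represents -u'' = f on a geometry with boundary conditions."""
        def __init__(self, geometry, f, bc):
            self.g, self.f, self.bc = geometry, f, bc

        def solve(self):
            x = self.g.x
            h, n = x[1] - x[0], len(x)
            A, b = np.zeros((n, n)), np.zeros(n)
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = self.bc.left, self.bc.right
            for i in range(1, n - 1):         # standard second-difference stencil
                A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
                b[i] = h * h * self.f(x[i])
            return np.linalg.solve(A, b)

    # Exact solution is sin(pi * x); the program mirrors the problem statement.
    problem = Poisson1D(Interval(0.0, 1.0, 51),
                        lambda x: np.pi**2 * np.sin(np.pi * x),
                        Dirichlet(0.0, 0.0))
    u = problem.solve()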

  13. Evaluation of Advanced Computing Techniques and Technologies: Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    2003-01-01

    The focus of this project was to survey the technology of reconfigurable computing and determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels that are possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.

  14. Architecture for computer vision application development within the HORUS system

    NASA Astrophysics Data System (ADS)

    Eckstein, Wolfgang; Steger, Carsten T.

    1997-04-01

    An integrated program development environment for computer vision tasks is presented. The first component of the system is concerned with the visualization of 2D image data. This is done in an object-oriented manner. Programming of the visualization process is achieved by arranging the representations of iconic data in an interactively customizable hierarchy that establishes an intuitive flow of messages between data representations seen as objects. The visualization objects, called displays, are designed for different levels of abstraction, ranging from direct iconic representation down to numerical features, depending on the information needed. Two types of messages are passed between these displays, which yields a clear and intuitive semantics. The second component of the system is an interactive tool for rapid program development. It helps the user select appropriate operators in many ways; for example, the system provides context-sensitive selection of possible alternative operators as well as suitable successors and required predecessors. For the task of choosing appropriate parameters, several alternatives exist. First, the system provides default values as well as lists of useful values for all parameters of each operator; to achieve this, a knowledge base containing facts about the operators and their parameters is used. Second, through the tight coupling of the two system components, parameters can be determined quickly by data exploration within the visualization components.

  15. Adaptive kinetic-fluid solvers for heterogeneous computing architectures

    NASA Astrophysics Data System (ADS)

    Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir

    2015-12-01

    We show the feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access for the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to evenly load all CPUs and GPUs during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using the adaptive Cartesian mesh. Double-digit speedups on a single GPU and good scaling across multiple GPUs have been demonstrated.
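
    The cell-by-cell selection idea can be sketched compactly. The Python fragment below (our illustration, not UFS code) flags cells for a kinetic solver using a gradient-length-local Knudsen number, with 0.05 as a commonly used breakdown threshold:

    import numpy as np

    def select_solvers(rho, dx, mean_free_path, threshold=0.05):
        """Kn_GLL = (lambda / rho) * |d rho / dx|; True marks kinetic cells."""
        kn_gll = mean_free_path * np.abs(np.gradient(rho, dx)) / rho
        return kn_gll > threshold             # True -> kinetic, False -> fluid

    x = np.linspace(0.0, 1.0, 200)
    rho = 1.0 + 0.9 * np.tanh((x - 0.5) / 0.01)   # a shock-like density profile
    kinetic = select_solvers(rho, x[1] - x[0], mean_free_path=1e-3)
    print("kinetic cells:", int(kinetic.sum()), "of", x.size)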

  16. Synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A six-degree-of-freedom master Force Reflecting Hand Controller (FRHC) is available at a master site where a received image displays, in essentially real time, a remote robotic manipulator which is being controlled in the corresponding six degrees of freedom by command signals which are transmitted to the remote site in accordance with the movement of the FRHC at the master site. Software is user-initiated at the master site in order to establish the basic system conditions, and then a physical movement of the FRHC in Cartesian space is reflected at the master site by six absolute numbers that are sensed, translated, and computed as a difference signal relative to the earlier position. The change in position is then transmitted in that differential signal form over a high-speed synchronized bilateral communication channel which simultaneously returns robot-sensed response information to the master site as forces applied to the FRHC, so that the FRHC reflects the feel of what is taking place at the remote site. A system-wide clock rate is selected at a sufficiently high rate that the operator at the master site experiences the force-reflecting operation in real time.
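
    The differential-encoding loop described above can be paraphrased in a few lines (the names read_pose, BilateralChannel, and apply_force are hypothetical stand-ins for the FRHC hardware interface, not the patented implementation):

    class BilateralChannel:
        def exchange(self, delta):
            # Stand-in: a real channel transmits the pose change to the remote
            # robot and simultaneously returns the robot-sensed forces/torques.
            return [0.0] * 6

    def read_pose():
        return [0.0] * 6                      # six absolute numbers from the FRHC

    def apply_force(forces):
        pass                                  # drive the FRHC's force actuators

    channel, previous = BilateralChannel(), read_pose()
    for _ in range(10):                       # one iteration per system clock tick
        pose = read_pose()
        delta = [p - q for p, q in zip(pose, previous)]   # difference signal
        previous = pose
        apply_force(channel.exchange(delta))  # reflect the feel of the remote site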

  17. Development of an interactive computer program for advance care planning

    PubMed Central

    Green, Michael J.; Levi, Benjamin H.

    2013-01-01

    Objective To describe the development of an innovative, multimedia decision aid for advance care planning. Background Advance care planning is an important way for people to articulate their wishes for medical care when they are not able to speak for themselves. Living wills and other types of advance directives are the most commonly used tools for advance care planning, but have been criticized for being vague, difficult to interpret, and inconsistent with individuals’ core beliefs and values. Results We developed a multimedia, computer-based decision aid for advance care planning (‘Making Your Wishes Known: Planning Your Medical Future’) to overcome many of the limitations of standard advance directive forms. This computer program guides individuals through the process of advance care planning, and unlike standard advance directives, provides tailored education, values clarification exercises, and a decision-making tool that translates an individual’s values and preferences into a specific medical plan that can be implemented by a health-care team. Pilot testing with 50 adult volunteers recruited from an outpatient primary care clinic showed high levels of satisfaction with the program. Further pilot testing with 34 cancer patients indicated that the program was perceived to be highly accurate at representing patients’ wishes. Conclusions This paper describes the development of an innovative decision aid for advance care planning that was designed to overcome common problems with standard advance directives. Preliminary testing suggests that it is acceptable to users and is accurate. PMID:18823445

  18. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. An especially promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, increases computational efficiency and decreases energy consumption. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it extends the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes and, consequently, the quality of weather forecasts can be improved. A coalition of Polish scientific institutions launched a project aimed at adapting the EULAG fluid solver for future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and pointwise computations. Its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning of cores into independent teams, cache reuse, and vectorization. Experiments with matching computational domain topology to cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress. Preliminary results of performance of the
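
    The halo-exchange pattern mentioned above can be illustrated with a minimal mpi4py sketch (our example, not EULAG's communication scheme): each rank owns a slab of the domain plus one ghost cell per side, refreshed each step with Sendrecv.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    u = np.full(10 + 2, float(rank))          # interior cells plus two ghost cells

    # Ship my rightmost interior cell right; fill my left ghost from the left.
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Ship my leftmost interior cell left; fill my right ghost from the right.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)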

  19. Advanced Computing Tools and Models for Accelerator Physics

    SciTech Connect

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  20. A single user efficiency measure for evaluation of parallel or pipeline computer architectures

    NASA Technical Reports Server (NTRS)

    Jones, W. P.

    1978-01-01

    A precise statement of the relationship between sequential computation at one rate, parallel or pipeline computation at a much higher rate, the data movement rate between levels of memory, the fraction of inherently sequential operations or data that must be processed sequentially, the fraction of data to be moved that cannot be overlapped with computation, and the relative computational complexity of the algorithms for the two processes, scalar and vector, was developed. The relationship should be applied to the multirate processes that obtain in the employment of various new or proposed computer architectures for computational aerodynamics. The relationship, an efficiency measure that the single user of the computer system perceives, argues strongly in favor of separating scalar and vector processes, sometimes referred to as loosely coupled processes, to achieve optimum use of hardware.
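
    The abstract does not reproduce the measure itself, but a plausible form, in the spirit of Amdahl's law extended with the quantities listed above (our notation, not the paper's), is

        E = 1 / ( f + c * (1 - f) / k + d ),    with k = R_vector / R_scalar,

    where f is the inherently sequential fraction, k the vector-to-scalar processing-rate ratio, c the relative computational complexity of the vector algorithm, and d the fraction of time spent on data movement that cannot be overlapped with computation, all normalized to sequential execution time. Setting c = 1 and d = 0 recovers Amdahl's law.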

  1. Advances in Computationally Modeling Human Oral Bioavailability

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2015-01-01

    Although significant progress has been made in experimental high throughput screening (HTS) of ADME (absorption, distribution, metabolism, excretion) and pharmacokinetic properties, ADME and Toxicity (ADME-Tox) in silico modeling is still indispensable in drug discovery, as it can guide us to wisely select drug candidates prior to expensive ADME screenings and clinical trials. Compared to other ADME-Tox properties, human oral bioavailability (HOBA) is particularly important but extremely difficult to predict. In this paper, the advances in human oral bioavailability modeling will be reviewed. Moreover, our insights into how to construct more accurate and reliable HOBA QSAR and classification models will also be discussed. PMID:25582307

  2. Advances in computationally modeling human oral bioavailability.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2015-06-23

    Although significant progress has been made in experimental high throughput screening (HTS) of ADME (absorption, distribution, metabolism, excretion) and pharmacokinetic properties, ADME and Toxicity (ADME-Tox) in silico modeling is still indispensable in drug discovery, as it can guide us to wisely select drug candidates prior to expensive ADME screenings and clinical trials. Compared to other ADME-Tox properties, human oral bioavailability (HOBA) is particularly important but extremely difficult to predict. In this paper, the advances in human oral bioavailability modeling will be reviewed. Moreover, our insights into how to construct more accurate and reliable HOBA QSAR and classification models will also be discussed. PMID:25582307

  3. ASDA - Advanced Suit Design Analyzer computer program

    NASA Technical Reports Server (NTRS)

    Bue, Grant C.; Conger, Bruce C.; Iovine, John V.; Chang, Chi-Min

    1992-01-01

    An ASDA model developed to evaluate the heat and mass transfer characteristics of advanced pressurized suit design concepts for low pressure or vacuum planetary applications is presented. The model is based on a generalized 3-layer suit that uses the Systems Integrated Numerical Differencing Analyzer '85 in conjunction with a 41-node FORTRAN routine. The latter simulates the transient heat transfer and respiratory processes of a human body in a suited environment. The user options for the suit encompass a liquid cooled garment, a removable jacket, a CO2/H2O permeable layer, and a phase change layer.

  4. A Cerebellar Neuroprosthetic System: Computational Architecture and in vivo Test.

    PubMed

    Herreros, Ivan; Giovannucci, Andrea; Taub, Aryeh H; Hogri, Roni; Magal, Ari; Bamford, Sim; Prueckl, Robert; Verschure, Paul F M J

    2014-01-01

    Emulating the input-output functions performed by a brain structure opens the possibility for developing neuroprosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention, and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model's inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuroprosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step toward replacing lost functions of the central nervous system via neuroprosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuroprosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other it takes a step toward the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term, humans. PMID:25152887

  5. A Cerebellar Neuroprosthetic System: Computational Architecture and in vivo Test

    PubMed Central

    Herreros, Ivan; Giovannucci, Andrea; Taub, Aryeh H.; Hogri, Roni; Magal, Ari; Bamford, Sim; Prueckl, Robert; Verschure, Paul F. M. J.

    2014-01-01

    Emulating the input–output functions performed by a brain structure opens the possibility for developing neuroprosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention, and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model's inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuroprosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step toward replacing lost functions of the central nervous system via neuroprosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuroprosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other it takes a step toward the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term, humans. PMID:25152887

  6. Advances in Computer-Based Autoantibodies Analysis

    NASA Astrophysics Data System (ADS)

    Soda, Paolo; Iannello, Giulio

    Indirect Immunofluorescence (IIF) imaging is the recommended method to detect autoantibodies in patient serum, whose common markers are antinuclear autoantibodies (ANA) and autoantibodies directed against double-strand DNA (anti-dsDNA). Since the availability of accurately performed and correctly reported laboratory determinations is crucial for clinicians, an evident medical demand is the development of Computer Aided Diagnosis (CAD) tools supporting physicians' decisions.

  7. Advancement of photonics for space and other platforms: open optical interconnect architecture (OOIA)

    NASA Astrophysics Data System (ADS)

    Gaydeski, Michael S.

    1997-07-01

    Continuous investigation of new technologies for avionics and space processing has led to improved application capabilities and processing for tactical platforms (commercial and government satellites and tactical assets such as the USN Reconnaissance Fighter F/A-18R, the USAF Fighter F-16, and various helicopters) and surveillance platforms (commercial and government satellites, the Joint Surveillance Target Attack Radar System, and the Airborne Warning and Control System). This paper focuses on the potential benefits of inserting optical interconnect technology into these platforms while subscribing to an Open Optical Interconnect Architecture (OOIA) concept and a methodology for systems development and integration.

  8. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; then how such parallel kernels can be combined to form parallel tensor product algorithms is examined.
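
    The structural fact that makes such algorithms attractive can be shown in a few lines of NumPy (our example, not the paper's language primitives): applying a tensor-product operator never requires forming the large Kronecker matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 32, 48
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((n, n))
    x = rng.standard_normal(m * n)

    X = x.reshape(m, n)                       # row-major "unvec" of the operand
    y_fast = (A @ X @ B.T).reshape(-1)        # O(m*n*(m+n)) work on the small factors
    y_slow = np.kron(A, B) @ x                # O((m*n)**2) work on the full matrix
    assert np.allclose(y_fast, y_slow)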

  9. Scalable quantum computer architecture with coupled donor-quantum dot qubits

    DOEpatents

    Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey

    2014-08-26

    A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.
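
    For reference, the voltage-controlled exchange coupling referred to above has, in the standard Loss-DiVincenzo picture (textbook notation, not the patent's), the Heisenberg form

        H_ex = J(V) * S_i . S_j

    where J(V) is tuned by the applied gate voltage; evolving two spins under H_ex for a time t with J*t/hbar = pi swaps their states, and half that duration yields the entangling square-root-of-SWAP gate.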

  10. An Evaluation of Architectural Platforms for Parallel Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1996-01-01

    We study the computational, communication, and scalability characteristics of a computational fluid dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architecture platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  11. Parallelizing Navier-Stokes Computations on a Variety of Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1997-01-01

    We study the computational, communication, and scalability characteristics of a Computational Fluid Dynamics application, which solves the time-accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architectural platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies (the IBM SP and the Cray T3D). We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  12. Advanced computational techniques for hypersonic propulsion

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1989-01-01

    Computational Fluid Dynamics (CFD) has played a major role in the resurgence of hypersonic flight, on the premise that numerical methods will allow simulations to be performed at conditions for which no ground test capability exists. Validation of CFD methods is being established using the available experimental database, which is limited to below Mach 8. It is important, however, to realize the limitations involved in the extrapolation process as well as the deficiencies that exist in numerical methods at the present time. Current features of CFD codes are examined for application to propulsion system components. The shortcomings in simulation and modeling are identified and discussed.

  13. Intelligent Software Tools for Advanced Computing

    SciTech Connect

    Baumgart, C.W.

    2001-04-03

    Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.

  14. Reusable Rocket Engine Advanced Health Management System. Architecture and Technology Evaluation: Summary

    NASA Technical Reports Server (NTRS)

    Pettit, C. D.; Barkhoudarian, S.; Daumann, A. G., Jr.; Provan, G. M.; ElFattah, Y. M.; Glover, D. E.

    1999-01-01

    In this study, we proposed an Advanced Health Management System (AHMS) functional architecture and conducted a technology assessment for liquid propellant rocket engine lifecycle health management. The purpose of the AHMS is to improve reusable rocket engine safety and to reduce between-flight maintenance. During the study, past and current reusable rocket engine health management-related projects were reviewed, data structures and health management processes of current rocket engine programs were assessed, and in-depth interviews with rocket engine lifecycle and system experts were conducted. A generic AHMS functional architecture, with primary focus on real-time health monitoring, was developed. Fourteen categories of technology tasks and development needs for implementation of the AHMS were identified, based on the functional architecture and our assessment of current rocket engine programs. Five key technology areas were recommended for immediate development, which (1) would provide immediate benefits to current engine programs, and (2) could be implemented with minimal impact on the current Space Shuttle Main Engine (SSME) and Reusable Launch Vehicle (RLV) engine controllers.

  15. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    SciTech Connect

    Fletcher, James H.; Cox, Philip; Harrington, William J; Campbell, Joseph L

    2013-09-03

    Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    PROJECT OBJECTIVE: The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density, and lifetime. These targets were laid out in the DOE's R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically, this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly, and integrated-system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications.

    PROJECT TASKS: The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: to engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, and further refined them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF's novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  16. Integrating Computer Architectures into the Design of High-Performance Controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.; Leyland, Jane A.; Warmbrodt, William

    1986-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, on-line graphics, and file management. This paper discusses five global design considerations that are useful to integrate array processor, multimicroprocessor, and host computer system architecture into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration will be briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind-tunnel environment, the control architecture can generally be applied to a wide range of automatic control applications.

  17. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in the industry and research community, it also attracts more attention at the consumer level. The large number of users and high frequency of job requests in the consumer market make scalability challenging. Clearly, current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  18. Electron-impact calculations of near-neutral atomic systems utilising Petascale computer architectures

    NASA Astrophysics Data System (ADS)

    Ballance, Connor

    2013-05-01

    Over the last couple of decades, a number of advanced non-perturbative approaches such as the R-matrix, TDCC, and CCC methods have made great strides in terms of improved target representation and investigating fundamental 2-4 electron problems. However, for the electron-impact excitation of near-neutral species or complicated open-shell atomic systems we are forced to make certain compromises in terms of the atomic structure and/or the number of channels included in the close-coupling expansion of the subsequent scattering calculation. The availability of modern supercomputing architectures with hundreds of thousands of cores, and the emergence of new opportunities through GPU usage, offers one possibility to address some of these issues. Effectively harnessing this computational power will require significant revision of the existing code structures. I shall discuss some effective strategies within a non-relativistic and relativistic R-matrix framework using the examples detailed below. The goal is to extend existing R-matrix methods from 1-2 thousand close-coupled channels to 10,000 channels. With the construction of the ITER experiment in Cadarache, which will have tungsten plasma-facing components, there is an urgent diagnostic need for collisional rates for the near-neutral ion stages. In particular, spectroscopic diagnostics of impurity influx require accurate electron-impact excitation and ionisation data as well as a good target representation. There have been only a few non-perturbative collisional calculations for this system, and the open-f-shell ion stages provide a daunting challenge even for perturbative approaches. I shall present non-perturbative results for the excitation and ionisation of W3+ and illustrate how these fundamental calculations can be integrated into a meaningful diagnostic for the ITER device. We acknowledge support from DoE Fusion.

  19. Recent advances in transonic computational aeroelasticity

    NASA Technical Reports Server (NTRS)

    Batina, John T.; Bennett, Robert M.; Seidel, David A.; Cunningham, Herbert J.; Bland, Samuel R.

    1988-01-01

    A transonic unsteady aerodynamic and aeroelasticity code called CAP-TSD was developed for application to realistic aircraft configurations. The code permits the calculation of steady and unsteady flows about complete aircraft configurations for aeroelastic analysis in the flutter critical transonic speed range. The CAP-TSD code uses a time accurate approximate factorization algorithm for solution of the unsteady transonic small disturbance potential equation. An overview is given of the CAP-TSD code development effort and results are presented which demonstrate various capabilities of the code. Calculations are presented for several configurations including the General Dynamics 1/9 scale F-16 aircraft model and the ONERA M6 wing. Calculations are also presented from a flutter analysis of a 45 deg sweptback wing which agrees well with the experimental data. Descriptions are presented of the CAP-TSD code and algorithm details along with results and comparisons which demonstrate these recent developments in transonic computational aeroelasticity.

  20. A Multifaceted Approach to Modernizing NASA's Advanced Multi-Mission Operations System (AMMOS) System Architecture

    NASA Technical Reports Server (NTRS)

    Estefan, Jeff A.; Giovannoni, Brian J.

    2014-01-01

    The Advanced Multi-Mission Operations System (AMMOS) is NASA's premier space mission operations product line offering for use in deep-space robotic and astrophysics missions. The general approach to AMMOS modernization over the course of its 29-year history exemplifies a continual, evolutionary approach with periods of sponsor investment peaks and valleys in between. Today, the Multimission Ground Systems and Services (MGSS) office, the program office that manages the AMMOS for NASA, actively pursues modernization initiatives and continues to evolve the AMMOS by incorporating enhanced capabilities and newer technologies into its end-user tool and service offerings. Despite the myriad modernization investments that have been made over the evolutionary course of the AMMOS, pain points remain. These pain points, based on interviews with numerous flight project mission operations personnel, can be classified principally into two major categories: (1) information-related issues and (2) process-related issues. By information-related issues, we mean pain points associated with the management and flow of MOS data across the various system interfaces. By process-related issues, we mean pain points associated with the MOS activities performed by mission operators (i.e., humans) and the supporting software infrastructure used in support of those activities. In this paper, three foundational concepts (Timeline, Closed Loop Control, and Separation of Concerns) collectively form the basis for expressing a set of core architectural tenets that provides a multifaceted approach to AMMOS system architecture modernization intended to address the information- and process-related issues. Each of these architectural tenets will be further explored in this paper. Ultimately, we envision the application of these core tenets resulting in a unified vision of a future-state architecture for the AMMOS, one that is intended to result in a highly adaptable, highly efficient, and highly cost-effective system.

  1. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    ERIC Educational Resources Information Center

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  2. Design and Analysis of a Neuromemristive Reservoir Computing Architecture for Biosignal Processing.

    PubMed

    Kudithipudi, Dhireesha; Saleh, Qutaiba; Merkel, Cory; Thesing, James; Wysocki, Bryant

    2015-01-01

    Reservoir computing (RC) is gaining traction in several signal processing domains, owing to its non-linear stateful computation, spatiotemporal encoding, and reduced training complexity over recurrent neural networks (RNNs). Previous studies have shown the effectiveness of software-based RCs for a wide spectrum of applications. A parallel body of work indicates that realizing RNN architectures using custom integrated circuits and reconfigurable hardware platforms yields significant improvements in power and latency. In this research, we propose a neuromemristive RC architecture, with doubly twisted toroidal structure, that is validated for biosignal processing applications. We exploit the device mismatch to implement the random weight distributions within the reservoir and propose mixed-signal subthreshold circuits for energy efficiency. A comprehensive analysis is performed to compare the efficiency of the neuromemristive RC architecture in both digital (reconfigurable) and subthreshold mixed-signal realizations. Both Electroencephalogram (EEG) and Electromyogram (EMG) biosignal benchmarks are used for validating the RC designs. The proposed RC architecture demonstrated accuracies of 90% and 84% for epileptic seizure detection and EMG prosthetic finger control, respectively. PMID:26869876

  3. Design and Analysis of a Neuromemristive Reservoir Computing Architecture for Biosignal Processing

    PubMed Central

    Kudithipudi, Dhireesha; Saleh, Qutaiba; Merkel, Cory; Thesing, James; Wysocki, Bryant

    2016-01-01

    Reservoir computing (RC) is gaining traction in several signal processing domains, owing to its non-linear stateful computation, spatiotemporal encoding, and reduced training complexity over recurrent neural networks (RNNs). Previous studies have shown the effectiveness of software-based RCs for a wide spectrum of applications. A parallel body of work indicates that realizing RNN architectures using custom integrated circuits and reconfigurable hardware platforms yields significant improvements in power and latency. In this research, we propose a neuromemristive RC architecture, with a doubly twisted toroidal structure, that is validated for biosignal processing applications. We exploit device mismatch to implement the random weight distributions within the reservoir and propose mixed-signal subthreshold circuits for energy efficiency. A comprehensive analysis is performed to compare the efficiency of the neuromemristive RC architecture in both digital (reconfigurable) and subthreshold mixed-signal realizations. Both Electroencephalogram (EEG) and Electromyogram (EMG) biosignal benchmarks are used for validating the RC designs. The proposed RC architecture demonstrated accuracies of 90% and 84% for epileptic seizure detection and EMG prosthetic finger control, respectively. PMID:26869876
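
    As a purely software illustration of the reservoir principle summarized in the two records above (not the paper's neuromemristive hardware), the sketch below uses fixed random reservoir weights and trains only a linear readout; all sizes and hyperparameters are invented for the example.

        # Echo-state sketch: the reservoir is a fixed random recurrent network;
        # only the linear readout is trained (the "reduced training complexity"
        # noted in the abstract). Sizes and constants are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res = 1, 100
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

        def run_reservoir(u):
            """Drive the reservoir with input sequence u and collect states."""
            x, states = np.zeros(n_res), []
            for u_t in u:
                # non-linear stateful update of the reservoir
                x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
                states.append(x.copy())
            return np.array(states)

        # One-step-ahead prediction of a sine wave via a ridge-regression readout
        u = np.sin(np.linspace(0, 20, 500))
        X, y = run_reservoir(u[:-1]), u[1:]
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
        print("train MSE:", np.mean((X @ W_out - y) ** 2))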

  4. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments.

  5. Advanced ERS design using computer simulation

    SciTech Connect

    Melhem, G.A.

    1995-12-31

    There are two schools of thought regarding pressure relief design: shortcut/simplified methods and detailed methods. The shortcut/simplified methods are mostly applicable to non-reactive systems. These methods use direct scale-up techniques to obtain a vent size. Little useful information can be obtained for reaction data such as onset temperatures, activation energy, decomposition stoichiometry, etc. In addition, this approach does not readily provide the ability to perform what-if and sensitivity analysis or data that can be used for post-release mitigation design. The detailed approach advocates a more fundamental approach to pressure relief design, especially for reactive systems. First, the reaction chemistry is qualified using small scale experiments and then this data is coupled with fluid dynamics to design the emergency relief system. In addition to vent sizing information, this approach provides insights into process modification and refinement as well as the establishment of a safe operating envelope. This approach provides necessary flow data for vent containment design (if required), structural support, etc. This approach also allows the direct evaluation of design sensitivity to variables such as temperature, pressure, composition, fill level, etc. on vent sizing while the shortcut approach requires an additional experiment per what-if scenario. This approach meets DIERS technology requirements for two-phase flow and vapor/liquid disengagement and exceeds it in many key areas for reacting systems such as stoichiometry estimation for decomposition reactions, non-ideal solutions effects, continuing reactions in piping and vent containment systems, etc. This paper provides an overview of our proposed equation of state based modeling approach and its computer code implementation. Numerous examples and model validations are also described. 42 refs., 23 figs., 9 tabs.

  6. Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures

    SciTech Connect

    Datta, Kaushik; Murphy, Mark; Volkov, Vasily; Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Patterson, David; Shalf, John; Yelick, Katherine

    2008-08-22

    Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations -- a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural trade-offs of emerging multicore designs and their implications on scientific algorithm development.
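
    As a toy analogue of the auto-tuning approach in this record (the paper's stencils, optimizations, and search space are far richer), the sketch below times a blocked 2-D 5-point stencil over a handful of candidate block sizes and keeps the fastest; all sizes are arbitrary.

        # Minimal auto-tuning loop: search over cache-blocking parameters for a
        # 5-point stencil and keep the fastest configuration found.
        import time
        import numpy as np

        def stencil_blocked(a, out, bi, bj):
            n, m = a.shape
            for i0 in range(1, n - 1, bi):
                for j0 in range(1, m - 1, bj):
                    i1, j1 = min(i0 + bi, n - 1), min(j0 + bj, m - 1)
                    # average of the four nearest neighbors, one block at a time
                    out[i0:i1, j0:j1] = 0.25 * (
                        a[i0-1:i1-1, j0:j1] + a[i0+1:i1+1, j0:j1] +
                        a[i0:i1, j0-1:j1-1] + a[i0:i1, j0+1:j1+1])

        a = np.random.rand(1024, 1024)
        out = np.empty_like(a)
        best = None
        for bi in (64, 128, 256):
            for bj in (64, 128, 256):
                t0 = time.perf_counter()
                stencil_blocked(a, out, bi, bj)
                dt = time.perf_counter() - t0
                if best is None or dt < best[0]:
                    best = (dt, bi, bj)
        print("best block size:", best[1:], "time:", best[0])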

  7. VLab: a service oriented architecture for first principles computations of planetary materials properties

    NASA Astrophysics Data System (ADS)

    da Silva, C. R.; da Silveira, P.; Wentzcovitch, R. M.; Pierce, M.; Erlebacher, G.

    2007-12-01

    We present an overview of the VLab, a system developed to handle execution of extensive workflows generated by first principles computations of thermoelastic properties of minerals. The multiplicity (10^2-3) of tasks derives from sampling of parameter space with variables such as pressure, temperature, strain, composition, etc. We review the algorithms of physical importance that define the system's requirements, its underlying service oriented architecture (SOA), and metadata. The system architecture emerges naturally. The SOA is a collection of web-services providing access to distributed computing nodes, controlling workflow execution, monitoring services, and providing data analysis tools, visualization services, data bases, and authentication services. A usage view diagram is described. We also show snapshots taken from the actual operational procedure in VLab. Research supported by NSF/ITR (VLab).

  8. VLab: A Service Oriented Architecture for Distributed First Principles Materials Computations

    NASA Astrophysics Data System (ADS)

    da Silva, Cesar; da Silveira, Pedro; Wentzcovitch, Renata; Pierce, Marlon; Erlebacher, Gordon

    2008-03-01

    We present an overview of VLab, a system developed to handle execution of extensive workflows generated by first principles computations of thermoelastic properties of minerals. The multiplicity (10^2-3) of tasks derives from sampling of parameter space with variables such as pressure, temperature, strain, composition, etc. We review the algorithms of physical importance that define the system's requirements, its underlying service oriented architecture (SOA), and metadata. The system architecture emerges naturally. The SOA is a collection of web-services providing access to distributed computing nodes, workflow control, and monitoring services, and providing data analysis tools, visualization services, data bases, and authentication services. A usage view diagram is described. We also show snapshots taken from the actual operational procedure in VLab.
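
    The task multiplicity quoted in these two records follows directly from a Cartesian product over the sampled parameters; a minimal sketch, with all parameter values invented as placeholders:

        # Sketch of how a multiplicity of 10^2-3 workflow tasks arises from
        # sampling a thermoelastic parameter space. Values are illustrative.
        from itertools import product

        pressures = range(0, 150, 10)         # GPa
        temperatures = range(300, 4300, 500)  # K
        strains = [-0.02, -0.01, 0.0, 0.01, 0.02]

        tasks = [
            {"pressure": p, "temperature": t, "strain": e}
            for p, t, e in product(pressures, temperatures, strains)
        ]
        print(len(tasks), "independent first-principles tasks")  # 15*8*5 = 600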

  9. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
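
    Characteristic (3) suggests a simple mapping from authentication level to permissible job size; the sketch below is a hypothetical reading of that graduated scheme, with invented tiers and limits, not NESSSI code:

        # Hypothetical "graduated security" check: the maximum job size a user
        # may launch grows with their authentication level.
        MAX_NODES = {"anonymous": 1, "registered": 16, "certificate": 256}

        def authorize(auth_level: str, requested_nodes: int) -> bool:
            """Allow the job only if it fits the tier's ceiling."""
            return requested_nodes <= MAX_NODES.get(auth_level, 0)

        assert authorize("registered", 8)
        assert not authorize("anonymous", 8)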

  10. Optimization of knowledge sharing through multi-forum using cloud computing architecture

    NASA Astrophysics Data System (ADS)

    Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar

    2011-12-01

    Knowledge sharing is done through various knowledge sharing forums, which normally require multiple logins through multiple browser instances. Here a single multi-forum knowledge sharing concept is introduced which requires only one login session, letting the user connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time using a cloud computing architecture.

  11. Integrating Clinical Trial Imaging Data Resources Using Service-Oriented Architecture and Grid Computing

    PubMed Central

    Cladé, Thierry; Snyder, Joshua C.

    2010-01-01

    Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775

  12. Use of advanced computers for aerodynamic flow simulation

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F.

    1980-01-01

    The current and projected use of advanced computers for large-scale aerodynamic flow simulation applied to engineering design and research is discussed. The design use of mature codes run on conventional, serial computers is compared with the fluid research use of new codes run on parallel and vector computers. The role of flow simulations in design is illustrated by the application of a three dimensional, inviscid, transonic code to the Sabreliner 60 wing redesign. Research computations that include a more complete description of the fluid physics by use of Reynolds averaged Navier-Stokes and large-eddy simulation formulations are also presented. Results of studies for a numerical aerodynamic simulation facility are used to project the feasibility of design applications employing these more advanced three dimensional viscous flow simulations.

  13. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current Petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potentialities of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
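
    The locality-plus-vectorization point can be illustrated with a structure-of-arrays particle push, here written with numpy whole-array operations as a stand-in for SIMD execution; this is a sketch under those assumptions, not PICSAR code:

        # Structure-of-arrays layout: contiguous x, v, E arrays let the update
        # map onto SIMD registers, unlike an array of per-particle structs.
        import numpy as np

        n = 1_000_000
        rng = np.random.default_rng(1)
        x = rng.random(n)              # positions on a periodic [0, 1) domain
        v = np.zeros(n)                # velocities
        E = np.sin(2 * np.pi * x)      # stand-in for the field gathered at x
        qm, dt = -1.0, 1e-3            # charge-to-mass ratio and time step

        # Leapfrog-style push: one arithmetic operation applied to many data.
        v += qm * E * dt
        x = (x + v * dt) % 1.0         # wrap particles back into the domain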

  14. An Architecture and Supporting Environment of Service-Oriented Computing Based-On Context Awareness

    NASA Astrophysics Data System (ADS)

    Ma, Tianxiao; Wu, Gang; Huang, Jun

    Service-oriented computing (SOC) is emerging as an important computing paradigm for the near future. Based on context awareness, this paper proposes an architecture for SOC. A definition of context in open environments such as the Internet is given, based on ontology. The paper also proposes a supporting environment for context-aware SOC, which focuses on on-demand service composition and context-awareness evolution. A reference implementation of the supporting environment, based on OSGi [11], is presented.

  15. IMPLEMENTING SCIENTIFIC SIMULATION CODES HIGHLY TAILORED FOR VECTOR ARCHITECTURES USING CUSTOM CONFIGURABLE COMPUTING MACHINES

    NASA Technical Reports Server (NTRS)

    Rutishauser, David K.

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters.
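
    A toy version of the constrained parameter search described above might enumerate candidate design points under a resource budget and keep the best predicted runtime; the area and throughput models here are invented placeholders, not the paper's formulation:

        # Enumerate (lanes, vector-register-length) candidates, discard those
        # over a resource budget, and keep the fastest under a simple model.
        N_OPS = 10_000_000          # vector operations the application needs
        AREA_BUDGET = 10_000        # abstract logic-resource units

        def area(lanes, vlen):
            return 50 * lanes + 2 * lanes * vlen      # hypothetical area model

        def cycles(lanes, vlen):
            return N_OPS / (lanes * min(vlen, 64))    # hypothetical throughput

        candidates = [(l, v) for l in (1, 2, 4, 8, 16)
                      for v in (32, 64, 128, 256)
                      if area(l, v) <= AREA_BUDGET]
        best = min(candidates, key=lambda p: cycles(*p))
        print("chosen (lanes, vlen):", best)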

  16. Advanced computational sensors technology: testing and evaluation in visible, SWIR, and LWIR imaging

    NASA Astrophysics Data System (ADS)

    Rizk, Charbel G.; Wilson, John P.; Pouliquen, Philippe

    2015-05-01

    The Advanced Computational Sensors Team at the Johns Hopkins University Applied Physics Laboratory and the Johns Hopkins University Department of Electrical and Computer Engineering has been developing advanced readout integrated circuit (ROIC) technology for more than 10 years with a particular focus on the key challenges of dynamic range, sampling rate, system interface and bandwidth, and detector materials or band dependencies. Because the pixel array offers parallel sampling by default, the team successfully demonstrated that adding smarts in the pixel and the chip can increase performance significantly. Each pixel becomes a smart sensor and can operate independently in collecting, processing, and sharing data. In addition, building on the digital circuit revolution, the effective well size can be increased by orders of magnitude within the same pixel pitch over analog designs. This research has yielded an innovative class of a system-on-chip concept: the Flexible Readout and Integration Sensor (FRIS) architecture. All key parameters are programmable and/or can be adjusted dynamically, and this architecture can potentially be sensor and application agnostic. This paper reports on the testing and evaluation of one prototype that can support either detector polarity and includes sample results with visible, short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging.

  17. Recovery Act: Advanced Interaction, Computation, and Visualization Tools for Sustainable Building Design

    SciTech Connect

    Greenberg, Donald P.; Hencey, Brandon M.

    2013-08-20

    Current building energy simulation technology requires excessive labor, time and expertise to create building energy models, excessive computational time for accurate simulations and difficulties with the interpretation of the results. These deficiencies can be ameliorated using modern graphical user interfaces and algorithms which take advantage of modern computer architectures and display capabilities. To prove this hypothesis, we developed an experimental test bed for building energy simulation. This novel test bed environment offers an easy-to-use interactive graphical interface, provides access to innovative simulation modules that run at accelerated computational speeds, and presents new graphics visualization methods to interpret simulation results. Our system offers the promise of dramatic ease of use in comparison with currently available building energy simulation tools. Its modular structure makes it suitable for early stage building design, as a research platform for the investigation of new simulation methods, and as a tool for teaching concepts of sustainable design. Improvements in the accuracy and execution speed of many of the simulation modules are based on the modification of advanced computer graphics rendering algorithms. Significant performance improvements are demonstrated in several computationally expensive energy simulation modules. The incorporation of these modern graphical techniques should advance the state of the art in the domain of whole building energy analysis and building performance simulation, particularly at the conceptual design stage when decisions have the greatest impact. More importantly, these better simulation tools will enable the transition from prescriptive to performative energy codes, resulting in better, more efficient designs for our future built environment.

  18. Architecture and implementation for high-bandwidth real-time radar signal transmission and computing application

    NASA Astrophysics Data System (ADS)

    Cho, Yoong-Goog; Chandrasekar, V.; Jayasumana, Anura P.; Brunkow, David

    2002-06-01

    The design, architecture, and implementation for high-throughput data transmission and high-performance computing, which are applicable to various real-time radar signal transmission applications over a data network, are presented. With a client-server model, multiple processes and threads on the end systems operate simultaneously and collaboratively to meet the real-time requirement. The design covers Digitized Radar Signal (DRS) data acquisition and data transmission on the DRS server end as well as DRS data receiving, radar signal parameter computation, and parameter transmission on the DRS receiver end. Generic packet and data structures for transmission and inter-process data sharing are constructed. The architecture was successfully implemented on Sun/Solaris workstations with dual 750 MHz UltraSPARC-III processors containing a Gigabit Ethernet card. The comparison of transmission throughput over the gigabit link with and without computation clearly shows the importance of signal processing capability on end-to-end performance. Profiling analysis of the DRS receiver process shows the work-loaded functions and provides guidance for improving computing capabilities.

  19. Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2003-05-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of scientific computing areas. First, we present the performance of a microbenchmark suite that examines low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Results demonstrate that the SX-6 achieves high performance on a large fraction of our applications and often significantly outperforms the cache-based architectures. However, certain applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  20. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  1. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  2. Recent advances in metal oxide-based electrode architecture design for electrochemical energy storage.

    PubMed

    Jiang, Jian; Li, Yuanyuan; Liu, Jinping; Huang, Xintang; Yuan, Changzhou; Lou, Xiong Wen David

    2012-10-01

    Metal oxide nanostructures are promising electrode materials for lithium-ion batteries and supercapacitors because of their high specific capacity/capacitance, typically 2-3 times higher than that of carbon/graphite-based materials. However, their cycling stability and rate performance still cannot meet the requirements of practical applications. It is therefore urgent to improve their overall device performance, which depends not only on the development of advanced electrode materials but also, in large part, on "how to design superior electrode architectures". In this article, we review recent advances in strategies for advanced metal oxide-based hybrid nanostructure design, with a focus on binder-free film/array electrodes. These binder-free electrodes, integrating the unique merits of each component, can provide larger electrochemically active surface area, faster electron transport, and superior ion diffusion, thus leading to substantially improved cycling and rate performance. Several recently emerged concepts of using ordered nanostructure arrays, synergetic core-shell structures, nanostructured current collectors, and flexible paper/textile electrodes are highlighted, pointing out advantages and challenges where appropriate. Some future electrode design trends and directions are also discussed. PMID:22912066

  3. Advanced payload concepts and system architecture for emerging services in Indian National Satellite Systems

    NASA Astrophysics Data System (ADS)

    Balasubramanian, E. P.; Rao, N. Prahlad; Sarkar, S.; Singh, D. K.

    2008-07-01

    Over the past two decades the Indian Space Research Organization (ISRO) has developed and operationalized satellites to generate a large capacity of transponders for telecommunication service use in the INSAT system. More powerful on-board transmitters are built to usher in direct-to-home broadcast services. These have transformed the Satcom application scenario in the country. With the proliferation of satellite technology, a shift in the Indian market is witnessed today in terms of demand for new services like Broadband Internet, Interactive Multimedia, etc. While it is imperative to pay attention to market trends, ISRO is also committed to taking the benefits of technological advancement to the all-round growth of our population, 70% of which dwells in rural areas. The initiatives already taken in space applications related to telemedicine, tele-education and Village Resource Centres need to be taken to a greater height of efficiency. These targets pose technological challenges to build a large-capacity and cost-effective satellite system. This paper addresses advanced payload concepts and system architecture along with the trade-off analysis on design parameters in proposing a new generation satellite system capable of extending the reach of the Indian broadband structure to individual users, educational and medical institutions and enterprises for interactive services. This will be a strategic step in the evolution of the INSAT system to employ advanced technology to touch every human face of our population.

  4. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    NASA Technical Reports Server (NTRS)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.

  5. A high-radix CORDIC architecture dedicated to compute the Gaussian potential function in neural networks

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe H.; Meyer-Baese, Anke; Ramirez, Javier; Garcia, Antonio

    2003-08-01

    In this paper, a new parallel hardware architecture dedicated to computing the Gaussian Potential Function is proposed. This function is commonly utilized in neural radial basis classifiers for pattern recognition, as described by Lee; Girosi and Poggio; and Musavi et al. Attention is confined to a simplified Gaussian Potential Function which processes uncorrelated features. The operations of most interest in the Gaussian potential function are the exponential and the square function. Our hardware computes the exponential function and its exponent at the same time. The contributions of all features to the exponent are computed in parallel. This parallelism reduces the computational delay in the output function, and the duration does not depend on the number of features processed. Software and hardware case studies are presented to evaluate the new CORDIC architecture.
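
    For a simplified Gaussian potential function over uncorrelated features, the standard form lets every per-feature exponent contribution be evaluated in parallel before a single exponential, mirroring the parallelism described above; a brief numpy sketch (all parameter values invented):

        # Gaussian potential function for uncorrelated features:
        # exp(-sum_i (x_i - c_i)^2 / (2 sigma_i^2)) for one prototype.
        import numpy as np

        def gaussian_potential(x, center, sigma):
            contrib = (x - center) ** 2 / (2.0 * sigma ** 2)  # parallel per feature
            return np.exp(-contrib.sum())                     # single exponential

        x = np.array([0.2, 1.1, -0.5])
        print(gaussian_potential(x, np.zeros(3), np.ones(3)))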

  6. Architecture for event-driven real-time distributed computer systems

    SciTech Connect

    McDonald, J.E.

    1983-01-01

    The author describes a proposed preliminary system design that includes hardware and software for real-time distributed computer systems. This new system is appropriate as a digital avionics architecture or as a real-time multi-computer simulation system using a mixture of computers, mainframes to micros. The hardware contains a network that employs high-speed serial data transmission concepts in emulating a multicomputer shared memory system. The distributed multicomputer system then capitalizes on the attributes of the hardware by structuring the real-time software as the data-driven input-output system. The real-time software executes only on demand and not synchronously as in conventional real-time systems. Background information concerning multi-computer systems using serial and parallel data transmission networks is given. This information supports the design rationale of the proposed hardware system which is basically a technology blend of conventional serial and parallel transmission schemes. 2 references.

  7. A pipelined IC architecture for radon transform computations in a multiprocessor array

    SciTech Connect

    Agi, I.; Hurst, P.J.; Current, K.W. . Dept. of Electrical Engineering and Computer Science)

    1990-05-25

    The amount of data generated by CT scanners is enormous, making the reconstruction operation slow, especially for 3-D and limited-data scans requiring iterative algorithms. The Radon transform and its inverse, commonly used for CT image reconstruction from projections, are computationally burdensome for today's single-processor computer architectures. If the processing times for the forward and inverse Radon transforms were comparatively small, a large set of new CT algorithms would become feasible, especially those for 3-D and iterative tomographic image reconstructions. In addition to image reconstruction, a fast "Radon Transform Computer" could be naturally applied in other areas of multidimensional signal processing including 2-D power spectrum estimation, modeling of human perception, Hough transforms, image representation, synthetic aperture radar processing, and others. A high speed processor for this operation is likely to motivate new algorithms for general multidimensional signal processing using the Radon transform. In the proposed workshop paper, we will first describe interpolation schemes useful in computation of the discrete Radon transform and backprojection and compare their errors and hardware complexities. We then will evaluate through statistical means the fixed-point number system required to accept and generate 12-bit input and output data with acceptable error using the linear interpolation scheme selected. These results set some of the requirements that must be met by our new VLSI chip architecture. Finally we will present a new unified architecture for a single-chip processor for computing both the forward Radon transform and backprojection at high data rates. 3 refs., 2 figs.
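
    In the spirit of the interpolation schemes discussed in this record (though not the chip's fixed-point implementation), a discrete Radon transform with linear interpolation can be sketched as follows; the test image and angles are invented:

        # Discrete Radon transform: for each angle, project pixels onto the
        # detector axis, splitting each sample between the two nearest bins
        # with linear-interpolation weights.
        import numpy as np

        def radon(img, angles):
            n = img.shape[0]
            c = (n - 1) / 2.0
            ys, xs = np.mgrid[0:n, 0:n] - c
            sino = np.zeros((len(angles), n))
            for k, th in enumerate(angles):
                # detector coordinate of every pixel for this view
                t = xs * np.cos(th) + ys * np.sin(th) + c
                i0 = np.floor(t).astype(int)
                w = t - i0                       # linear-interpolation weights
                for b, wt in ((i0, 1 - w), (i0 + 1, w)):
                    ok = (b >= 0) & (b < n)
                    np.add.at(sino[k], b[ok], (img * wt)[ok])
            return sino

        img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
        sino = radon(img, np.linspace(0, np.pi, 90, endpoint=False))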

  8. An Experiment in the Use of Computer-Based Education to Teach Energy Considerations in Architectural Design.

    ERIC Educational Resources Information Center

    Arumi, Francisco N.

    Computer programs capable of describing the thermal behavior of buildings are used to help architectural students understand environmental systems. The Numerical Simulation Laboratory at the Architectural School of the University of Texas at Austin was developed to provide the necessary software capable of simulating the energy transactions…

  9. Advanced Placement Computer Science with Pascal. Volume 2. Experimental Edition.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY.

    This curriculum guide presents 100 lessons for an advanced placement course on programming in Pascal. Some of the topics covered include arrays, sorting, strings, sets, records, computers in society, files, stacks, queues, linked lists, binary trees, searching, hashing, and chaining. Performance objectives, vocabulary, motivation, aim,…

  10. The Federal Government's Role in Advancing Computer Technology

    ERIC Educational Resources Information Center

    Information Hotline, 1978

    1978-01-01

    As part of the Federal Data Processing Reorganization Study submitted by the Science and Technology Team, the Federal Government's role in advancing and diffusing computer technology is discussed. Findings and conclusions assess the state-of-the-art in government and in industry, and five recommendations provide directions for government policy…

  11. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    ERIC Educational Resources Information Center

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  12. Advances in reversed field pinch theory and computation

    SciTech Connect

    Schnack, D.D.; Ho, Y.L.; Carreras, B.A.; Sidikman, K.; Craddock, G.G.; Mattor, N.; Nebel, R.A.; Prager, S.C.; Terry, P.W.; Zita, E.J.

    1992-12-31

    Advances in theory and computations related to the reversed field pinch (RFP) are presented. These are: (1) the effect of the dynamo on thermal transport; (2) a theory of ion heating due to dynamo fluctuations; (3) studies of active and passive feedback schemes for controlling dynamo fluctuations; and (4) an analytic model for coupled g-mode and rippling turbulence in the RFP edge.

  13. Advances in computational design and analysis of airbreathing propulsion systems

    NASA Technical Reports Server (NTRS)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  14. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1994-12-31

    The computational requirements for the design and manufacture of automotive components have increased dramatically with the drive to produce automobiles with three times the mileage. Automotive component design systems are becoming increasingly reliant on structural analysis, requiring larger and more complex analyses, more three-dimensional analyses, larger model sizes, and routine consideration of transient and non-linear effects. Such analyses must be performed rapidly to minimize delays in the design and development process, which drives the need for parallel computing. This paper briefly describes advanced computational research in superplastic forming and automotive crashworthiness.

  15. Characterization of Proxy Application Performance on Advanced Architectures. UMT2013, MCB, AMG2013

    SciTech Connect

    Howell, Louis H.; Gunney, Brian T.; Bhatele, Abhinav

    2015-10-09

    Three codes were tested at LLNL as part of a Tri-Lab effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. Teams from Sandia and Los Alamos tested proxy apps of their own. The focus in this report is on the LLNL codes UMT2013, MCB, and AMG2013. We present weak and strong MPI scaling results and studies of OpenMP efficiency on a large BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Results from three more speculative tests are also included: one that exploits NVRAM as extended memory, one that studies performance under a power bound, and one that illustrates the effects of changing the torus network mapping on BG/Q.

  16. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  17. Space architecture monograph series. Volume 4: Genesis 2: Advanced lunar outpost

    NASA Technical Reports Server (NTRS)

    Fieber, Joseph P.; Huebner-Moths, Janis; Paruleski, Kerry L.; Moore, Gary T. (Editor)

    1991-01-01

    This research and design study investigated advanced lunar habitats for astronauts and mission specialists on the Earth's moon. Design recommendations are based on environmental response to the lunar environment, human habitability (human factors and environmental behavior research), transportability (structural and materials systems with least mass), constructability (minimizing extravehicular time), construction dependability and resilience, and suitability for NASA lunar research missions in the 21st century. The recommended design uses lunar lava tubes, with construction being a combination of Space Station Freedom-derived hard modules and lightweight Kevlar-laminate inflatable structures. The proposed habitat includes research labs and a biotron, crew quarters and a crew support facility, mission control, a health maintenance facility, maintenance work areas, and areas for psychological retreat, privacy, and contemplation. Furniture, specialized equipment, and lighting are included in the analysis and design. Drawings include base master plans, construction sequencing, overall architectural configuration, detailed floor plans, sections and axonometrics, with interior perspectives.

  18. A system architecture for an advanced Canadian wideband mobile satellite system

    NASA Technical Reports Server (NTRS)

    Takats, P.; Keelty, M.; Moody, H.

    1993-01-01

    In this paper, the system architecture for an advanced Canadian Ka-band geostationary mobile satellite system is described, utilizing hopping spot beams to support a 256 kbps wideband service for both N-ISDN and packet-switched interconnectivity to small briefcase-size portable and mobile terminals. An assessment is given of the technical feasibility of the satellite payload and terminal design in the post-year-2000 timeframe. The satellite payload includes regeneration and on-board switching to permit single-hop interconnectivity between mobile terminals. The mobile terminal requires antenna tracking and platform stabilization to ensure acquisition of the satellite signal. The potential user applications targeted for this wideband service include: home-office, multimedia, desk-top (PC) videoconferencing, digital audio broadcasting, and single- and multi-user personal communications.

  19. [Activities of Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  20. Developing a New Framework for Integration and Teaching of Computer Aided Architectural Design (CAAD) in Nigerian Schools of Architecture

    ERIC Educational Resources Information Center

    Uwakonye, Obioha; Alagbe, Oluwole; Oluwatayo, Adedapo; Alagbe, Taiye; Alalade, Gbenga

    2015-01-01

    As a result of globalization of digital technology, intellectual discourse on what constitutes the basic body of architectural knowledge to be imparted to future professionals has been on the increase. This digital revolution has brought to the fore the need to review the already overloaded architectural education curriculum of Nigerian schools of…

  1. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, least dissipative computation algorithms as well as high quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
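
    As one standard example of the external boundary treatments reviewed (not necessarily the formulation used in the paper), a 2-D far-field radiation condition for an outgoing pressure disturbance in a quiescent medium with sound speed c can be written as

        \left( \frac{1}{c}\frac{\partial}{\partial t}
             + \frac{\partial}{\partial r}
             + \frac{1}{2r} \right) p = 0,

    which exactly annihilates the outgoing cylindrical far-field form p ~ f(r - ct)/sqrt(r); the 1/(2r) term accounts for the geometric decay of the wave amplitude with distance.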

  2. Inter-computer communication architecture for a mixed redundancy distributed system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll demonstrate the robustness of the system.

  3. Computational architecture for full-color holographic displays based on anisotropic leaky-mode modulators

    NASA Astrophysics Data System (ADS)

    Jolly, Sundeep; Smalley, Daniel; Barabas, James; Bove, V. Michael

    2014-02-01

    The MIT Mark IV holographic display system employs a novel anisotropic leaky-mode spatial light modulator that allows for the simultaneous and superimposed modulation of red, green, and blue light via wavelength-division multiplexing. This WDM-based scheme for full-color display requires that incoming video signals containing holographic fringe information be composed of non-overlapping spectral bands that fall within the available 200 MHz output bandwidth of commercial GPUs. These bands correspond to independent color channels in the display output and are appropriately band-limited and centered to match the multiplexed passbands and center frequencies in the frequency response of the mode-coupling device. The computational architecture presented in this paper involves the computation of holographic fringe patterns for each color channel and their summation to generate a single video signal for input to the display. In total, 18 such input signals, each containing holographic fringe information for 26 horizontal-parallax-only holographic lines, are generated via three dual-head GPUs for a total of 468 holographic lines in the display output. We present a general scheme for full-color CGH computation for input to Mark IV and furthermore depict the adaptation of the diffraction-specific coherent panoramagram approach to fringe computation for the Mark IV architecture.
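
    The frequency-division assembly described above can be mimicked in a few lines: each color channel's fringe signal is band-limited at baseband, shifted onto its own carrier, and the channels are summed into one composite signal. The sample rate, carriers, and bandwidths below are illustrative, not the Mark IV values:

        # Three band-limited channels on non-overlapping carriers, summed into
        # a single composite signal (a toy analogue of the WDM video assembly).
        import numpy as np

        fs = 400e6                       # sample rate of the synthetic signal
        t = np.arange(4096) / fs
        rng = np.random.default_rng(2)

        def channel(center_hz, bw_hz):
            """Band-limited stand-in for one color channel's fringe signal."""
            spec = np.fft.rfft(rng.standard_normal(t.size))
            f = np.fft.rfftfreq(t.size, 1 / fs)
            spec[f > bw_hz / 2] = 0      # band-limit the baseband fringe
            baseband = np.fft.irfft(spec, t.size)
            return baseband * np.cos(2 * np.pi * center_hz * t)  # shift to band

        # Bands 20-40, 50-70, and 80-100 MHz do not overlap.
        composite = channel(30e6, 20e6) + channel(60e6, 20e6) + channel(90e6, 20e6)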

  4. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule.

    SciTech Connect

    Debenedictis, Erik

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  5. Ultra-Fast Data-Mining Hardware Architecture Based on Stochastic Computing

    PubMed Central

    Oliver, Antoni; Alomar, Miquel L.

    2015-01-01

    Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists in the hardware implementation of a parallel architecture implementing a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area. PMID:25955274

  6. Ultra-fast data-mining hardware architecture based on stochastic computing.

    PubMed

    Morro, Antoni; Canals, Vincent; Oliver, Antoni; Alomar, Miquel L; Rossello, Josep L

    2015-01-01

    Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society. In this work we review the application of stochastic computing to probabilistic-based pattern-recognition analysis of huge database sets. The proposed technique consists in the hardware implementation of a parallel architecture implementing a similarity search of data with respect to different pre-stored categories. We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system. The proposed architecture speeds up the screening process of huge databases by a factor of 7 when compared to a conventional digital implementation using the same hardware area. PMID:25955274
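
    The principle behind both records above is that stochastic computing encodes a probability p as a random bit stream with P(bit = 1) = p, so a single AND gate multiplies two values; a minimal software demonstration (stream length invented):

        # Stochastic multiplication: ANDing two Bernoulli bit streams yields a
        # stream whose mean approximates the product of the encoded values.
        import numpy as np

        rng = np.random.default_rng(3)
        N = 100_000                       # stream length trades accuracy for time

        def to_stream(p):
            return rng.random(N) < p      # Bernoulli bit stream encoding p

        a, b = 0.8, 0.3
        product_stream = to_stream(a) & to_stream(b)   # AND gate = multiplier
        print(product_stream.mean())                   # ~0.24 = 0.8 * 0.3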

  7. The computational structural mechanics testbed architecture. Volume 4: The global-database manager GAL-DBM

    NASA Technical Reports Server (NTRS)

    Wright, Mary A.; Regelbrugge, Marc E.; Felippa, Carlos A.

    1989-01-01

    This is the fourth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 4 describes the nominal-record data management component of the NICE software. It is intended for all users.

  8. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014), whose motto this year was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  9. Apparatuses and Methods for Producing Runtime Architectures of Computer Program Modules

    NASA Technical Reports Server (NTRS)

    Abi-Antoun, Marwan Elia (Inventor); Aldrich, Jonathan Erik (Inventor)

    2013-01-01

    Apparatuses and methods for producing run-time architectures of computer program modules. One embodiment includes creating an abstract graph from the computer program module and from containment information corresponding to the computer program module, wherein the abstract graph has nodes including types and objects, and wherein the abstract graph relates an object to a type, and wherein for a specific object the abstract graph relates the specific object to a type containing the specific object; and creating a runtime graph from the abstract graph, wherein the runtime graph is a representation of the true runtime object graph, wherein the runtime graph represents containment information such that, for a specific object, the runtime graph relates the specific object to another object that contains the specific object.

  10. Development of a Computer Architecture to Support the Optical Plume Anomaly Detection (OPAD) System

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1996-01-01

    The NASA OPAD spectrometer system relies heavily on extensive software which repetitively extracts spectral information from the engine plume and reports the amounts of metals which are present in the plume. The development of this software is at a sufficiently advanced stage where it can be used in actual engine tests to provide valuable data on engine operation and health. This activity will continue and, in addition, the OPAD system is planned to be used in flight aboard space vehicles. The two implementations, test-stand and in-flight, may have some differing requirements. For example, the data stored during a test-stand experiment are much more extensive than in the in-flight case. In both cases though, the majority of the requirements are similar. New data from the spectrograph is generated at a rate of once every 0.5 sec or faster. All processing must be completed within this period of time to maintain real-time performance. Every 0.5 sec, the OPAD system must report the amounts of specific metals within the engine plume, given the spectral data. At present, the software in the OPAD system performs this function by solving the inverse problem. It uses powerful physics-based computational models (the SPECTRA code), which receive amounts of metals as inputs to produce the spectral data that would have been observed, had the same metal amounts been present in the engine plume. During the experiment, for every spectrum that is observed, an initial approximation is performed using neural networks to establish an initial metal composition which approximates as accurately as possible the real one. Then, using optimization techniques, the SPECTRA code is repetitively used to produce a fit to the data, by adjusting the metal input amounts until the produced spectrum matches the observed one to within a given level of tolerance. This iterative solution to the original problem of determining the metal composition in the plume requires a relatively long period of time
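
    A minimal sketch of the fit loop described above, with a made-up linear stand-in for the SPECTRA forward model and scipy's least-squares optimizer in place of the article's optimization technique (the signature matrix, metal count, and starting guess are all illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        # Stand-in for the SPECTRA physics code: maps metal amounts to a
        # spectrum. A made-up linear model; the real forward model is
        # nonlinear, physics-based, and far more expensive.
        rng = np.random.default_rng(1)
        signatures = rng.random((200, 3))      # hypothetical spectral signatures

        def forward_model(metal_amounts):
            return signatures @ metal_amounts

        observed = forward_model(np.array([0.5, 1.2, 0.1]))   # synthetic "observation"

        x0 = np.array([0.4, 1.0, 0.2])   # stand-in for the neural-net initial guess

        fit = least_squares(lambda x: forward_model(x) - observed, x0,
                            bounds=(0.0, np.inf))
        print(fit.x)                     # recovered metal amounts

    The 0.5 s real-time budget is what makes the neural-net warm start essential: the fewer forward-model evaluations the optimizer needs, the sooner the composition report is ready.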

  11. Advanced Laser Architecture for Two-Step Laser Tandem Mass Spectrometer

    NASA Technical Reports Server (NTRS)

    Fahey, Molly E.; Li, Steven X.; Yu, Anthony W.; Getty, Stephanie A.

    2016-01-01

    Future astrobiology missions will focus on planets with significant astrochemical or potential astrobiological features, such as small, primitive bodies and the icy moons of the outer planets that may host diverse organic compounds. These missions require advanced instrument techniques to fully and unambiguously characterize the composition of surface and dust materials. Laser desorption/ionization mass spectrometry (LDMS) is an emerging instrument technology for in situ mass analysis of non-volatile sample composition. A recent Goddard LDMS advancement is the two-step laser tandem mass spectrometer (L2MS) instrument to address the need for future flight instrumentation to deconvolve complex organic signatures. The L2MS prototype uses a resonance-enhanced multi-photon laser ionization mechanism to selectively detect aromatic species from a more complex sample. By neglecting the aliphatic and inorganic mineral signatures in the two-step mass spectrum, the L2MS approach can provide both mass assignments and clues to structural information for an in situ investigation of non-volatile sample composition. In this paper we describe our development effort on a new laser architecture, based on the previously flown Lunar Orbiter Laser Altimeter (LOLA) laser transmitter, for the L2MS instrument. The laser provides two discrete mid-infrared wavelengths (2.8 μm and 3.4 μm) using monolithic optical parametric oscillators and an ultraviolet (UV) wavelength (266 nm) on a single laser bench with a straightforward development path toward flight readiness.

  12. Designing and Operating Through Compromise: Architectural Analysis of CKMS for the Advanced Metering Infrastructure

    SciTech Connect

    Duren, Mike; Aldridge, Hal; Abercrombie, Robert K; Sheldon, Frederick T

    2013-01-01

    Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics-based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise complex control systems of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a means for a secure cryptographic key management system (CKMS) no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element to detecting, surviving, and recovering from attack.

  13. Parallel processing algorithms for hydrocodes on a computer with MIMD architecture (DENELCOR's HEP)

    SciTech Connect

    Hicks, D.L.

    1983-11-01

    In real-time simulation/prediction of complex systems such as water-cooled nuclear reactors, if reactor operators had fast simulators/predictors to check the consequences of their operations before implementing them, events such as the incident at Three Mile Island might be avoided. However, existing simulator/predictors such as RELAP run slower than real time on serial computers. It appears that the only way to overcome the barrier to higher computing rates is to use computers with architectures that allow concurrent computations or parallel processing. The computer architecture with the greatest degree of parallelism is labeled Multiple Instruction Stream, Multiple Data Stream (MIMD). An example of a machine of this type is the HEP computer by DENELCOR. It appears that hydrocodes are very well suited for parallelization on the HEP. It is a straightforward exercise to parallelize explicit, one-dimensional Lagrangian hydrocodes in a zone-by-zone parallelization. Similarly, implicit schemes can be parallelized in a zone-by-zone fashion via an a priori, symbolic inversion of the tridiagonal matrix that arises in an implicit scheme. These techniques are extended to Eulerian hydrocodes by using Harlow's rezone technique. The extension from single-phase Eulerian to two-phase Eulerian is straightforward. This step-by-step extension leads to hydrocodes with zone-by-zone parallelization that are capable of two-phase flow simulation. Extensions to two and three spatial dimensions can be achieved by operator splitting. It appears that a zone-by-zone parallelization is the best way to utilize the capabilities of an MIMD machine. 40 references.
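
    A minimal sketch of why explicit Lagrangian zones parallelize so cleanly (an illustrative gamma-law gas with no artificial viscosity; vectorized array syntax stands in for the HEP's concurrent instruction streams, and all numbers are assumptions):

        import numpy as np

        gamma = 1.4
        x = np.linspace(0.0, 1.0, 101)       # node positions
        u = np.zeros_like(x)                 # node velocities
        m = np.full(100, 0.01)               # zone masses, fixed in a Lagrangian frame
        e = np.where(np.arange(100) < 50, 2.5, 1.0)   # zone specific internal energy

        def step(x, u, e, dt=1e-4):
            V = np.diff(x)                   # zone volumes
            rho = m / V
            p = (gamma - 1.0) * rho * e
            # Each interior node accelerates from the pressure difference of
            # its two adjacent zones only, so every node update is an
            # independent parallel task; likewise each zone's energy update.
            u[1:-1] += dt * (p[:-1] - p[1:]) / (0.5 * (m[:-1] + m[1:]))
            x += dt * u
            e -= dt * p * np.diff(u) / m     # pdV work, zone by zone
            return x, u, e

        for _ in range(100):
            x, u, e = step(x, u, e)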

  14. A connectionist architecture for view-independent grip-aperture computation.

    PubMed

    Prevete, Roberto; Tessitore, Giovanni; Santoro, Matteo; Catanzariti, Ezio

    2008-08-15

    This paper addresses the problem of extracting view-invariant visual features for the recognition of object-directed actions and introduces a computational model of how these visual features are processed in the brain. In particular, in the test-bed setting of reach-to-grasp actions, grip aperture is identified as a good candidate for inclusion into a parsimonious set of hand high-level features describing overall hand movement during reach-to-grasp actions. The computational model NeGOI (neural network architecture for measuring grip aperture in an observer-independent way) for extracting grip aperture in a view-independent fashion was developed on the basis of functional hypotheses about cortical areas that are involved in visual processing. An assumption built into NeGOI is that grip aperture can be measured from the superposition of a small number of prototypical hand shapes corresponding to predefined grip-aperture sizes. The key idea underlying the NeGOI model is to introduce view-independent units (VIP units) that are selective for prototypical hand shapes, and to integrate the output of VIP units in order to compute grip aperture. The distinguishing traits of the NEGOI architecture are discussed together with results of tests concerning its view-independence and grip-aperture recognition properties. The overall functional organization of NEGOI model is shown to be coherent with current functional models of the ventral visual stream, up to and including temporal area STS. Finally, the functional role of the NeGOI model is examined from the perspective of a biologically plausible architecture which provides a parsimonious set of high-level and view-independent visual features as input to mirror systems. PMID:18538746

  15. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event and data driven environment. A large grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamical location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
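
    The redundant-execution-with-voting idea can be sketched in a few lines (illustrative only; MAX votes on dataflow tokens rather than Python values):

        from collections import Counter

        def vote(results):
            """Software majority vote over redundant executions of a
            critical task; a single faulty replica is masked."""
            value, count = Counter(results).most_common(1)[0]
            if count <= len(results) // 2:
                raise RuntimeError("no majority; fault cannot be masked")
            return value

        # Three redundant executions of a critical task, one faulty replica.
        print(vote([42, 42, 41]))   # -> 42, the faulted copy is outvoted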

  16. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  17. Projection-based geometrical feature extraction for computer vision: Algorithms in pipeline architectures

    SciTech Connect

    Sanz, J.L.; Dinstein, I.

    1987-01-01

    In this paper, some image transforms and features such as projections along linear patterns, convex hull approximations, the Hough transform for line detection, diameter, moments, and principal components will be considered. Specifically, we present algorithms for computing these features which are suitable for implementation in image analysis pipeline architectures. In particular, random access memories and other dedicated hardware components which may be found in the implementation of classical techniques are no longer needed in our algorithms. The effectiveness of our approach is demonstrated by running some of the new algorithms in conventional short pipelines for image analysis.
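
    A minimal sketch of the projection primitive (the test image, bin count, and histogram range below are illustrative, not the paper's parameters):

        import numpy as np

        def projections(img, angles):
            """Project a binary image along linear patterns: for each angle,
            histogram the 'on' pixels over signed distance to the origin.
            Streaming histograms like this suit a pipeline architecture
            better than the random-access accumulator of a classical Hough
            transform."""
            ys, xs = np.nonzero(img)
            out = []
            for t in angles:
                r = xs * np.cos(t) + ys * np.sin(t)
                hist, _ = np.histogram(r, bins=64, range=(-90.0, 90.0))
                out.append(hist)
            return np.array(out)

        img = np.zeros((64, 64), dtype=bool)
        img[np.arange(64), np.arange(64)] = True    # a diagonal line (y = x)
        angles = np.linspace(0.0, np.pi, 90, endpoint=False)
        proj = projections(img, angles)
        a_idx, r_idx = np.unravel_index(proj.argmax(), proj.shape)
        print(np.degrees(angles[a_idx]))            # ~135 deg, normal to the line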

  18. Computation studies into architecture and energy transfer properties of photosynthetic units from filamentous anoxygenic phototrophs

    SciTech Connect

    Linnanto, Juha Matti; Freiberg, Arvi

    2014-10-06

    We have used different computational methods to study structural architecture, and light-harvesting and energy transfer properties of the photosynthetic unit of filamentous anoxygenic phototrophs. Due to the huge number of atoms in the photosynthetic unit, a combination of atomistic and coarse methods was used for electronic structure calculations. The calculations reveal that the light energy absorbed by the peripheral chlorosome antenna complex transfers efficiently via the baseplate and the core B808–866 antenna complexes to the reaction center complex, in general agreement with the present understanding of this complex system.

  19. Efficient computer architecture for the realization of realtime linear systems with intelligent processing of large sparse matrices

    SciTech Connect

    Chae, S.H.

    1988-01-01

    This dissertation describes an intelligent rule-based iterative parallel algorithm for solving randomly distributed large sparse linear systems of equations, as well as an efficient parallel-processing computer architecture for implementing the algorithm. Implemented with the Jacobi iterative method, the intelligent rule-based algorithm reduces the parallel execution time by reducing the time of each individual inner-product operation. A static dataflow architecture is proposed for implementing the intelligent rule-based iterative parallel algorithm. The dataflow computer architecture has the capability to support the parallelism exploited in the algorithm and to execute the algorithm asynchronously. The proposed computer architecture consists of a main processor, several control processors, scalar slave processors, and pipelined slave processors. The control processors share with the main processor the heavy burden of allocating operation packets and of synchronization for parallel processing.
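
    A minimal sketch of the underlying Jacobi sweep on a sparse system, using compressed storage so each inner product skips stored zeros (system size, density, and convergence threshold are assumptions):

        import numpy as np
        from scipy.sparse import random as sprandom, eye

        # Diagonally dominant random sparse system; CSR storage means each
        # row's inner product touches only its nonzero entries, which is the
        # per-step cost the rule-based scheme shortens further.
        n = 500
        A = (sprandom(n, n, density=0.01, random_state=0) + 10.0 * eye(n)).tocsr()
        b = np.ones(n)
        D = A.diagonal()
        x = np.zeros(n)
        for sweep in range(100):
            # x_new_i = (b_i - sum_{j != i} a_ij x_j) / a_ii; rows independent,
            # so a whole sweep can run in parallel.
            x_new = (b - (A @ x - D * x)) / D
            if np.max(np.abs(x_new - x)) < 1e-10:
                break
            x = x_new
        print(sweep, np.max(np.abs(A @ x - b)))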

  20. Architecture for an advanced biomedical collaboration domain for the European paediatric cancer research community (ABCD-4-E).

    PubMed

    Nitzlnader, Michael; Falgenhauer, Markus; Gossy, Christian; Schreier, Günter

    2015-01-01

    Today, progress in biomedical research often depends on large, interdisciplinary research projects and tailored information and communication technology (ICT) support. In the context of the European Network for Cancer Research in Children and Adolescents (ENCCA) project the exchange of data between data source (Source Domain) and data consumer (Consumer Domain) systems in a distributed computing environment needs to be facilitated. This work presents the requirements and the corresponding solution architecture of the Advanced Biomedical Collaboration Domain for Europe (ABCD-4-E). The proposed concept utilises public as well as private cloud systems, the Integrating the Healthcare Enterprise (IHE) framework and web-based applications to provide the core capabilities in accordance with privacy and security needs. The utility of crucial parts of the concept was evaluated by prototypic implementation. A discussion of the design indicates that the requirements of ENCCA are fully met. A whole system demonstration is currently being prepared to verify that ABCD-4-E has the potential to evolve into a domain-bridging collaboration platform in the future. PMID:26063273

  1. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  2. Soft computing in design and manufacturing of advanced materials

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.

  3. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    with industry and virtual prototyping. New instruments of collaboration will include institutes and centers while summer schools, workshops and outreach will invite new talent and expertise. Computational science adds new dimensions to science and its practice. Disciplines of fusion, accelerator science, and combustion are poised to blur the boundaries between pure and applied science. As we open the door into FY2006 we shall see a landscape of new scientific challenges: in biology, chemistry, materials, and astrophysics to name a few. The enabling technologies of SciDAC have been transformational as drivers of change. Planning for major new software systems assumes a base line employing Common Component Architectures and this has become a household word for new software projects. While grid algorithms and mesh refinement software have transformed applications software, data management and visualization have transformed our understanding of science from data. The Gordon Bell prize now seems to be dominated by computational science and solvers developed by TOPS ISIC. The priorities of the Office of Science in the Department of Energy are clear. The 20 year facilities plan is driven by new science. High performance computing is placed amongst the two highest priorities. Moore's law says that by the end of the next cycle of SciDAC we shall have peta-flop computers. The challenges of petascale computing are enormous. These and the associated computational science are the highest priorities for computing within the Office of Science. Our effort in Leadership Class computing is just a first step towards this goal. Clearly, computational science at this scale will face enormous challenges and possibilities. Performance evaluation and prediction will be critical to unraveling the needed software technologies. We must not lose sight of our overarching goal—that of scientific discovery. Science does not stand still and the landscape of science discovery and computing holds

  4. Simulation of Si:P spin-based quantum computer architecture

    SciTech Connect

    Chang Yiachung; Fang Angbo

    2008-11-07

    We present realistic simulations of single and double phosphorus donors in a silicon-based quantum computer design, solving a valley-orbit coupled effective-mass equation describing phosphorus donors in a strained silicon quantum well (QW). Using a generalized unrestricted Hartree-Fock method, we solve the two-electron effective-mass equation with quantum well confinement and realistic gate potentials. The effects of QW width, gate voltages, donor separation, and donor position shift on the lowest singlet and triplet energies and their charge distributions for a neighboring donor pair in the quantum computer (QC) architecture are analyzed. The gate tunability is defined and evaluated for a typical QC design. Estimates are obtained for the duration of spin half-swap gate operations.

  5. Computational architecture for image processing on a small unmanned ground vehicle

    NASA Astrophysics Data System (ADS)

    Ho, Sean; Nguyen, Hung

    2010-08-01

    Man-portable Unmanned Ground Vehicles (UGVs) have been fielded on the battlefield with limited computing power. This limitation constrains their use primarily to teleoperation control mode for clearing areas and bomb defusing. In order to extend their capability to include the reconnaissance and surveillance missions of dismounted soldiers, a separate processing payload is desired. This paper presents a processing architecture and the design details on the payload module that enables the PackBot to perform sophisticated, real-time image processing algorithms using data collected from its onboard imaging sensors including LADAR, IMU, visible, IR, stereo, and the Ladybug spherical cameras. The entire payload is constructed from currently available Commercial off-the-shelf (COTS) components including an Intel multi-core CPU and a Nvidia GPU. The result of this work enables a small UGV to perform computationally expensive image processing tasks that once were only feasible on a large workstation.

  6. VLSI architectures for computing multiplications and inverses in GF(2^m).

    PubMed

    Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S

    1985-08-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation. PMID:11539660
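
    The two properties the pipeline exploits can be sketched in software (a sketch only: for brevity the arithmetic below uses a polynomial basis for GF(2^8), the AES field, rather than a normal basis, and square-and-multiply in place of the paper's hardware pipeline):

        # (1) In a normal basis, squaring is just a cyclic shift of coordinates,
        #     which is why the pipeline stays regular.
        # (2) Inversion reduces to repeated squaring and multiplication, since
        #     a^(-1) = a^(2^m - 2) in GF(2^m) by Fermat's little theorem.

        def gf_mul(a, b, poly=0x11B, m=8):
            """Carry-less multiply mod x^8+x^4+x^3+x+1 (polynomial basis)."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a >> m:
                    a ^= poly
                b >>= 1
            return r

        def gf_inv(a, m=8):
            """Square-and-multiply exponentiation to the power 2^m - 2."""
            result, exp = 1, (1 << m) - 2
            while exp:
                if exp & 1:
                    result = gf_mul(result, a)
                a = gf_mul(a, a)    # squaring: a cyclic shift in a normal basis
                exp >>= 1
            return result

        a = 0x53
        print(hex(gf_inv(a)), hex(gf_mul(a, gf_inv(a))))   # -> 0xca 0x1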

  7. VLSI architectures for computing multiplications and inverses in GF(2^m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.

    1983-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  8. VLSI architectures for computing multiplications and inverses in GF(2^m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.

    1985-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  9. Introduction to the special section on computer architectures and parallel algorithms for PAMI

    SciTech Connect

    Dyer, C.R.

    1989-03-01

    The topic of multiprocessor computer architectures and parallel algorithms for computer vision and related applications is not new, but researchers are now addressing both a wider scope of issues and emphasizing system integration. Recently, a wide variety of different systems have been designed, built, and tested on a range of image understanding tasks. An important goal beginning to be addressed is how to achieve high performance when a complete, integrated set of component vision processes are combined. The papers in this special section describe a number of approaches to improving the performance of vision architectures. Each paper uses a different model of parallel processing. The first four papers describe machines or chips which have been built, each exhibiting certain advantages for vision. One important distinction between these approaches is in terms of the number of processors used, defining the granularity of parallel processing. The first three papers also evaluate the performance of their systems on a suite of vision tasks covering several image representations and processing requirements.

  10. Peer-to-peer architectures for exascale computing : LDRD final report.

    SciTech Connect

    Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Donald W.

    2010-09-01

    The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2
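
    The failure-rate arithmetic driving this argument is worth a back-of-envelope check (the per-node MTBF below is an assumption; more optimistic component reliabilities land in the report's "few times per day" range):

        # System MTBF shrinks inversely with node count when failures are
        # independent: MTBF_system = MTBF_node / N.
        node_mtbf_hours = 5 * 365 * 24      # assume each node fails every ~5 years
        for n_nodes in (10**5, 10**6, 10**7):
            system_mtbf_hours = node_mtbf_hours / n_nodes
            print(f"{n_nodes:>9} nodes -> system MTBF ~ "
                  f"{system_mtbf_hours * 60:7.1f} minutes")
        # At a million nodes a failure arrives every few minutes, which is why
        # fault-oblivious designs are proposed over checkpoint/restart.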

  11. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  12. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved by a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  13. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    SciTech Connect

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques and structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Explosive and structure modeling in two different codes make it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method comparable to Eulerian, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
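
    The gridless character of SPH that makes this coupling possible comes down to kernel-weighted sums over particles; a minimal 1D density estimate follows (standard cubic-spline kernel; the particle count and smoothing length are illustrative):

        import numpy as np

        def w_cubic(r, h):
            """Cubic-spline SPH kernel in 1D (normalization 2/(3h))."""
            q = np.abs(r) / h
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return (2.0 / (3.0 * h)) * w

        # Gridless density estimate: each particle's density is a
        # kernel-weighted sum over its neighbors; no Eulerian mesh is needed,
        # which is why SPH embeds cleanly in a Lagrangian structural code.
        x = np.linspace(0.0, 1.0, 51)        # particle positions
        m = np.full_like(x, 1.0 / 51)        # particle masses
        h = 0.04                             # smoothing length
        rho = np.array([np.sum(m * w_cubic(xi - x, h)) for xi in x])
        print(rho[25])                       # ~1.0 in the interior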

  14. In-situ sensor monitoring of resin film infusion of advanced fiber architecture preforms

    SciTech Connect

    Kranbuehl, D.E.; Hood, D.; Rogozinski, J.

    1995-12-01

    Resin transfer molding (RTM) of advanced fiber architecture stitched preforms is being developed as a smart cost-effective manufacturing technique for fabricating damage tolerant composite structures with geometrically complex reinforcements. Dry textile preforms are infiltrated with resin and cured in a single step process, thus eliminating separate prepreg manufacture and ply-by-ply lay-up. The number of parameters that must be controlled during infiltration and cure make trial-and-error methods of process cycle optimization extremely inefficient. In situ cure monitoring sensors and an analytical processing model are a superior alternative for the determination of optimum processing cycles, quality assurance, and automated process control. Resin transfer molding experiments have been conducted in a manufacturing plant with a reactive epoxy resin and carbon fabric preforms. Frequency dependent electromagnetic sensing (FDEMS) was used to monitor resin position, viscosity, and degree of cure in situ in the mold during the Resin Transfer Molding infiltration and cure process. A science based multi-dimensional model of Resin Transfer Molding (RTM) was used to predict the infiltration behavior, as well as viscosity and degree of cure as the resin flows and cures in the dry textile preform.

  15. A novel sputtered Pd mesh architecture as an advanced electrocatalyst for highly efficient hydrogen production

    NASA Astrophysics Data System (ADS)

    de Lucas-Consuegra, Antonio; de la Osa, Ana R.; Calcerrada, Ana B.; Linares, José J.; Horwat, David

    2016-07-01

    This study reports the preparation, characterization and testing of a sputtered Pd mesh-like anode as an advanced electrocatalyst for H2 production from alkaline ethanol solutions in an Alkaline Membrane Electrolyzer (AEM). The Pd anodic catalyst is prepared by the magnetron sputtering technique on a microfiber carbon paper support. Scanning Electron Microscopy images reveal that this preparation technique covers the surface of the carbon microfibers exposed to the Pd target, leading to a continuous network that also maintains part of the original carbon paper macroporosity. This novel, organic-binder-free anodic architecture exhibits excellent electrochemical performance, with a maximum current density of 700 mA cm^-2 at 1.3 V and, concomitantly, a large H2 production rate with low energy requirements compared to water electrolysis. Potassium hydroxide emerges as the best electrolyte, whereas temperature exerts the expected promotional effect up to 90 °C. On the other hand, a 1 mol L^-1 ethanol solution is enough to guarantee an efficient fuel supply without any mass transfer limitation. The proposed system also remains stable over 150 h of operation across five consecutive cycles, producing highly pure H2 (99.999%) at the cathode and potassium acetate as the main anodic product.

  16. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  17. Computation of the tip vortex flowfield for advanced aircraft propellers

    NASA Technical Reports Server (NTRS)

    Tsai, Tommy M.; Dejong, Frederick J.; Levy, Ralph

    1988-01-01

    The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).

  18. Architecture for access to a compute intensive image mosaic service in the NVO

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Curkendall, David; Good, John C.; Jacob, Joseph C.; Katz, Daniel S.; Kong, Mihseh; Monkewitz, Serge; Moore, Reagan; Prince, Thomas A.; Williams, Roy E.

    2002-12-01

    The National Virtual Observatory (NVO) will provide on-demand access to data collections, data fusion services and compute intensive applications. The paper describes the development of a framework that will support two key aspects of these objectives: a compute engine that will deliver custom image mosaics, and a "request management system," based on an e-business applications server, for job processing, including monitoring, failover and status reporting. We will develop this request management system to support a diverse range of astronomical requests, including services scaled to operate on the emerging computational grid infrastructure. Data requests will be made through existing portals to demonstrate the system: the NASA/IPAC Extragalactic Database (NED), the On-Line Archive Science Information Services (OASIS) at the NASA/IPAC Infrared Science Archive (IRSA); the Virtual Sky service at Caltech's Center for Advanced Computing Research (CACR), and the yourSky mosaic server at the Jet Propulsion Laboratory (JPL).

  19. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
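
    A minimal weighted-sum sketch of the scoring step (the alternative names, attribute values, and weights are invented for illustration; the report's full method also enforces the availability and integrity constraints before scoring):

        # Power [W], weight [kg], cost [$k] for each candidate architecture.
        alternatives = {
            "dual-channel": (55.0, 4.2, 310.0),
            "triplex":      (80.0, 6.0, 420.0),
            "braided-ring": (60.0, 4.8, 350.0),
        }
        weights = (0.4, 0.3, 0.3)   # relative importance of the three criteria

        def score(attrs):
            # Lower is better for all three criteria, so normalize each
            # attribute by the best (smallest) value among the alternatives.
            best = [min(a[i] for a in alternatives.values()) for i in range(3)]
            return sum(w * best[i] / attrs[i] for i, w in enumerate(weights))

        ranked = sorted(alternatives, key=lambda k: score(alternatives[k]),
                        reverse=True)
        print(ranked)   # best power/weight/cost balance first

    Re-running such a scoring loop under different weight vectors is what makes it possible to search for an alternative that dominates regardless of the relative importance assigned to cost.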

  20. High resolution computed tomography of advanced composite and ceramic materials

    NASA Technical Reports Server (NTRS)

    Yancey, R. N.; Klima, S. J.

    1991-01-01

    Advanced composite and ceramic materials are being developed for use in many new defense and commercial applications. In order to achieve the desired mechanical properties of these materials, the structural elements must be carefully analyzed and engineered. A study was conducted to evaluate the use of high resolution computed tomography (CT) as a macrostructural analysis tool for advanced composite and ceramic materials. Several samples were scanned using a laboratory high resolution CT scanner. Samples were also destructively analyzed at the locations of the scans and the nondestructive and destructive results were compared. The study provides useful information outlining the strengths and limitations of this technique and the prospects for further research in this area.

  1. Whole-genome CNV analysis: advances in computational approaches

    PubMed Central

    Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.

    2015-01-01

    Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519
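
    As one concrete example of the read-depth principle shared by many of the reviewed tools (simulated coverage; the bin size, threshold, and fold-change cutoff are assumptions, and real callers add GC normalization and segmentation):

        import numpy as np

        rng = np.random.default_rng(2)
        coverage = rng.poisson(100, size=1000).astype(float)  # simulated 1 kb bins
        coverage[400:420] *= 2.0                              # embedded duplication
        log2_ratio = np.log2(coverage / np.median(coverage))
        calls = np.where(np.abs(log2_ratio) > 0.58)[0]        # |fold change| > ~1.5
        print(calls)                                          # bins 400..419 stand out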

  2. Advances in Computational Stability Analysis of Composite Aerospace Structures

    SciTech Connect

    Degenhardt, R.; Araujo, F. C. de

    2010-09-30

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances in the computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, main results of the finished EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  3. Advances in Electromagnetic Modelling through High Performance Computing

    SciTech Connect

    Ko, K.; Folwell, N.; Ge, L.; Guetz, A.; Lee, L.; Li, Z.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Xiao, L.; /SLAC

    2006-03-29

    Under the DOE SciDAC project on Accelerator Science and Technology, a suite of electromagnetic codes has been under development at SLAC that are based on unstructured grids for higher accuracy, and use parallel processing to enable large-scale simulation. The new modeling capability is supported by SciDAC collaborations on meshing, solvers, refinement, optimization and visualization. These advances in computational science are described and the application of the parallel eigensolver Omega3P to the cavity design for the International Linear Collider is discussed.

  4. Computer modeling for advanced life support system analysis.

    PubMed

    Drysdale, A

    1997-01-01

    This article discusses the equivalent mass approach to advanced life support system analysis, describes a computer model developed to use this approach, and presents early results from modeling the NASA JSC BioPlex. The model is built using an object-oriented approach and G2, a commercially available modeling package. Cost factor equivalencies are given for the Volosin scenarios. Plant data from NASA KSC and Utah State University (USU) are used, together with configuration data from the BioPlex design effort. Initial results focus on the importance of obtaining high plant productivity with a flight-like configuration. PMID:11540448
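
    The equivalent-mass bookkeeping can be sketched as a single weighted sum (the equivalency factors and subsystem numbers below are placeholders, not the article's Volosin values):

        # Equivalent mass folds a subsystem's volume, power, cooling, and crew
        # time into kilograms so disparate life-support options compare on
        # one scale.
        factors = {"volume": 66.7,     # kg per m^3      (assumed)
                   "power": 237.0,     # kg per kW       (assumed)
                   "cooling": 60.0,    # kg per kW       (assumed)
                   "crewtime": 1.0}    # kg per crew-hr  (assumed)

        def equivalent_mass(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_hr):
            return (mass_kg
                    + volume_m3 * factors["volume"]
                    + power_kw * factors["power"]
                    + cooling_kw * factors["cooling"]
                    + crewtime_hr * factors["crewtime"])

        # Hypothetical physico-chemical processor vs. plant-growth alternative.
        print(equivalent_mass(500, 2.0, 1.5, 1.5, 100))
        print(equivalent_mass(300, 20.0, 10.0, 10.0, 400))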

  5. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  6. Toward a scalable quantum computing architecture with mixed species ion chains

    NASA Astrophysics Data System (ADS)

    Wright, John; Auchter, Carolyn; Chou, Chen-Kuan; Graham, Richard D.; Noel, Thomas W.; Sakrejda, Tomasz; Zhou, Zichao; Blinov, Boris B.

    2016-01-01

    We report on progress toward implementing mixed ion species quantum information processing for a scalable ion-trap architecture. Mixed species chains may help solve several problems with scaling ion-trap quantum computation to large numbers of qubits. Initial temperature measurements of linear Coulomb crystals containing barium and ytterbium ions indicate that the mass difference does not significantly impede cooling at low ion numbers. Average motional occupation numbers are estimated to be n̄ ≈ 130 quanta per mode for chains with small numbers of ions, which is within a factor of three of the Doppler limit for barium ions in our trap. We also discuss generation of ion-photon entanglement with barium ions with a fidelity of F ≥ 0.84, which is an initial step towards remote ion-ion coupling in a more scalable quantum information architecture. Further, we are working to implement these techniques in surface traps in order to exercise greater control over ion chain ordering and positioning.

  7. Computational ocean acoustics: Advances in 3D ocean acoustic modeling

    NASA Astrophysics Data System (ADS)

    Schmidt, Henrik; Jensen, Finn B.

    2012-11-01

    The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and the field of computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the Finite Element Method, in particular for target scattering problems, where they are being combined with the traditional wave-theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA].

  8. A development architecture for serious games using BCI (brain computer interface) sensors.

    PubMed

    Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun

    2012-01-01

    Games that use brainwaves via brain-computer interface (BCI) devices, to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories. PMID:23202227

  9. Coupling Multi-Component Models with MPH on Distributed Memory Computer Architectures

    SciTech Connect

    He, Yun; Ding, Chris

    2005-03-24

    A growing trend in developing large and complex applications on today's Teraflop scale computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the Community Climate System Model which consists of atmosphere, ocean, land-surface and sea-ice components. Each component is semi-independent and has been developed at a different institution. We study how this multi-component, multi-executable application can run effectively on distributed memory architectures. For the first time, we clearly identify five effective execution modes and develop the MPH library to support application development utilizing these modes. MPH performs component-name registration, resource allocation and initial component handshaking in a flexible way.
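
    The component-registration step MPH automates can be sketched with a communicator split (mpi4py syntax; the component-to-rank assignment below is illustrative, not MPH's registration protocol):

        from mpi4py import MPI

        # Run under mpirun: each rank claims a component name and
        # MPI_Comm_split carves per-component communicators out of
        # MPI_COMM_WORLD, so each model sees only its own ranks.
        world = MPI.COMM_WORLD
        components = ["atmosphere", "ocean", "land", "sea-ice"]
        my_component = components[world.Get_rank() % len(components)]

        color = components.index(my_component)
        local = world.Split(color=color, key=world.Get_rank())
        print(f"rank {world.Get_rank()}: {my_component}, "
              f"local rank {local.Get_rank()} of {local.Get_size()}")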

  10. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOEpatents

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
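
    In software, the load-and-splat pattern corresponds to accumulating rank-1 partial products column by column; a numpy sketch (illustrative of the data movement only, not the patented hardware mechanism):

        import numpy as np

        def matmul_splat(A, B):
            """Load a column of A as a vector, splat one scalar of B across a
            vector, and accumulate partial products with multiply-adds."""
            m, k = A.shape
            k2, n = B.shape
            assert k == k2
            C = np.zeros((m, n))
            for j in range(n):
                for p in range(k):
                    splat = np.full(m, B[p, j])   # replicate one element (the "splat")
                    C[:, j] += A[:, p] * splat    # vector multiply-add partial product
            return C

        A = np.arange(6.0).reshape(2, 3)
        B = np.arange(12.0).reshape(3, 4)
        assert np.allclose(matmul_splat(A, B), A @ B)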

  11. A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors

    PubMed Central

    Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun

    2012-01-01

    Games that use brainwaves acquired via brain–computer interface (BCI) devices to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories. PMID:23202227

  12. JPL control/structure interaction test bed real-time control computer architecture

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    1989-01-01

    The Control/Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include the development of new design concepts, such as active structures, and new tools, such as a combined structure and control optimization algorithm, and their verification in ground and possibly flight tests. A focus mission spacecraft was designed based upon a space interferometer and is the basis for the design of the ground test article. The ground test bed objectives include verification of the spacecraft design concepts, the active structure elements, and certain design tools such as the new combined structures and controls optimization tool. In anticipation of CSI technology flight experiments, the test bed control electronics must emulate the computation capacity and control architectures of space-qualifiable systems as well as the command and control networks that will be used to connect investigators with the flight experiment hardware. The Test Bed facility electronics were functionally partitioned into three units: a laboratory data acquisition system for structural parameter identification and performance verification; an experiment supervisory computer to oversee the experiment, monitor the environmental parameters, and perform data logging; and a multilevel real-time control computing system. The design of the Test Bed electronics is presented along with hardware and software component descriptions. The system should break new ground in experimental control electronics and is of interest to anyone working in the verification of control concepts for large structures.

  13. Architecture and Functionality of the Advanced Life Support On-Line Project Information System (OPIS)

    NASA Technical Reports Server (NTRS)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriguez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible by the ALS Community via a web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL(Trademark)) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including being an R&TD status information hub that can potentially serve as the primary annual reporting mechanism. Using OPIS, ALS managers and element leads will be able to carry out informed research and technology development investment decisions, and allow analysts to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers of programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, and Control). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  14. Architecture and Functionality of the Advanced Life Support On-Line Project Information System

    NASA Technical Reports Server (NTRS)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriguez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible by the ALS Community via a Web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including being a research and technology development (R&TD) status information hub that can potentially serve as the primary annual reporting mechanism for ALS-funded projects. Using OPIS, ALS managers and element leads will be able to carry out informed R&TD investment decisions, and allow analysts to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers of programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, Controls and Systems Analysis). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  15. (Advanced materials, robotics, and advanced computers for use in nuclear power plants)

    SciTech Connect

    White, J.D.

    1989-11-17

    The aim of the IAEA Technical Committee Workshop was to provide an opportunity to exchange information on the status of advances in technologies such as improved materials, robotics, and advanced computers already used or expected to be used in the design of nuclear power plants, and to review possible applications of advanced technologies in future reactor designs. Papers were given in these areas by Belgium, France, Mexico, Canada, Russia, India, and the United States. Notably absent from this meeting were Japan, Germany, Italy, Spain, the United Kingdom, and the Scandinavian countries -- all of whom are working in the areas of interest to this meeting. Most of the workshop discussion, however, was focused on advanced controls (including human-machine interface and software development and testing) and electronic descriptions of power plants. Verification and validation of design was also a topic of considerable discussion. The traveler was surprised at the progress made in 3-D electronic images of nuclear power plants and automatic updating of these images to reflect as-built conditions. Canadian plants and one Mexican plant have used photogrammetry to update electronic drawings automatically. The Canadians also have started attaching other electronic data bases to the electronic drawings. These data bases include parts information and maintenance work. The traveler observed that the Advanced Controls Program is better balanced and more forward looking than other nuclear controls R&D activities described. The French participants made this observation in the meeting and expressed interest in collaborative work in this area.

  16. A portable, GUI-based, object-oriented client-server architecture for computer-based patient record (CPR) systems.

    PubMed

    Schleyer, T K

    1995-01-01

    Software applications for computer-based patient records require substantial development investments. Portable, open software architectures are one way to delay or avoid software application obsolescence. The Clinical Management System at Temple University School of Dentistry uses a portable, GUI-based, object-oriented client-server architecture. Two main criteria determined this approach: preservation of investment in software development and a smooth migration path to a Computer-based Patient Record. The application is separated into three layers: graphical user interface, database interface, and application functionality. Implementation with generic cross-platform development tools ensures maximum portability. PMID:7662879

  17. Advances in computed tomography evaluation of skull base diseases.

    PubMed

    Prevedello, Luciano M

    2014-10-01

    Introduction: Computed tomography (CT) is a key component in the evaluation of skull base diseases. With its ability to clearly delineate the osseous anatomy, CT can provide not only important clues to diagnosis but also key information for surgical planning. Objectives: The purpose of this article is to describe some of the main CT imaging features that contribute to the diagnosis of skull base tumors, review recent knowledge related to bony manifestations of these conditions, and summarize recent technological advances in CT that contribute to image quality and improved diagnosis. Data Synthesis: Recent advances in CT technology allow fine-detailed evaluation of the bony anatomy using submillimetric sections. Dual-energy CT material decomposition capabilities allow clear separation between contrast material, bone, and soft tissues, with many clinical applications in the skull base. Dual-energy technology also has the ability to decrease image degradation from metallic hardware using techniques that can result in similar or even decreased radiation to patients. Conclusions: CT is very useful in the evaluation of skull base diseases, and recent technological advances can increase disease conspicuity, resulting in improved diagnostic capabilities and enhanced surgical planning. PMID:25992136

  18. Introduction to systolic algorithms and architectures

    SciTech Connect

    Bentley, J.L.; Kung, H.T.

    1983-01-01

    The authors survey the class of systolic special-purpose computer architectures and algorithms, which are particularly well-suited for implementation in very large scale integrated circuitry (VLSI). They give a brief introduction to systolic arrays for a reader with a broad technical background and some experience in using a computer, but who is not necessarily a computer scientist. In addition they briefly survey the technological advances in VLSI that led to the development of systolic algorithms and architectures. 38 references.
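
    The systolic idea the authors introduce can be made concrete with a small simulation: a linear array of cells, each holding one vector element, with matrix rows streamed in skewed and partial sums pumped cell to cell once per cycle. This is a generic textbook design written for illustration, not code from the paper.

        /* Toy cycle-accurate simulation of a linear systolic array computing
         * y = A*x. Cell j holds x[j]; row elements arrive skewed so that
         * A[i][j] reaches cell j at cycle i + j; partial sums move rightward
         * one cell per cycle. Generic textbook design, not from the paper. */
        #include <stdio.h>
        #define N 4

        int main(void)
        {
            double A[N][N] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
            double x[N] = {1, 1, 2, 0}, y[N] = {0}, s[N] = {0};

            for (int t = 0; t < 2 * N - 1; ++t) {   /* clock cycles */
                for (int j = N - 1; j >= 0; --j) {  /* update right-to-left so */
                    int i = t - j;                  /* s[j-1] still holds the  */
                    if (i < 0 || i >= N) continue;  /* previous cycle's value  */
                    double in = (j == 0) ? 0.0 : s[j - 1];
                    s[j] = in + A[i][j] * x[j];     /* one MAC per cell per cycle */
                    if (j == N - 1) y[i] = s[j];    /* result exits the array */
                }
            }
            for (int i = 0; i < N; ++i)
                printf("y[%d] = %g\n", i, y[i]);
            return 0;
        }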

  19. ''Beauty of Wholeness and Beauty of Partiality.'' New Terms Defining the Concept of Beauty in Architecture in Terms of Sustainability and Computer Aided Design

    ERIC Educational Resources Information Center

    Farid, Ayman A.; Zaghloul, Weaam M.; Dewidar, Khaled M.

    2014-01-01

    The great shift in sustainability and computer aided design in the field of architecture caused a remarkable change in architectural philosophy; new aspects of beauty and aesthetic values are being introduced, and traditional definitions of beauty cannot fully cover these aspects, which causes a gap between criticism of new architectural works and…

  20. Recent Advances in Computational Mechanics of the Human Knee Joint

    PubMed Central

    Kazemi, M.; Dabiri, Y.; Li, L. P.

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling. PMID:23509602

  1. Creating science-driven computer architecture: A new path to scientific leadership

    SciTech Connect

    Simon, Horst D.; McCurdy, C. William; Kramer, T.C.; Stevens, Rick; McCoy,Mike; Seager, Mark; Zacharia, Thomas; Bair, Ray; Studham, Scott; Camp, William; Leland, Robert; Morrison, John; Feiereisen, William

    2003-05-16

    We believe that it is critical for the future of high end computing in the United States to bring into existence a new class of computational capability that is optimal for science. In recent years scientific computing has increasingly become dependent on hardware that is designed and optimized for commercial applications. Science in this country has greatly benefited from the improvements in computers that derive from advances in microprocessors following Moore's Law, and a strategy of relying on machines optimized primarily for business applications. However within the last several years, in part because of the challenge presented by the appearance of the Japanese Earth Simulator, the sense has been growing in the scientific community that a new strategy is needed. A more aggressive strategy than reliance only on market forces driven by business applications is necessary in order to achieve a better alignment between the needs of scientific computing and the platforms available. The United States should undertake a program that will result in scientific computing capability that durably returns the advantage to American science, because doing so is crucial to the country's future. Such a strategy must also be sustainable. New classes of computer designs will not only revolutionize the power of supercomputing for science, but will also affect scientific computing at all scales. What is called for is the opening of a new frontier of scientific capability that will ensure that American science is greatly enabled in its pursuit of research in critical areas such as nanoscience, climate prediction, combustion, modeling in the life sciences, and fusion energy, as well as in meeting essential needs for national security. In this white paper we propose a strategy for accomplishing this mission, pursuing different directions of hardware development and deployment, and establishing a highly capable networking and grid infrastructure connecting these platforms to the broad

  2. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    SciTech Connect

    Reed, Daniel; Berzins, Martin; Pennington, Robert; Sarkar, Vivek; Taylor, Valerie

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  3. Block sparse Cholesky algorithms on advanced uniprocessor computers

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
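
    For readers unfamiliar with the left-looking formulation the report evaluates, the unblocked column version is sketched below; the blocked variants studied in the report apply the same updates panel by panel for cache reuse. This is a generic illustration under our own conventions (dense row-major storage, lower triangle), not the report's code.

        /* Dense left-looking Cholesky sketch (lower triangle, column by
         * column): column j is first updated by all previous columns, then
         * scaled. The blocked variants in the report apply the same updates
         * panel-by-panel for cache reuse. Generic illustration only. */
        #include <math.h>

        /* A is n x n row-major, symmetric positive definite; on return the
           lower triangle holds L with A = L * L^T. Returns 0 on success. */
        int cholesky_left(int n, double *A)
        {
            for (int j = 0; j < n; ++j) {
                /* left-looking update: subtract contributions of columns k < j */
                for (int k = 0; k < j; ++k)
                    for (int i = j; i < n; ++i)
                        A[i * n + j] -= A[i * n + k] * A[j * n + k];

                if (A[j * n + j] <= 0.0) return -1; /* not positive definite */
                double d = sqrt(A[j * n + j]);
                A[j * n + j] = d;
                for (int i = j + 1; i < n; ++i)     /* scale the column */
                    A[i * n + j] /= d;
            }
            return 0;
        }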

  4. Nano-photonic organic solar cell architecture for advanced light management utilizing dual photonic crystals

    NASA Astrophysics Data System (ADS)

    Peer, Akshit; Biswas, Rana

    2015-09-01

    Organic solar cells have rapidly increasing efficiencies, but typically absorb less than half of the incident solar spectrum. To increase broadband light absorption, we rigorously design experimentally realizable solar cell architectures based on dual photonic crystals. Our optimized architecture consists of a polymer microlens at the air-glass interface, coupled with a photonic-plasmonic crystal at the metal cathode. The microlens focuses light on the periodic nanostructure that generates strong light diffraction. Waveguiding modes and surface plasmon modes together enhance long wavelength absorption in P3HT-PCBM. The architecture has a period of 500 nm, with absorption and photocurrent enhancement of 49% and 58%, respectively.

  5. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  6. Design And Implementation Of A Multi-Sensor Fusion Algorithm On A Hypercube Computer Architecture

    NASA Astrophysics Data System (ADS)

    Glover, Charles W.

    1990-03-01

    A multi-sensor integration (MSI) algorithm written for a sequential single-processor computer architecture has been transformed into a concurrent algorithm and implemented in parallel on a multi-processor hypercube computer architecture. This paper will present the philosophy and methodologies used in the decomposition of the sequential MSI algorithm and its transformation into a parallel MSI algorithm. The parallel MSI algorithm was implemented on a NCUBETM hypercube computer. The performance of the parallel MSI algorithm has been measured and compared against its sequential counterpart by running test case scenarios through a simulation program. The simulation program allows the user to define the trajectories of all players in the scenario, and to pick the sensor suites of the players and their operating characteristics. For example, an air-to-air engagement scenario was used as one of the test cases. In this scenario, two friend aircraft were being attacked by six foe aircraft in a pincer maneuver. Both the friend and foe aircraft launch missiles at several different time points in the engagement. The sensor suites on each aircraft are dual-mode RADAR, dual-mode IRST, and ESM sensors. The modes of the sensors are switched as needed throughout the scenario. The RADAR sensor is used only intermittently, thus most of the MSI information is obtained from passive sensing. The maneuvers in this scenario caused aircraft and missiles to constantly fly in and out of sensor fields-of-view (FOV). This required the MSI algorithm to constantly reacquire, initiate, and delete tracks as it tracked all objects in the scenario. The objective was to determine the performance of the parallel MSI algorithm in such a complex environment, and to determine how many multi-processors (nodes) of the hypercube could be effectively used by an aircraft in such an environment. For the scenario just discussed, a 4-node hypercube was found to be the optimal size, giving a factor of two in speedup.
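
    The hypercube topology underlying the decomposition is easy to sketch: node addresses are bit strings, and two nodes are linked exactly when their addresses differ in one bit, so any node can reach any other in at most log2(n) hops. The snippet below merely enumerates neighbors and is unrelated to the MSI code itself.

        /* Neighbor enumeration in a d-dimensional hypercube: node k is linked
         * to the d nodes whose addresses differ from k in exactly one bit.
         * Illustrates the topology only; unrelated to the MSI code itself. */
        #include <stdio.h>

        int main(void)
        {
            const int d = 2;            /* 2^2 = 4 nodes, as in the paper's run */
            const int nodes = 1 << d;
            for (int k = 0; k < nodes; ++k) {
                printf("node %d neighbors:", k);
                for (int bit = 0; bit < d; ++bit)
                    printf(" %d", k ^ (1 << bit)); /* flip one address bit */
                printf("\n");
            }
            return 0;
        }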

  7. Optimization of compute unified device architecture for real-time ultrahigh-resolution optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Aum, Jaehong; Han, Jae-Ho; Jeong, Jichai

    2015-01-01

    We propose an optimized signal processing scheme that utilizes the compute unified device architecture (CUDA) for real-time spectral domain optical coherence tomography (OCT). Because linear spline interpolation and the direct spectral reshaping method have low data and control dependencies, these algorithms maximally utilize graphic processing unit (GPU) resources for dispersion control. In addition, data transfer between main memory and GPU, regarded as one of the most wasteful and time-consuming processes in GPU computing, is executed in parallel with the signal processing by overlapping kernel execution and data transfers. Experimental results obtained from application of the proposed scheme to a laboratory constructed OCT system comprising five spectrally shifted SLDs indicate that the OCT system has an axial resolution of 4.8 μm and transverse resolution of 13 μm in air. Further, coherence artifacts are reduced by 3-14 dB over the side-lobes in the point spread function. The optimization of CUDA enables OCT imaging rates up to 350 kHz (A-lines/sec) with a single GTX680 GPU.
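
    The linear spline interpolation step, used to resample spectra before Fourier transformation, illustrates why the pipeline parallelizes so well: each output sample depends only on its two bracketing input samples. Below is a scalar C sketch of such a resampling step; the function name, signature, and ascending-grid assumption are ours, not the paper's.

        /* Linear-interpolation resampling of a spectrum onto a new grid.
         * Each output sample depends only on its two bracketing input
         * samples, which is why this step maps well onto one GPU thread per
         * output point (each thread finding its own bracket); this serial
         * sketch simply walks a running index instead. Names and the
         * ascending-grid assumption are ours, not the paper's. */
        void resample_linear(int n_in, const double *x_in, const double *y_in,
                             int n_out, const double *x_out, double *y_out)
        {
            int k = 0;
            for (int i = 0; i < n_out; ++i) {
                /* advance to the segment [x_in[k], x_in[k+1]] holding x_out[i] */
                while (k + 2 < n_in && x_in[k + 1] < x_out[i])
                    ++k;
                double t = (x_out[i] - x_in[k]) / (x_in[k + 1] - x_in[k]);
                y_out[i] = (1.0 - t) * y_in[k] + t * y_in[k + 1];
            }
        }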

  8. Compute unified device architecture (CUDA)-based parallelization of WRF Kessler cloud microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Wang, Jun; Allen Huang, H.-L.; Goldberg, Mitchell D.

    2013-03-01

    In recent years, graphics processing units (GPUs) have emerged as a low-cost, low-power, and very high performance alternative to conventional central processing units (CPUs). The latest GPUs offer a speedup of two to three orders of magnitude over CPUs for various science and engineering applications. The Weather Research and Forecasting (WRF) model is the latest-generation numerical weather prediction model. It has been designed to serve both operational forecasting and atmospheric research needs. It proves useful for a broad spectrum of applications for domain scales ranging from meters to hundreds of kilometers. WRF computes an approximate solution to the differential equations which govern the air motion of the whole atmosphere. The Kessler microphysics module in WRF is a simple warm cloud scheme that includes water vapor, cloud water and rain. The microphysics processes modeled are rain production, fall and evaporation. The accretion and auto-conversion of cloud water processes are also included along with the production of cloud water from condensation. In this paper, we develop an efficient WRF Kessler microphysics scheme which runs on graphics processing units (GPUs) using the NVIDIA Compute Unified Device Architecture (CUDA). The GPU-based implementation of the Kessler microphysics scheme achieves a significant speedup of 70× over its CPU-based single-threaded counterpart. When a 4-GPU system is used, we achieve an overall speedup of 132× as compared to the single-thread CPU version.

  9. Advances in Digital Calibration Techniques Enabling Real-Time Beamforming SweepSAR Architectures

    NASA Technical Reports Server (NTRS)

    Hoffman, James P.; Perkovic, Dragana; Ghaemi, Hirad; Horst, Stephen; Shaffer, Scott; Veilleux, Louise

    2013-01-01

    Real-time digital beamforming, combined with lightweight, large-aperture reflectors, enables SweepSAR architectures, which promise significant increases in instrument capability for solid earth and biomass remote sensing. These new instrument concepts require new methods for calibrating the multiple channels, which are combined on board in real time. The benefit of this effort is that it enables a new class of lightweight radar architecture, digital beamforming with SweepSAR, providing significantly larger swath coverage than conventional SAR architectures for reduced mass and cost. This paper will review the ongoing development of the digital calibration architecture for digital beamforming radar instruments, such as the proposed Earth Radar Mission's DESDynI (Deformation, Ecosystem Structure, and Dynamics of Ice) instrument. This proposed instrument's baseline design employs SweepSAR digital beamforming and requires digital calibration. We will review the overall concepts and status of the system architecture, algorithm development, and the digital calibration testbed currently being developed. We will present results from a preliminary hardware demonstration. We will also discuss the challenges and opportunities specific to this novel architecture.

  10. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    ERIC Educational Resources Information Center

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the framework should…

  11. Simulator Building as a Problem-Based Learning Approach for Teaching Students in a Computer Architecture Course

    ERIC Educational Resources Information Center

    Ang, Li-minn; Seng, Kah Phooi

    2008-01-01

    This paper presents a Problem-Based Learning (PBL) approach to support and promote deeper student learning in a computer architecture course. A PBL approach using a simulator building activity was used to teach part of the course. The remainder of the course was taught using a traditional lecture-tutorial approach. Qualitative data was collected…

  12. World Wide Web interface for advanced SPECT reconstruction algorithms implemented on a remote massively parallel computer.

    PubMed

    Formiconi, A R; Passeri, A; Guelfi, M R; Masoni, M; Pupi, A; Meldolesi, U; Malfetti, P; Calori, L; Guidazzoli, A

    1997-11-01

    Data from Single Photon Emission Computed Tomography (SPECT) studies are blurred by inevitable physical phenomena occurring during data acquisition. These errors may be compensated by means of reconstruction algorithms which take into account accurate physical models of the data acquisition procedure. Unfortunately, this approach involves high memory requirements as well as a high computational burden which cannot be afforded by the computer systems of SPECT acquisition devices. In this work the possibility of accessing High Performance Computing and Networking (HPCN) resources through a World Wide Web interface for the advanced reconstruction of SPECT data in a clinical environment was investigated. An iterative algorithm with an accurate model of the variable system response was ported on the Multiple Instruction Multiple Data (MIMD) parallel architecture of a Cray T3D massively parallel computer. The system was accessible even from low cost PC-based workstations through standard TCP/IP networking. A speedup factor of 148 was predicted by the benchmarks run on the Cray T3D. A complete brain study of 30 (64 x 64) slices was reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, which corresponds to an actual speed-up factor of 135. The technique was extended to a more accurate 3D modeling of the system response for a true 3D reconstruction of SPECT data; the reconstruction time of the same data set with this more accurate model was 5 min. This work demonstrates the possibility of exploiting remote HPCN resources from hospital sites by means of low cost workstations using standard communication protocols and a user-friendly WWW interface without particular problems for routine use. PMID:9506406
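
    The conjugate gradients algorithm at the core of the reconstruction can be sketched in a few lines for a small symmetric positive definite system; each iteration costs one matrix-vector product plus a handful of vector operations, which is what was distributed across the Cray T3D. The sketch below is a generic dense CG, not the authors' SPECT code.

        /* Minimal conjugate gradients for A x = b, A symmetric positive
         * definite (dense here for brevity). Each iteration costs one
         * matrix-vector product plus a few dot products and vector updates,
         * the operations parallelized on the Cray T3D. Generic CG, not the
         * authors' SPECT code. */
        #include <math.h>
        #include <string.h>

        #define N 3

        static void matvec(const double A[N][N], const double *v, double *out)
        {
            for (int i = 0; i < N; ++i) {
                out[i] = 0.0;
                for (int j = 0; j < N; ++j)
                    out[i] += A[i][j] * v[j];
            }
        }

        static double dot(const double *a, const double *b)
        {
            double s = 0.0;
            for (int i = 0; i < N; ++i) s += a[i] * b[i];
            return s;
        }

        void conjugate_gradient(const double A[N][N], const double *b, double *x)
        {
            double r[N], p[N], Ap[N];
            memset(x, 0, sizeof(double) * N);
            memcpy(r, b, sizeof(double) * N);  /* r = b - A*0 = b */
            memcpy(p, r, sizeof(double) * N);
            double rs = dot(r, r);

            for (int it = 0; it < N && sqrt(rs) > 1e-12; ++it) {
                matvec(A, p, Ap);
                double alpha = rs / dot(p, Ap);
                for (int i = 0; i < N; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
                double rs_new = dot(r, r);
                for (int i = 0; i < N; ++i) p[i] = r[i] + (rs_new / rs) * p[i];
                rs = rs_new;
            }
        }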

  13. SciDAC Advances and Applications in Computational Beam Dynamics

    SciTech Connect

    Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J.; Bohn, C.; Cary, J.; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.; McCorquodale, P.; Mihalcea, D.; Mitchell, C.; Mori, W.; Mottershead, C.T.; Neri, F.; Pogorelov, I.; Qiang, J.; Samulyak, R.; Serafini, D.; Shalf, J.; Siegerist, C.; Spentzouris, P.; Stoltz, P.; Terzic, B.; Venturini, M.; Walstrom, P.

    2005-06-26

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators--which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook--are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications.

  14. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  15. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
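
    The failure-probability notion can be made concrete with the classic capacity-minus-demand limit state: for independent normal capacity R and demand S, P(R - S < 0) can be estimated by sampling and checked against the closed form Phi(-beta), where beta = (muR - muS)/sqrt(sigR^2 + sigS^2). The numbers in the sketch below are invented for illustration.

        /* Toy structural-reliability computation for the limit state
         * g = R - S with independent normal capacity R and demand S.
         * Monte Carlo estimate of P(g < 0) checked against the closed form
         * Phi(-beta), beta = (muR-muS)/sqrt(sigR^2+sigS^2). Numbers invented. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        static double std_normal(void)       /* Box-Muller sampler */
        {
            const double two_pi = 6.283185307179586;
            double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            return sqrt(-2.0 * log(u1)) * cos(two_pi * u2);
        }

        int main(void)
        {
            const double muR = 10.0, sigR = 1.5;  /* capacity */
            const double muS = 6.0,  sigS = 2.0;  /* demand   */
            const long trials = 1000000;
            long failures = 0;

            for (long t = 0; t < trials; ++t) {
                double R = muR + sigR * std_normal();
                double S = muS + sigS * std_normal();
                if (R - S < 0.0) ++failures;
            }
            double beta = (muR - muS) / sqrt(sigR * sigR + sigS * sigS);
            double exact = 0.5 * erfc(beta / sqrt(2.0)); /* Phi(-beta) */
            printf("Monte Carlo Pf = %.5f, exact Pf = %.5f (beta = %.2f)\n",
                   (double)failures / trials, exact, beta);
            return 0;
        }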

  16. Reliability of an interactive computer program for advance care planning.

    PubMed

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20]=0.83-0.95 and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  17. Reliability of an Interactive Computer Program for Advance Care Planning

    PubMed Central

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20]=0.83–0.95 and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  18. TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science

    NASA Astrophysics Data System (ADS)

    Wilson, C. R.; Spiegelman, M.; van Keken, P.

    2012-12-01

    Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers and preconditioners. To make progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high-level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high level language for describing the weak forms of coupled systems of equations, and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application neutral options system that provides both human and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients and solver options. Custom compiled applications are

  19. Advances in computer technology: impact on the practice of medicine.

    PubMed

    Groth-Vasselli, B; Singh, K; Farnsworth, P N

    1995-01-01

    Advances in computer technology provide a wide range of applications which are revolutionizing the practice of medicine. The development of new software for the office creates a web of communication among physicians, staff members, health care facilities and associated agencies. This provides the physician with the prospect of a paperless office. At the other end of the spectrum, the development of 3D work stations and software based on computational chemistry permits visualization of protein molecules involved in disease. Computer assisted molecular modeling has been used to construct working 3D models of lens alpha-crystallin. The 3D structure of alpha-crystallin is basic to our understanding of the molecular mechanisms involved in lens fiber cell maturation, stabilization of the inner nuclear region, the maintenance of lens transparency and cataractogenesis. The major component of the high molecular weight aggregates that occur during cataractogenesis is alpha-crystallin subunits. Subunits of alpha-crystallin occur in other tissues of the body. In the central nervous system accumulation of these subunits in the form of dense inclusion bodies occurs in pathological conditions such as Alzheimer's disease, Huntington's disease, multiple sclerosis and toxoplasmosis (Iwaki, Wisniewski et al., 1992), as well as neoplasms of astrocyte origin (Iwaki, Iwaki, et al., 1991). Also cardiac ischemia is associated with an increased alpha B synthesis (Chiesi, Longoni et al., 1990). On a more global level, the molecular structure of alpha-crystallin may provide information pertaining to the function of small heat shock proteins, hsp, in maintaining cell stability under the stress of disease. PMID:8721907

  20. Activities and operations of Argonne's Advanced Computing Research Facility: February 1990 through April 1991

    SciTech Connect

    Pieper, G.W.

    1991-05-01

    This report reviews the activities and operations of the Advanced Computing Research Facility (ACRF) from February 1990 through April 1991. The ACRF is operated by the Mathematics and Computer Science Division at Argonne National Laboratory. The facility's principal objective is to foster research in parallel computing. Toward this objective, the ACRF operates experimental advanced computers, supports investigations in parallel computing, and sponsors technology transfer efforts to industry and academia. 5 refs., 1 fig.

  1. A neuron-inspired computational architecture for spatiotemporal visual processing: real-time visual sensory integration for humanoid robots.

    PubMed

    Holzbach, Andreas; Cheng, Gordon

    2014-06-01

    In this article, we present a neurologically motivated computational architecture for visual information processing. The computational architecture's focus lies in multiple strategies: hierarchical processing, parallel and concurrent processing, and modularity. The architecture is modular and expandable in both hardware and software, so that it can also cope with multisensory integrations - making it an ideal tool for validating and applying computational neuroscience models in real time under real-world conditions. We apply our architecture in real time to validate a long-standing biologically inspired visual object recognition model, HMAX. In this context, the overall aim is to supply a humanoid robot with the ability to perceive and understand its environment with a focus on the active aspect of real-time spatiotemporal visual processing. We show that our approach is capable of simulating information processing in the visual cortex in real time and that our entropy-adaptive modification of HMAX has a higher efficiency and classification performance than the standard model (up to ∼+6%). PMID:24687170

  2. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    PubMed

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal. PMID:25794381
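
    The core of RC, a fixed random recurrent network in which only the linear readout is trained, fits in a few lines. The sketch below uses a small tanh reservoir and a least-mean-squares update for the readout on an invented toy task; practical ESNs rescale the recurrent weights to a spectral radius below one, which the small random weights here only approximate.

        /* Minimal reservoir computing (echo state) sketch: a fixed random
         * recurrent tanh network; only the linear readout w is trained, here
         * with a least-mean-squares (LMS) update on a toy task y = 0.5*u^2.
         * Practical ESNs rescale W to spectral radius < 1; small random
         * weights are used here as a crude substitute. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NRES 50

        int main(void)
        {
            double W[NRES][NRES], Win[NRES], x[NRES] = {0}, w[NRES] = {0};
            srand(42);
            for (int i = 0; i < NRES; ++i) {
                Win[i] = 2.0 * rand() / RAND_MAX - 1.0;
                for (int j = 0; j < NRES; ++j)
                    W[i][j] = 0.1 * (2.0 * rand() / RAND_MAX - 1.0); /* fixed */
            }

            const double eta = 0.01;                 /* LMS learning rate */
            double mse = 0.0;
            for (int t = 0; t < 5000; ++t) {
                double u = sin(0.2 * t), y = 0.5 * u * u;

                double xn[NRES];                     /* reservoir update */
                for (int i = 0; i < NRES; ++i) {
                    double a = Win[i] * u;
                    for (int j = 0; j < NRES; ++j) a += W[i][j] * x[j];
                    xn[i] = tanh(a);
                }
                for (int i = 0; i < NRES; ++i) x[i] = xn[i];

                double yhat = 0.0;                   /* linear readout */
                for (int i = 0; i < NRES; ++i) yhat += w[i] * x[i];

                double e = y - yhat;                 /* train readout only */
                for (int i = 0; i < NRES; ++i) w[i] += eta * e * x[i];
                if (t >= 4000) mse += e * e / 1000.0; /* error, last 1000 steps */
            }
            printf("readout MSE after training: %g\n", mse);
            return 0;
        }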

  3. Fault Tolerant Architecture For A Fly-By-Light Flight Control Computer

    NASA Astrophysics Data System (ADS)

    Thompson, Kevin; Stipanovich, John; Smith, Brian; Reddy, Mahesh C.

    1990-02-01

    The next generation of flight control computers will utilize fiber optic technology to produce a fly-by-light flight control system. Optical transducers and optical fibers will take the place of electrical position transducers and wires, torsion bars, bell cranks, and cables. Applications for this fly-by-light technology include space launch vehicles, upper stages, spacecraft, and commercial/military aircraft. Optical fibers are lighter than mechanical transmission media and, unlike conventional wire transmissions, are not susceptible to electromagnetic interference (EMI) and high energy emission sources. This paper will give an overview of a fault tolerant In-Line Monitored optical flight control system being developed at Boeing Aerospace & Electronics in Seattle, Washington. This system uses passive transducers with fiber optic interconnections, which promise to virtually eliminate EMI threats to flight control system performance and flight safety and to provide significant weight savings. The main emphasis of this paper will be the In-Line Monitored architecture of the optical transducer system required for use in a fault tolerant flight control system.

  4. Optimal neural network architecture selection: effects on computer-aided detection of mammographic microcalcifications

    NASA Astrophysics Data System (ADS)

    Gurcan, Metin N.; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Petrick, Nicholas; Helvie, Mark A.

    2002-05-01

    We evaluated the effectiveness of an optimal convolutional neural network (CNN) architecture selected by simulated annealing for improving the performance of a computer-aided diagnosis (CAD) system designed for the detection of microcalcification clusters on digitized mammograms. The performances of the CAD programs with manually and optimally selected CNNs were compared using an independent test set. This set included 472 mammograms and contained 253 biopsy-proven malignant clusters. Free-response receiver operating characteristic (FROC) analysis was used for evaluation of the detection accuracy. At a false positive (FP) rate of 0.7 per image, the film-based sensitivity was 84.6% with the optimized CNN, in comparison with 77.2% with the manually selected CNN. If clusters having images in both craniocaudal and mediolateral oblique views were analyzed together and a cluster was considered to be detected when it was detected in one or both views, at 0.7 FPs/image, the sensitivity was 93.3% with the optimized CNN and 87.0% with the manually selected CNN. This study indicates that classification of true positive and FP signals is an important step of the CAD program and that the detection accuracy of the program can be considerably improved by optimizing this step with an automated optimization algorithm.
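
    The architecture search itself can be sketched generically: the state is a vector of discrete design choices, a neighbor move perturbs one choice, and a cooling schedule governs acceptance of worse states. In the sketch below a quadratic toy cost stands in for whatever detection-performance figure of merit the study optimized; all names and numbers are invented.

        /* Generic simulated annealing over discrete architecture parameters
         * (e.g., filters per layer, kernel sizes). The quadratic toy cost
         * stands in for the detection figure of merit actually optimized;
         * all names and numbers are invented stand-ins. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NPAR 3

        static double cost(const int p[NPAR])  /* toy surrogate objective */
        {
            const int best[NPAR] = {12, 8, 5};
            double c = 0.0;
            for (int i = 0; i < NPAR; ++i)
                c += (p[i] - best[i]) * (p[i] - best[i]);
            return c;
        }

        int main(void)
        {
            srand(7);
            int p[NPAR] = {4, 4, 4}, q[NPAR];
            double T = 10.0, c = cost(p);

            while (T > 1e-3) {
                for (int i = 0; i < NPAR; ++i) q[i] = p[i];
                int k = rand() % NPAR;         /* perturb one parameter */
                q[k] += (rand() % 2) ? 1 : -1;
                if (q[k] < 1) q[k] = 1;

                double cq = cost(q), dE = cq - c;
                if (dE <= 0.0 || exp(-dE / T) > (double)rand() / RAND_MAX) {
                    for (int i = 0; i < NPAR; ++i) p[i] = q[i]; /* accept */
                    c = cq;
                }
                T *= 0.999;                    /* geometric cooling */
            }
            printf("selected parameters: %d %d %d (cost %g)\n", p[0], p[1], p[2], c);
            return 0;
        }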

  5. X-Ray Computed Tomography Reveals the Response of Root System Architecture to Soil Texture.

    PubMed

    Rogers, Eric D; Monaenkova, Daria; Mijar, Medhavinee; Nori, Apoorva; Goldman, Daniel I; Benfey, Philip N

    2016-07-01

    Root system architecture (RSA) impacts plant fitness and crop yield by facilitating efficient nutrient and water uptake from the soil. A better understanding of the effects of soil on RSA could improve crop productivity by matching roots to their soil environment. We used x-ray computed tomography to perform a detailed three-dimensional quantification of changes in rice (Oryza sativa) RSA in response to the physical properties of a granular substrate. We characterized the RSA of eight rice cultivars in five different growth substrates and determined that RSA is the result of interactions between genotype and growth environment. We identified cultivar-specific changes in RSA in response to changing growth substrate texture. The cultivar Azucena exhibited low RSA plasticity in all growth substrates, whereas cultivar Bala root depth was a function of soil hardness. Our imaging techniques provide a framework to study RSA in different growth environments, the results of which can be used to improve root traits with agronomic potential. PMID:27208237

  6. Open Computer Forensic Architecture a Way to Process Terabytes of Forensic Disk Images

    NASA Astrophysics Data System (ADS)

    Vermaas, Oscar; Simons, Joep; Meijer, Rob

    This chapter describes the Open Computer Forensics Architecture (OCFA), an automated system that dissects complex file types, extracts metadata from files and ultimately creates indexes on forensic images of seized computers. It consists of a set of collaborating processes, called modules. Each module is specialized in processing a certain file type. When it receives a so-called 'evidence' (the information that has been extracted so far about the file together with the actual data), it either adds new information about the file or uses the file to derive a new 'evidence'. All evidence, original and derived, is sent to a router after being processed by a particular module. The router decides which module should process the evidence next, based upon the metadata associated with the evidence. Thus the OCFA system can recursively process images until, from every compound file, the embedded files, if any, are extracted, all information that the system can derive has been derived, and all extracted text is indexed. Compound files include, but are not limited to, archive- and zip-files, disk images, text documents of various formats and, for example, mailboxes. The output of an OCFA run is a repository full of derived files, a database containing all extracted information about the files, and an index which can be used when searching. This is presented in a web interface. Moreover, processed data is easily fed to third-party software for further analysis or to be used in data mining or text mining tools. The main advantages of the OCFA system include scalability: it is able to process large amounts of data.

  8. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

    PubMed

    Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M

    2016-05-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks. PMID:26886976

  9. High-speed, automatic controller design considerations for integrating array processor, multi-microprocessor, and host computer system architectures

    NASA Technical Reports Server (NTRS)

    Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.

    1985-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful for integrating array processor, multi-microprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can be applied to a wide range of automatic control applications.

  10. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    SciTech Connect

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-05-21

    The smoothed particle hydrodynamics (SPH) method, a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation by handling huge amounts of computation in parallel on graphics hardware.
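The core SPH summation such an implementation parallelises has the form rho_i = sum_j m_j W(|x_i - x_j|, h). A naive NumPy reference of this density step is sketched below; the kernel choice and the O(N^2) all-pairs loop are illustrative assumptions, and a CUDA implementation would evaluate the per-particle sums in parallel, typically with a spatial grid for neighbour search.

```python
# Naive SPH density reference in NumPy (illustrative, O(N^2)).
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel, a common SPH choice (assumed here)."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[mask]**2) ** 3
    return w

def densities(pos, mass, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h), the core SPH summation."""
    diff = pos[:, None, :] - pos[None, :, :]   # pairwise offsets
    r = np.linalg.norm(diff, axis=-1)          # pairwise distances
    return (mass[None, :] * poly6(r, h)).sum(axis=1)

pos = np.random.rand(256, 3).astype(np.float32)   # toy particle cloud
mass = np.full(256, 0.02, dtype=np.float32)
rho = densities(pos, mass, h=0.1)
print(rho.mean())
```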

  11. HONEI: A collection of libraries for numerical computations targeting multiple processor architectures

    NASA Astrophysics Data System (ADS)

    van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten

    2009-12-01

    We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summary: Program title: HONEI; Catalogue identifier: AEDW_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GPLv2; No. of lines in distributed program, including test data, etc.: 216 180; No. of bytes in distributed program, including test data, etc.: 1 270 140; Distribution format: tar.gz; Programming language: C++; Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3; Operating system: Linux; RAM: at least 500 MB free; Classification: 4.8, 4.3, 6.1; External routines: SSE: none; [1] for GPU, [2] for Cell backend. Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the
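A toy sketch of the backend-abstraction idea follows. The API is hypothetical and in Python for brevity; HONEI itself realises this in C++, where operations are templated on a backend tag and dispatched to SSE, CUDA or Cell implementations.

```python
# Hypothetical sketch of backend-tagged operation dispatch (not HONEI's API).
import numpy as np

class CPUBackend:
    @staticmethod
    def scaled_sum(a, b, alpha):
        return a + alpha * b  # portable reference path

class FakeGPUBackend:
    @staticmethod
    def scaled_sum(a, b, alpha):
        # A real GPU backend would launch a CUDA kernel here.
        return a + alpha * b

def scaled_sum(a, b, alpha, backend=CPUBackend):
    """Same numerical operation, routed to whichever backend is selected."""
    return backend.scaled_sum(a, b, alpha)

a, b = np.ones(4), np.arange(4.0)
print(scaled_sum(a, b, 2.0))                          # CPU path
print(scaled_sum(a, b, 2.0, backend=FakeGPUBackend))  # "accelerated" path
```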

  12. The Jupyter/IPython architecture: a unified view of computational research, from interactive exploration to communication and publication.

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonier, M.

    2014-12-01

    IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language-agnostic parts of IPython have been renamed to Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components such as the web-based Notebook interface, which supports rich documents that combine code and computational results with text narratives, mathematics, images, video and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, live notebook collaboration via Google Docs, and better support for languages other than Python.
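As a small usage example of the conversion capability mentioned above, notebooks can be exported programmatically with nbconvert, which ships with Jupyter (the filename here is a placeholder):

```python
# Convert a notebook to HTML with nbconvert (the same conversion is
# available on the command line as: jupyter nbconvert --to html analysis.ipynb).
from nbconvert import HTMLExporter

exporter = HTMLExporter()
body, resources = exporter.from_filename("analysis.ipynb")  # placeholder file
with open("analysis.html", "w", encoding="utf-8") as f:
    f.write(body)
```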

  13. Activities and operations of the Advanced Computing Research Facility, October 1986-October 1987

    SciTech Connect

    Pieper, G.W.

    1987-01-01

    This paper contains a description of the work being carried out at the Advanced Computing Research Facility at Argonne National Laboratory. Topics covered are upgrading of computers, networking changes, algorithms, parallel programming, programming languages, and user training. (LSP)

  14. Advanced electric field computation for RF sheaths prediction with TOPICA

    NASA Astrophysics Data System (ADS)

    Milanesio, Daniele; Maggiora, Riccardo

    2012-10-01

    The design of an Ion Cyclotron (IC) launcher is driven not only by its coupling properties, but also by its capability to maintain low parallel electric fields in front of it, in order to provide good power transfer to the plasma and to reduce impurity production. However, since antenna performance cannot be verified before operations start, advanced numerical simulation tools are the only way to carry out a proper antenna design. With this in mind, the adoption of a code such as TOPICA [1], able to precisely take into account a realistic antenna geometry and an accurate plasma description, is extremely important to achieve these goals. Because of recently introduced features that allow computing the electric field distribution everywhere inside the antenna enclosure and in the plasma column, the TOPICA code appears to be the only way to understand which elements may have a non-negligible impact on the antenna design, and hence to suggest further optimizations to mitigate RF potentials. The present work documents the evaluation of the electric field map for actual antennas, such as the Tore Supra Q5 and the JET A2 launchers, and the foreseen ITER IC antenna. [1] D. Milanesio et al., Nucl. Fusion 49, 115019 (2009).

  15. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the use of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types with specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented both as average HU values and as compositions with respect to total muscle volume, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562
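As a toy illustration of the HU-based composition analysis described above, the sketch below bins the voxels of a CT crop into tissue classes. The threshold windows and class names are illustrative assumptions, not the calibrated values used in the reviewed studies.

```python
# HU-threshold tissue composition sketch (hypothetical HU windows).
import numpy as np

HU_RANGES = {                      # assumed Hounsfield Unit windows
    "fat": (-200, -10),
    "loose_connective_or_atrophic": (-9, 40),
    "normal_muscle": (41, 100),
}

def composition(volume_hu: np.ndarray) -> dict:
    """Fraction of voxels per tissue class inside a muscle region."""
    total = volume_hu.size
    return {name: float(((volume_hu >= lo) & (volume_hu <= hi)).sum()) / total
            for name, (lo, hi) in HU_RANGES.items()}

volume = np.random.randint(-250, 150, size=(64, 64, 64))  # toy CT crop
print(composition(volume))
```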

  16. Activities and operations of the Advanced Computing Research Facility, January 1989--January 1990

    SciTech Connect

    Pieper, G.W.

    1990-02-01

    This report reviews the activities and operations of the Advanced Computing Research Facility (ACRF) for the period January 1, 1989, through January 31, 1990. The ACRF is operated by the Mathematics and Computer Science Division at Argonne National Laboratory. The facility's principal objective is to foster research in parallel computing. Toward this objective, the ACRF continues to operate experimental advanced computers and to sponsor new technology transfer efforts and new research projects. 4 refs., 8 figs.

  17. Analysis and coding technique based on computational intelligence methods and image-understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-05-01

    Human vision involves higher-level knowledge and top-down processes for resolving ambiguity and uncertainty in real images. Even very advanced low-level image processing cannot provide any advantage without a highly effective knowledge-representation and reasoning system, which is the core of the image-understanding problem. Methods of image analysis and coding are directly based on the methods of knowledge representation and processing. The article proposes such models and mechanisms in the form of a Spatial Turing Machine that, in place of symbols and tapes, works with hierarchical networks represented dually as discrete and continuous structures. Such networks are able to perform both graph and diagrammatic operations, which are the basis of intelligence. Computational intelligence methods transform continuous image information into discrete structures, making it available for analysis. The article shows that symbols naturally emerge in such networks, giving the opportunity to use symbolic operations. This framework naturally combines methods of machine learning, classification and analogy with induction, deduction and other methods of higher-level reasoning. Based on these principles, the image-understanding system provides more flexible ways of handling ambiguity and uncertainty in real images and does not require supercomputers. This opens the way to new technologies in computer vision and image databases.

  18. A rapid-acquisition architecture for advanced avionics and spread-spectrum applications

    NASA Astrophysics Data System (ADS)

    Jordan, Michael T.; Luecke, James R.

    A rapid-acquisition architecture for frequency hopping (FH), direct sequence (DS), and various forms of hybrid spread-spectrum waveforms is presented. This concept offers extraordinary improvements in flexibility and adaptability, as well as significant reductions in size, weight, and power relative to conventional systems. The general concept is based on matching application-specific IC variable-length digital matched filter technology to programmable digital frequency synthesizers for an adaptable FH/DS spread-spectrum system. Within a given time interval, a specific set of outputs from each code-matched filter (CMF) appears, representing the correlation of the received signal against a certain code offset. This is the key function for fast and reliable acquisition of the spread-spectrum signals. The basic configuration consists of the fast-acquisition subsystem, which receives the signals and downconverts, or dehops, them using a frequency-hopping local oscillator driven by the known pseudonoise frequency-hop pattern. This architecture configuration offers ease of technology insertion as new developments in technology emerge. The modular philosophy allows for future expansion of the initial architecture in a cost-effective manner.
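The correlation step performed by the code-matched filters can be illustrated with a short NumPy sketch: correlate a received chip stream against a known PN code and read the code offset off the correlation peak. The code length, offset and noise level are arbitrary choices for illustration.

```python
# Code-matched-filter acquisition sketch: find the PN code offset by
# correlating the received stream against the known code (illustrative).
import numpy as np

rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=64)        # known pseudonoise code
offset = 37                                   # unknown to the receiver
rx = np.roll(np.tile(pn, 4), offset)          # received periodic stream, shifted
rx += 0.5 * rng.standard_normal(rx.size)      # channel noise

# Correlate against every offset; the peak marks code alignment.
corr = np.array([np.dot(rx[k:k + pn.size], pn)
                 for k in range(rx.size - pn.size)])
print("detected offset:", int(np.argmax(corr)) % pn.size)  # -> 37
```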

  19. Computer-based Studies of Diffusion through Stomata of Different Architecture

    PubMed Central

    Roth-Nebelsick, Anita

    2007-01-01

    Background and Aims The influence of stomatal architecture on stomatal conductance and on the developing concentration gradient was explored quantitatively by comparing diffusion rates of water vapour and CO2 occurring in a set of three-dimensional stoma models. The influence on diffusion of an internal cuticle, a sunken stoma, a partially closed stoma and of substomatal chambers of two different sizes was considered. Methods The study was performed using a commercial computer program based on the Finite Element Method which allows for the simulation of diffusion in three dimensions. With this method, diffusion was generated by prescribed gas concentrations at the boundaries of the substomatal chamber and outside of the leaf. The program calculates the distribution of gas concentrations over the entire model space. Key Results Locating the stomatal pore at the bottom of a stomatal antechamber with a depth of 20 µm decreased the conductance significantly (by roughly 30 %). The humidity directly above the stomatal pore is significantly higher with the stomatal antechamber present. Lining the walls of the substomatal chamber with an internal cuticle which suppresses evaporation had an even stronger effect, reducing the conductance to 60 % of the original value. The study therefore corroborates the results of earlier studies that water evaporates preferentially at sites in the immediate vicinity of the stomatal pore if no internal cuticle is present. The conductance decrease affects only water vapour and not CO2. Enlarging the substomatal chamber increases CO2 uptake, whereas transpiration increases if an internal cuticle is present. Conclusions Variation of stomatal structure may, with unchanged pore size and depth, profoundly affect gas exchange and the pathways of liquid water inside the leaf. Equations for calculation of stomatal conductance which are solely based on stomatal density and pore depth and size can significantly overestimate

  20. Contrast-enhanced nanofocus computed tomography images the cartilage subtissue architecture in three dimensions.

    PubMed

    Kerckhofs, G; Sainz, J; Wevers, M; Van de Putte, T; Schrooten, J

    2013-01-01

    We describe a non-destructive imaging method, named contrast-enhanced nanofocus X-ray computed tomography (CE-nanoCT), that permits simultaneously imaging and quantifying in 3D the (sub)tissue architecture and (biochemical) composition of cartilage and bone in small animal models at a novel contrast and spatial resolution. To demonstrate the potential of this novel methodology, a newborn mouse was scanned using CE-nanoCT. This allowed simultaneously visualising the bone and cartilage structure much like the traditional alcian blue-alizarin red skeletal stain. Additionally, it enabled a 3D visualisation at such a high spatial image resolution that internal, micro-scale structures could be digitally dissected and evaluated for size, structure and composition. Ex vivo treatment with papain, that is known to specifically remove the non-calcified cartilage layer but keep the calcified cartilage intact, proved CE-nanoCT to be applicable to visualise the subdivisions within the hyaline cartilage of the articular joint of mice. The quantitative power of CE-nanoCT in vivo was evaluated using a mouse model for osteoarthritis (OA), where OA-like cartilage lesions are induced by meniscus destabilisation surgery. The thickness of both the non-calcified and calcified cartilage layer in the knee joint of such mice was visualised and quantified in 3D and compared to unaffected mice. Finally, to show that different forms of cartilage and tissue combinations can be distinguished using CE-nanoCT, different cartilaginous body parts of the mouse were imaged. In conclusion, CE-nanoCT can provide novel insights in preclinical research by quantifying in a non-destructive 3D manner pathological differences, in particular in developing mice, newborns or adults. PMID:23389752

  1. Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching

    SciTech Connect

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    2012-12-28

    DNA analysis is an emerging application of high-performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns quickly and effectively may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple-pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, the number of patterns to search for and the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the new Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code into an MPI-based and a pthreads-based load balancer, respectively, to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
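For reference, a minimal CPU implementation of the Aho-Corasick automaton is sketched below (trie construction, failure links via BFS, and streaming search). This is the textbook algorithm that such work maps onto GPUs, not the paper's optimized code.

```python
# Minimal Aho-Corasick automaton (CPU reference sketch).
from collections import deque

def build(patterns):
    """Build goto/fail/output tables for the Aho-Corasick automaton."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                         # 1) trie of all patterns
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    q = deque(goto[0].values())                # 2) failure links by BFS
    while q:
        s = q.popleft()
        for c, t in goto[s].items():
            f = fail[s]
            while f and c not in goto[f]:
                f = fail[f]
            fail[t] = goto[f][c] if c in goto[f] and goto[f][c] != t else 0
            out[t] |= out[fail[t]]
            q.append(t)
    return goto, fail, out

def search(text, tables):
    """Stream text through the automaton, yielding (end_index, pattern)."""
    goto, fail, out = tables
    s = 0
    for i, c in enumerate(text):
        while s and c not in goto[s]:
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            yield i, p

tables = build(["ACGT", "CGT", "GTA"])
print(sorted(search("TTACGTAC", tables)))  # matches end at indices 5, 5 and 6
```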

  2. Systolic architectures for the computation of the discrete Hartley and the discrete cosine transforms based on prime factor decomposition

    SciTech Connect

    Chakrabarti, C. . Dept. of Electrical Engineering); Ja Ja, J. . Dept. of Electrical Engineering)

    1990-11-01

    This paper proposes two-dimensional systolic array implementations for computing the discrete Hartley transform (DHT) and the discrete cosine transform (DCT) when the transform size N is decomposable into mutually prime factors. The existing two-dimensional formulations for the DHT and DCT are modified and the corresponding algorithms are mapped onto two-dimensional systolic arrays. The resulting architecture is fully pipelined with no control units. The hardware design is based on bit-serial, left-to-right, MSB-to-LSB binary arithmetic.
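For reference, the DHT such an architecture computes is H[k] = Σ_n x[n] cas(2πnk/N), with cas(t) = cos(t) + sin(t). A NumPy sketch of this definition, useful for sanity-checking a hardware design, follows; it is a direct O(N²) evaluation, not the systolic scheme.

```python
# Direct discrete Hartley transform (reference definition, O(N^2)).
import numpy as np

def dht(x):
    n = np.arange(len(x))
    cas = lambda t: np.cos(t) + np.sin(t)
    return np.array([np.dot(x, cas(2 * np.pi * n * k / len(x)))
                     for k in range(len(x))])

x = np.random.rand(12)          # N = 12 = 3 * 4, mutually prime factors
print(np.allclose(dht(dht(x)) / len(x), x))  # the DHT is its own (scaled) inverse
```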

  3. Irreversibility of T-Cell Specification: Insights from Computational Modelling of a Minimal Network Architecture

    PubMed Central

    Manesso, Erica; Kueh, Hao Yuan; Freedman, George; Rothenberg, Ellen V.

    2016-01-01

    Background/Objectives A cascade of gene activations under the control of Notch signalling is required during T-cell specification, when T-cell precursors gradually lose the potential to undertake other fates and become fully committed to the T-cell lineage. We elucidate, by computational means, how the gene/protein dynamics of a core transcriptional module governs this important process. Methods We first assembled existing knowledge about transcription factors known to be important for T-cell specification to form a minimal core module consisting of TCF-1, GATA-3, BCL11B, and PU.1, with the aim of dynamical modeling. The model architecture was based on published experimental measurements of the effects on each factor when each of the others is perturbed. While several studies provided gene expression measurements at different stages of T-cell development, pure time series are not available, thus precluding a straightforward study of the dynamical interactions among these genes. We therefore translate stage-dependent data into time series. A feed-forward motif with multiple positive feedbacks can account for the observed delay in the activation of BCL11B, relative to TCF-1 and GATA-3, by Notch signalling. With a novel computational approach, all 32 possible interactions among Notch signalling, TCF-1, and GATA-3 are explored by translating combinatorial logic expressions into differential equations for the BCL11B production rate. Results Our analysis reveals that only 3 of the 32 possible configurations, in which GATA-3 acts as a dimer, are able to explain not only the time delay but, very importantly, also give rise to irreversibility. The winning models explain the data within the 95% confidence region and are consistent with regard to decay rates. Conclusions This first-generation model for early T-cell specification has relatively few players. Yet it explains the gradual transition into a committed state with no return. Encoding logics in a rate equation setting allows determination of
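To make the "logic encoded in a rate equation" idea concrete, the sketch below integrates a hypothetical three-gene system in which BCL11B production requires Notch AND TCF-1 AND dimeric GATA-3. The equations, parameters, and the specific AND configuration are illustrative assumptions, not the paper's fitted model.

```python
# Illustrative ODE sketch: combinatorial logic as a production-rate term.
import numpy as np
from scipy.integrate import solve_ivp

def hill(x, K=1.0, n=2):
    """Hill activation function."""
    return x**n / (K**n + x**n)

def rhs(t, y, notch=1.0):
    tcf1, gata3, bcl11b = y
    d_tcf1 = hill(notch) - 0.5 * tcf1
    d_gata3 = hill(notch) * hill(tcf1) - 0.5 * gata3
    # Example AND-logic: BCL11B needs Notch AND TCF-1 AND dimeric GATA-3.
    d_bcl11b = hill(notch) * hill(tcf1) * hill(gata3**2) - 0.1 * bcl11b
    return [d_tcf1, d_gata3, d_bcl11b]

sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 40, 5)
print(sol.sol(t)[2])  # BCL11B rises with a delay after TCF-1 and GATA-3
```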

  4. Computer architecture providing high-performance and low-cost solutions for fast fMRI reconstruction

    NASA Astrophysics Data System (ADS)

    Chao, Hui; Goddard, J. Iain

    1998-07-01

    Due to the dynamic nature of brain studies in functional magnetic resonance imaging (fMRI), fast pulse sequences such as echo planar imaging (EPI) and spiral are often used for higher temporal resolution. Hundreds of frames of two-dimensional (2-D) images or multiple three-dimensional (3-D) images are often acquired to cover a larger space and time range. Therefore, fMRI often requires much larger data storage, a faster data transfer rate and higher processing power than conventional MRI. In Mercury Computer Systems' PCI-based embedded computer system, the computer architecture allows the concurrent use of a DMA engine for data transfer and the CPU for data processing. This architecture allows a multicomputer to distribute processing and data with minimal time spent transferring data. Different types and numbers of processors are available to optimize system performance for the application. The fMRI reconstruction was first implemented in Mercury's PCI-based embedded computer system using one digital signal processing (DSP) chip, with the host computer running under the Windows NT platform. Double buffers in SRAM or cache were created for concurrent I/O and processing. The fMRI reconstruction was then implemented in parallel using multiple DSP chips. Data transfer and interprocessor synchronization were carefully managed to optimize algorithm efficiency. The image reconstruction times were measured with different numbers of processors ranging from one to 10. With one DSP chip, the time to reconstruct 100 fMRI images measuring 128 × 64 pixels was 1.24 seconds, which is already faster than most existing commercial MRI systems. This PCI-based embedded multicomputer architecture, which shows a nearly linear improvement in performance, provides high performance for fMRI processing. In summary, this embedded multicomputer system allows the choice of computer topologies to fit the specific application to achieve maximum system performance.

  5. Feasibility of a Hybrid Brain-Computer Interface for Advanced Functional Electrical Therapy

    PubMed Central

    Savić, Andrej M.; Malešević, Nebojša M.; Popović, Mirjana B.

    2014-01-01

    We present a feasibility study of a novel hybrid brain-computer interface (BCI) system for advanced functional electrical therapy (FET) of grasp. FET procedure is improved with both automated stimulation pattern selection and stimulation triggering. The proposed hybrid BCI comprises the two BCI control signals: steady-state visual evoked potentials (SSVEP) and event-related desynchronization (ERD). The sequence of the two stages, SSVEP-BCI and ERD-BCI, runs in a closed-loop architecture. The first stage, SSVEP-BCI, acts as a selector of electrical stimulation pattern that corresponds to one of the three basic types of grasp: palmar, lateral, or precision. In the second stage, ERD-BCI operates as a brain switch which activates the stimulation pattern selected in the previous stage. The system was tested in 6 healthy subjects who were all able to control the device with accuracy in a range of 0.64–0.96. The results provided the reference data needed for the planned clinical study. This novel BCI may promote further restoration of the impaired motor function by closing the loop between the “will to move” and contingent temporally synchronized sensory feedback. PMID:24616644

  6. High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications

    SciTech Connect

    Izadi, Mohammad Hadi; Karim, Karim S.

    2006-05-15

    The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is the passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable to high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which limit the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear, thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range, medical x-ray imaging applications that require switching between low-exposure, real-time fluoroscopy and high-exposure radiography.

  7. The dynamic information architecture system : an advanced simulation framework for military and civilian applications.

    SciTech Connect

    Campbell, A. P.; Hummel, J. R.

    1998-01-08

    DIAS, the Dynamic Information Architecture System, is an object-oriented simulation system that was designed to provide an integrating framework in which new or legacy software applications can operate in a context-driven frame of reference. DIAS provides a flexible and extensible mechanism to allow disparate, mixed-language software applications to interoperate. DIAS captures the dynamic interplay between different processes or phenomena in the same frame of reference. Finally, DIAS accommodates a broad range of analysis contexts, with widely varying spatial and temporal resolutions and fidelity.

  8. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    ERIC Educational Resources Information Center

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  9. Important Advances in Technology and Unique Applications to Cardiovascular Computed Tomography

    PubMed Central

    Chaikriangkrai, Kongkiat; Choi, Su Yeon; Nabi, Faisal; Chang, Su Min

    2014-01-01

    For the past decade, multidetector cardiac computed tomography and its main application, coronary computed tomography angiography, have been established as a noninvasive technique for anatomical assessment of coronary arteries. This new era of coronary artery evaluation by coronary computed tomography angiography has arisen from the rapid advancement in computed tomography technology, which has led to massive diagnostic and prognostic clinical studies in various patient populations. This article gives a brief overview of current multidetector cardiac computed tomography systems, developing cardiac computed tomography technologies in both hardware and software fields, innovative radiation exposure reduction measures, multidetector cardiac computed tomography functional studies, and their newer clinical applications beyond coronary computed tomography angiography. PMID:25574342

  11. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced-technology multi-bladed propellers which operate on aircraft at speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing the aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are described, as are user requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  12. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    SciTech Connect

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
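The core solver named above is a standard preconditioned conjugate gradient; a minimal NumPy version with a Jacobi preconditioner is sketched below as a reference for the algorithm (the matrix here is a toy 1-D Laplacian, not a transport system).

```python
# Textbook preconditioned conjugate gradient (reference sketch).
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Solve A x = b for SPD A, with M_inv approximating A^-1."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 50  # toy 1-D Laplacian (SPD tridiagonal matrix)
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
M_inv = np.diag(1.0 / np.diag(A))  # Jacobi preconditioner
x = pcg(A, np.ones(n), M_inv)
print(np.allclose(x, np.linalg.solve(A, np.ones(n)), atol=1e-6))
```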

  13. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of the present development status of these architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  14. Delta: An object-oriented finite element code architecture for massively parallel computers

    SciTech Connect

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.

  15. Computer Aided Design of Ka-Band Waveguide Power Combining Architectures for Interplanetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2006-01-01

    Communication systems for future NASA interplanetary spacecraft require transmitter power ranging from several hundred watts to kilowatts. Several hybrid junctions are considered as elements within a corporate combining architecture for high power Ka-band space traveling-wave tube amplifiers (TWTAs). This report presents the simulated transmission characteristics of several hybrid junctions designed for a low loss, high power waveguide based power combiner.

  16. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  17. SKIRT: An advanced dust radiative transfer code with a user-friendly architecture

    NASA Astrophysics Data System (ADS)

    Camps, P.; Baes, M.

    2015-03-01

    We discuss the architecture and design principles that underpin the latest version of SKIRT, a state-of-the-art open source code for simulating continuum radiation transfer in dusty astrophysical systems, such as spiral galaxies and accretion disks. SKIRT employs the Monte Carlo technique to emulate the relevant physical processes including scattering, absorption and emission by the dust. The code features a wealth of built-in geometries, radiation source spectra, dust characterizations, dust grids, and detectors, in addition to various mechanisms for importing snapshots generated by hydrodynamical simulations. The configuration for a particular simulation is defined at run-time through a user-friendly interface suitable for both occasional and power users. These capabilities are enabled by careful C++ code design. The programming interfaces between components are well defined and narrow. Adding a new feature is usually as simple as adding another class; the user interface automatically adjusts to allow configuring the new options. We argue that many scientific codes, like SKIRT, can benefit from careful object-oriented design and from a friendly user interface, even if it is not a graphical user interface.

  18. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is, however, no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
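As an example of the third-order reconstruction such a framework provides, here is a 1-D Catmull-Rom cubic interpolation sketch in Python. It illustrates the filter mathematics only; the actual implementation would be C++ optimized for SoA access patterns, and the function names here are hypothetical.

```python
# 1-D Catmull-Rom cubic reconstruction sketch (illustrative).
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom spline at t in [0, 1] between p1 and p2."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t**2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t**3)

def reconstruct(samples, x):
    """Reconstruct a value at fractional position x from a 1-D array."""
    i = int(np.floor(x))
    t = x - i
    p = samples[np.clip([i - 1, i, i + 1, i + 2], 0, len(samples) - 1)]
    return catmull_rom(*p, t)

data = np.sin(np.linspace(0, np.pi, 8))
print(reconstruct(data, 3.4))   # smoother than first-order interpolation
```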

  19. Advanced Solid State Nanopores Architectures: From Early Cancer Detection to Nano-electrochemistry

    NASA Astrophysics Data System (ADS)

    Bashir, Rashid

    2013-03-01

    Solid-state nanopores (ssNPs) are potentially low-cost and highly scalable technologies for rapid and reliable sequencing of the human diploid genome for under $1,000. The ssNPs detect ionic current changes while molecules translocate through the pore. Several key challenges must be overcome in order for ssNPs to become ubiquitous in the fields of medical diagnostics and personalized healthcare. One major challenge is to reduce the speed at which DNA translocates through the nanopore from microseconds to milliseconds per nucleotide, enabling reliable identification of single nucleotides. The other major challenge is to improve the sensitivity of the approach, requiring new sensing modalities and novel device architectures. In this paper, we review our recent efforts to (i) develop ssNPs for early cancer detection, (ii) embed graphene electrodes in dielectric nanolaminates to form 3- and 4-terminal nanopore devices, and (iii) demonstrate a nanopore-based structure consisting of stacked graphene and Al2O3 dielectric layers to study electrochemical activity at graphene edges. The electrochemical signal corresponding to the atomically thin graphene layer could also provide a pathway to DNA sequencing. Supported by the National Institutes of Health.

  20. Robust quantum gates and a bus architecture for quantum computing with rare-earth-ion-doped crystals

    SciTech Connect

    Wesenberg, Janus; Moelmer, Klaus

    2003-07-01

    We present a composite pulse controlled phase gate which, together with a bus architecture, improves the feasibility of a recent quantum computing proposal based on rare-earth-ion-doped crystals. The proposed gate operation is tolerant to variations between ions of coupling strengths, pulse lengths, and frequency shifts. In the absence of decoherence effects, it achieves worst case fidelities above 0.999 with relative variations in coupling strength as high as 10% and frequency shifts up to several percent of the resonant Rabi frequency of the laser used to implement the gate. We outline an experiment to demonstrate the creation and detection of maximally entangled states in the system.