Sample records for computing power required

  1. Computer program analyzes and monitors electrical power systems (POSIMO)

    NASA Technical Reports Server (NTRS)

    Jaeger, K.

    1972-01-01

    Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. A computer program to analyze the power system and generate a set of characteristic power system data is described. The application of status indicators to denote different exclusive conditions is presented.

  2. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System Analysis (SEPS) computer program is described. It performs detailed load analysis of the shuttle electric power system, including prediction of energy demands and consumables requirements, along with parametric and special case studies. The functional flow diagram of the SEPS program is presented along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit inputs and fixed data requirements are included. Run procedures and deck setups are described.

  3. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for the computational sciences, including computational physics, is vital because it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid configuration, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  4. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 2: Protocol specification

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This will require a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time because of the need to control the power elements.

  5. Design and Integration of a Three Degrees-of-Freedom Robotic Vehicle with Control Moment Gyro for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Testbed

    DTIC Science & Technology

    2006-09-01

    required directional control for each thruster due to their high precision and equivalent power and computer interface requirements to those for the...Universal Serial Bus) ports, LPT (Line Printing Terminal) and KVM (Keyboard-Video-Mouse) interfaces. Additionally, power is supplied to the computer through...of the IDE cable to the Prometheus Development Kit ACC-IDEEXT. Connect a small drive power connector from the desktop ATX power supply to the ACC

  6. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.

  7. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, ``Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, ``IEEE Recommended Practice for Software Requirements...

  8. Computer program (POWREQ) for power requirements of mass transit vehicles

    DOT National Transportation Integrated Search

    1977-08-01

    This project was performed to develop a computer program suitable for use in systematic analyses requiring estimates of the energy requirements of mass transit vehicles as a function of driving schedules and vehicle size, shape, and gross weight. The...

  9. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power, coupled with improvements in numerical methods and physical modelling in simulation codes, has enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described, along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However, even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process.

  10. Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites

    NASA Astrophysics Data System (ADS)

    2002-03-01

    Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.

  11. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing radar processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, the Intel Haswell Core i7-4770TE and the NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.

  12. EPPRD: An Efficient Privacy-Preserving Power Requirement and Distribution Aggregation Scheme for a Smart Grid.

    PubMed

    Zhang, Lei; Zhang, Jing

    2017-08-07

    A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users' private information. Therefore, improving the efficiency of individual power requirement and distribution to ensure communication reliability while preserving user privacy is a new challenge for the SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying individual requirements in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes.

  13. EPPRD: An Efficient Privacy-Preserving Power Requirement and Distribution Aggregation Scheme for a Smart Grid

    PubMed Central

    Zhang, Lei; Zhang, Jing

    2017-01-01

    A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users' private information. Therefore, improving the efficiency of individual power requirement and distribution to ensure communication reliability while preserving user privacy is a new challenge for the SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying individual requirements in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes. PMID:28783122

  14. Operate a Nuclear Power Plant.

    ERIC Educational Resources Information Center

    Frimpter, Bonnie J.; And Others

    1983-01-01

    Describes classroom use of a computer program originally published in Creative Computing magazine. "The Nuclear Power Plant" (runs on an Apple II with 48K memory) simulates the operation of a nuclear generating station, requiring students to make decisions as they assume the task of managing the plant. (JN)

  15. 78 FR 47015 - Software Requirement Specifications for Digital Computer Software Used in Safety Systems of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Requirement Specifications for Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission... issuing a revised regulatory guide (RG), revision 1 of RG 1.172, ``Software Requirement Specifications for...

  16. Minimization search method for data inversion

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1975-01-01

    A technique has been developed for determining the values of selected subsets of independent variables in mathematical formulations. The required computation time increases with the first power of the number of variables, in contrast with classical minimization methods, for which computation time increases with the third power of the number of variables.
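    The scaling contrast stated in this abstract can be written compactly; the exponents are the ones quoted above, and the ratio is just their immediate consequence:

    ```latex
    t_{\text{search}} \propto n, \qquad
    t_{\text{classical}} \propto n^{3}, \qquad
    \frac{t_{\text{classical}}}{t_{\text{search}}} \sim n^{2}
    ```

    so doubling the number of variables roughly doubles the cost of the search method but increases the classical cost about eightfold.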

  17. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

    In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable sources cause frequent power failures, so the data being processed must be backed up when a power failure occurs. Unless data are safely backed up before the power supply diminishes, reinitialization is required when power is restored, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories enables a faster backup than a conventional volatile computer system, leading to higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a reduction in energy of a few orders of magnitude in comparison with a volatile processor with SRAM.

  18. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    PubMed

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, because of communication and computational limitations, the currently used supervisory control and data acquisition (SCADA) systems can only deliver delayed information. The deployment of synchrophasor measurement devices makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two developed CCGNN and CCMLPN systems were then implemented on two power systems of different scales, one of which included a large photovoltaic plant. A real-time power system simulator, together with a weather station, in the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, was then used to derive typical FSI results.

  19. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations generally do not have the computing power to tackle complex scientific applications on their own, making them primarily useful for visualization, data reduction, and filtering. There is a tremendous amount of computing power that is left unused in a network of workstations; very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  20. Power monitoring and control for large scale projects: SKA, a case study

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis

    2016-07-01

    Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage, management, and, above all, power requirements. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for the data processing to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing the generation and distribution of electricity at the system level are paramount to avoiding future inefficiencies and higher costs and to enabling fulfillment of the key science cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.

  1. Application of supercomputers to computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1984-01-01

    Computers are playing an increasingly important role in the field of aerodynamics such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. Example results obtained from the successively refined forms of the governing equations are discussed, both in the context of levels of computer power required and the degree to which they either further the frontiers of research or apply to problems of practical importance. Finally, the Numerical Aerodynamic Simulation (NAS) Program - with its 1988 target of achieving a sustained computational rate of 1 billion floating point operations per second and operating with a memory of 240 million words - is discussed in terms of its goals and its projected effect on the future of computational aerodynamics.

  2. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, U.A.; Baumle, B.; Kohler, P.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  3. Storage peak gas-turbine power unit

    NASA Technical Reports Server (NTRS)

    Tsinkotski, B.

    1980-01-01

    A storage gas-turbine power plant using a two-cylinder compressor with intermediate cooling is studied. On the basis of the measured characteristics of a 0.25 MW compressor, computer calculations of the parameters of the loading process of a constant-capacity storage unit (05.3 million cu m) were carried out. The required compressor power as a function of time, with and without final cooling, was computed. Parameters of maximum loading and discharging of the storage unit were calculated, and it was found that for the complete loading of a fully unloaded storage unit, a capacity of 1 to 1.5 million cubic meters is required, depending on the final cooling.

  4. Using a cloud to replenish parched groundwater modeling efforts.

    PubMed

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  5. Using a cloud to replenish parched groundwater modeling efforts

    USGS Publications Warehouse

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  6. Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing

    NASA Astrophysics Data System (ADS)

    Kim, Mooseop; Ryou, Jaecheol

    The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and draws about 1.1 mA on a 0.25 μm CMOS process.

  7. Bounds on the power of proofs and advice in general physical theories.

    PubMed

    Lee, Ciarán M; Hoban, Matty J

    2016-06-01

    Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text] which does not make use of any uniquely quantum structure, such as the fact that observables correspond to self-adjoint operators, and thus may be of independent interest.

  8. Power module Data Management System (DMS) study

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Computer trades and analyses of selected Power Module Data Management Subsystem issues to support the concurrent in-house MSFC Power Study are provided. The charts which summarize and describe the results are presented. Software requirements and definitions are included.

  9. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a quad-core processor, giving the cluster eight cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI "Hello" program written in C. In addition, a performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer; the same code was run using a single processor, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time is roughly halved each time the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU machine, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
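    As a concrete illustration of the two tests described above, the sketch below reproduces them with Python's mpi4py bindings rather than the authors' C program; mpi4py, a working MPI runtime, and the file name cluster_test.py are assumptions, and the workload is purely illustrative.

    ```python
    # Sketch of the communication and performance tests described above,
    # using mpi4py instead of the authors' C program (assumed available).
    # Run with, e.g.:  mpiexec -n 8 python cluster_test.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Communication test: every rank reports in, analogous to "MPI Hello".
    print(f"Hello from rank {rank} of {size}")

    # Performance test: time an embarrassingly parallel workload split across ranks.
    N = 10_000_000                    # total amount of work (illustrative)
    chunk = range(rank, N, size)      # interleaved decomposition

    comm.Barrier()
    t0 = time.time()
    local_sum = sum(i * i for i in chunk)
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    comm.Barrier()

    if rank == 0:
        # Ideally the elapsed time halves each time the process count doubles,
        # which is the trend reported in the abstract.
        print(f"{size} processes: sum={total}, elapsed={time.time() - t0:.3f} s")
    ```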

  10. Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks

    PubMed Central

    Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa

    2012-01-01

    While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions appears as a candidate technology: although power use is higher compared with lower-power devices, execution time is reduced, so overall energy could be reduced. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971

  11. Using SRAM based FPGAs for power-aware high performance wireless sensor networks.

    PubMed

    Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa

    2012-01-01

    While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions appears as a candidate technology: although power use is higher compared with lower-power devices, execution time is reduced, so overall energy could be reduced. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements.

  12. Computational fluid dynamics at NASA Ames and the numerical aerodynamic simulation program

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1985-01-01

    Computers are playing an increasingly important role in the field of aerodynamics, such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. The four main areas of computational aerodynamics research at NASA Ames Research Center which are directed toward extending the state of the art are identified and discussed. Example results obtained from approximate forms of the governing equations are presented and discussed, both in the context of levels of computer power required and the degree to which they either further the frontiers of research or apply to programs of practical importance. Finally, the Numerical Aerodynamic Simulation Program, with its 1988 target of achieving a sustained computational rate of 1 billion floating-point operations per second, is discussed in terms of its goals, status, and projected effect on the future of computational aerodynamics.

  13. Mapping suitability areas for concentrated solar power plants using remote sensing data

    DOE PAGES

    Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.

    2015-05-14

    The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topography Mission (SRTM) at a resolution of 3 arc seconds (approximately 90 m) to estimate global solar radiation for the study area. We then develop a computational model built on a Geographic Information System (GIS) platform that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on the applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS data sets are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell, with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. We also present results for concentrated solar power driving a steam turbine and for a parabolic mirror connected to a Stirling engine.
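    The per-cell scoring step described above (an equal-weight algebraic sum of metrics over a grid of cells) is easy to sketch; the three raster layers below are hypothetical stand-ins for the paper's eight metrics and roughly 30 GIS data sets.

    ```python
    # Minimal sketch of the equal-weight suitability scoring described above.
    # The real model computes eight metrics from ~30 GIS layers; here three
    # hypothetical, already-normalized raster layers stand in for them.
    import numpy as np

    rows, cols = 100, 100
    rng = np.random.default_rng(0)

    # Each metric is a grid of per-cell scores in [0, 1]; 0 = unsuitable, 1 = ideal.
    solar_potential   = rng.random((rows, cols))          # e.g. from an SRTM-derived radiation model
    low_pop_density   = rng.random((rows, cols))          # inverse of population density
    outside_fed_lands = rng.integers(0, 2, (rows, cols))  # binary exclusion layer

    metrics = [solar_potential, low_pop_density, outside_fed_lands]

    # Equal weights: the suitability value is the algebraic sum of all metrics.
    suitability = np.sum(metrics, axis=0)

    # Cells can then be colored by suitability value, as in the paper.
    print("best cell:", np.unravel_index(np.argmax(suitability), suitability.shape))
    ```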

  14. Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.

  15. Magnetic Flux Compression Using Detonation Plasma Armatures and Superconductor Stators: Integrated Propulsion and Power Applications

    NASA Technical Reports Server (NTRS)

    Litchford, Ron; Robertson, Tony; Hawk, Clark; Turner, Matt; Koelfgen, Syri

    1999-01-01

    This presentation discusses the use of magnetic flux compression for spaceflight propulsion and other power applications. The qualities that make this technology suitable for spaceflight propulsion and power are its high power density, its ability to deliver multimegawatt energy bursts and terawatt power bursts, its ability to produce the pulse power needed for low-impedance dense plasma devices (e.g., pulsed fusion drivers), and its ability to produce direct thrust. The trade-off between metal and plasma armatures is discussed; the requirements for high energy output and fast pulse rise time call for a high-speed armature, and a plasma armature enables repetitive firing. The issues concerning the high-temperature superconductor stator are also discussed, and the concept of the radial-mode pulse power generator is described. The proposed research strategy combines computational modeling (i.e., magnetohydrodynamic computation and finite element modeling) with laboratory experiments to create a demonstration device.

  16. Model implementation for dynamic computation of system cost

    NASA Astrophysics Data System (ADS)

    Levri, J.; Vaccari, D.

    The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling, and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water, and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume, and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon the simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
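    The static ESM calculation that the abstract contrasts with the dynamic model is commonly written in the ALS literature as a weighted sum over equivalency factors; the form below is that standard expression rather than anything specific to this paper, with the dynamic model in effect evaluating the same sum over time so that peak sizing is captured:

    ```latex
    \mathrm{ESM} = M + V \cdot V_{\mathrm{eq}} + P \cdot P_{\mathrm{eq}}
                 + C \cdot C_{\mathrm{eq}} + CT \cdot D \cdot CT_{\mathrm{eq}}
    ```

    where M is hardware mass, V volume, P power, C cooling, CT crewtime per unit time, D the mission duration, and the "eq" factors are the mass-equivalency coefficients that convert each resource to kilograms.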

  17. Thermal and Power Challenges in High Performance Computing Systems

    NASA Astrophysics Data System (ADS)

    Natarajan, Venkat; Deshpande, Anand; Solanki, Sudarshan; Chandrasekhar, Arun

    2009-05-01

    This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new sophisticated applications in highly diverse areas such as health, education, finance, entertainment, etc. is driving the platform and device requirements for future systems. The key ingredients of future platforms are vertically integrated (3D) die-stacked devices which provide the required performance characteristics with the associated form factor advantages. Two of the major challenges to the design of through silicon via (TSV) based 3D stacked technologies are (i) effective thermal management and (ii) efficient power delivery mechanisms. Some of the key challenges that are articulated in this paper include hot-spot superposition and intensification in a 3D stack, design/optimization of thermal through silicon vias (TTSVs), non-uniform power loading of multi-die stacks, efficient on-chip power delivery, minimization of electrical hotspots etc.

  18. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers is conducting an extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model using an enhanced modern structured analysis (EMSA) methodology. Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  19. Factors affecting frequency and orbit utilization by high power transmission satellite systems.

    NASA Technical Reports Server (NTRS)

    Kuhns, P. W.; Miller, E. F.; O'Malley, T. A.

    1972-01-01

    The factors affecting the sharing of the geostationary orbit by high power (primarily television) satellite systems having the same or adjacent coverage areas and by satellites occupying the same orbit segment are examined and examples using the results of computer computations are given. The factors considered include: required protection ratio, receiver antenna patterns, relative transmitter power, transmitter antenna patterns, satellite grouping, and coverage pattern overlap. The results presented indicate the limits of system characteristics and orbit deployment which can result from mixing systems.

  20. Factors affecting frequency and orbit utilization by high power transmission satellite systems

    NASA Technical Reports Server (NTRS)

    Kuhns, P. W.; Miller, E. F.; Malley, T. A.

    1972-01-01

    The factors affecting the sharing of the geostationary orbit by high power (primarily television) satellite systems having the same or adjacent coverage areas and by satellites occupying the same orbit segment are examined and examples using the results of computer computations are given. The factors considered include: required protection ratio, receiver antenna patterns, relative transmitter power, transmitter antenna patterns, satellite grouping, and coverage pattern overlap. The results presented indicate the limits of system characteristics and orbit deployment which can result from mixing systems.

  1. Smart Sampling and HPC-based Probabilistic Look-ahead Contingency Analysis Implementation and its Evaluation with Real-world Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Etingov, Pavel V.; Ren, Huiying

    This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore the time required to compute them. HPC techniques are used to further reduce computational time. Together, these two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.

  2. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations (the U.S. Nuclear Regulatory Commission, the Nuclear Energy Institute, and the International Atomic Energy Agency) related to the protection of information technology resources, primarily digital controls, computer resources, and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  3. A site oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACPMAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, T.; Atac, R.; Cook, A.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data memory and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs., 2 tabs.

  4. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers, as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of computation in a data center is becoming limited by the amount of available power, space, and cooling capacity. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows dynamically change away from any 3-D computational fluid dynamic model predicted during construction, and as time goes by, the efficiency and effectiveness of the actual cooling depart ever farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.

  5. The mass of massive rover software

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1993-01-01

    A planetary rover, like a spacecraft, must be fully self contained. Once launched, a rover can only receive information from its designers, and if solar powered, power from the Sun. As the distance from Earth increases, and the demands for power on the rover increase, there is a serious tradeoff between communication and computation. Both of these subsystems are very power hungry, and both can be the major driver of the rover's power subsystem, and therefore of the minimum mass and size of the rover. This tradeoff is discussed, along with software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall rover mass to be greatly reduced.

  6. Plasmonic computing of spatial differentiation

    NASA Astrophysics Data System (ADS)

    Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui

    2017-05-01

    Optical analog computing offers high-throughput, low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation, and we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
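    To see why spatial differentiation amounts to edge detection, the sketch below applies a first-order spatial derivative to a synthetic image in numpy; it illustrates only the mathematical operation that the plasmonic interface performs optically, not the device physics.

    ```python
    # A first-order derivative along x is near zero in uniform regions and
    # large where the image intensity changes abruptly, so differentiation
    # highlights edges. The plasmonic device is not modeled here.
    import numpy as np

    # Synthetic image: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0

    # Finite-difference approximation of d/dx (the analog computation).
    edges_x = np.abs(np.diff(img, axis=1))

    print("response inside the square:", edges_x[32, 20:40].max())  # ~0 (flat region)
    print("response at the boundary:  ", edges_x[32, :].max())      # ~1 (edge)
    ```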

  7. Integrated Computer-Aided Drafting Instruction (ICADI).

    ERIC Educational Resources Information Center

    Chen, C. Y.; McCampbell, David H.

    Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…

  8. Space Station power distribution and control

    NASA Technical Reports Server (NTRS)

    Willis, A. H.

    1986-01-01

    A general description of the Space Station is given, and the basic requirements of the power distribution and control system are presented. The dual-bus and branch-circuit concepts are discussed, and a computer control method is presented.

  9. Cloud computing for comparative genomics with windows azure platform.

    PubMed

    Kim, Insik; Jung, Jae-Yoon; Deluca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services.

  10. Cloud Computing for Comparative Genomics with Windows Azure Platform

    PubMed Central

    Kim, Insik; Jung, Jae-Yoon; DeLuca, Todd F.; Nelson, Tristan H.; Wall, Dennis P.

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services. PMID:23032609

  11. REGENERATIVE TRANSISTOR AMPLIFIER

    DOEpatents

    Kabell, L.J.

    1958-11-25

    Electrical circuits for use in computers and the like are described, particularly a regenerative bistable transistor amplifier which is turned on by a clock signal when an information signal permits and is turned off by the clock signal. The amplifier performs the above function with reduced power requirements for the clock signal and circuit operation. The power requirements are reduced in one way by employing transformer coupling, which increases the collector circuit efficiency by eliminating the loss of power in the collector load resistor.

  12. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
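    The core of the Fourier-collocation gradient mentioned above can be shown in a few lines; this is a generic spectral-derivative sketch in numpy, not the coupled nonlinear k-space model itself, and it illustrates why so few grid points per wavelength are needed compared with finite differences.

    ```python
    # Fourier-collocation gradient used by pseudospectral schemes:
    # differentiate in k-space by multiplying with i*k, then transform back.
    import numpy as np

    N = 64                       # grid points
    L = 2 * np.pi                # domain length
    x = np.arange(N) * L / N
    u = np.sin(3 * x)            # test field with exact derivative 3*cos(3x)

    k     = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
    du_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

    # Spectral accuracy: the error sits at machine precision even on a coarse
    # grid, which is what relaxes the points-per-wavelength requirement.
    print("max error vs. exact derivative:", np.abs(du_dx - 3 * np.cos(3 * x)).max())
    ```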

  13. Unity Power Factor Operated PFC Converter Based Power Supply for Computers

    NASA Astrophysics Data System (ADS)

    Singh, Shikha; Singh, Bhim; Bhuvaneswari, G.; Bist, Vashist

    2017-11-01

    Power supplies (PSs) employed in personal computers pollute the single-phase ac mains by drawing distorted current at a substandard power factor (PF). The harmonic distortion of the supply current in these personal computers is observed to be 75% to 90%, with a very high crest factor (CF), which escalates losses in the distribution system. To find a tangible solution to these issues, a non-isolated PFC converter is employed at the input of the isolated converter; it improves the input power quality while regulating the dc voltage at its output. This voltage feeds the isolated stage, which yields completely isolated and stiffly regulated multiple output voltages, the prime requirement of a computer PS. The operation of the proposed PS is evaluated under various operating conditions, and the results show improved performance with nearly unity PF and low input current harmonics. A prototype of this PS was developed in a laboratory environment, and the recorded test results corroborate the power quality improvement observed in simulation under various operating conditions.
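    For reference, the two power-quality figures quoted above can be computed from a sampled input-current waveform as in the sketch below; the waveform here is synthetic (a narrow pulse per half-cycle, roughly what an uncorrected rectifier-capacitor front end draws), not measured data from the paper.

    ```python
    # Computing crest factor and THD from a sampled input-current waveform.
    import numpy as np

    fs, f0, cycles = 50_000, 50, 10                  # sample rate, mains frequency, record length
    t = np.arange(cycles * fs // f0) / fs
    s = np.sin(2 * np.pi * f0 * t)
    i_in = np.where(np.abs(s) > 0.95, np.sign(s), 0.0)   # peaky current pulses near each crest

    # Crest factor: peak value over RMS value.
    cf = np.max(np.abs(i_in)) / np.sqrt(np.mean(i_in ** 2))

    # THD: harmonic content relative to the fundamental, from an FFT of whole cycles.
    spectrum = np.abs(np.fft.rfft(i_in)) / len(i_in)
    fund_bin = cycles                                 # record holds `cycles` periods, so f0 sits in this bin
    fund = spectrum[fund_bin]
    harmonics = spectrum[2 * fund_bin::fund_bin]      # bins of the 2nd, 3rd, ... harmonics
    thd = np.sqrt(np.sum(harmonics ** 2)) / fund

    print(f"crest factor = {cf:.2f}, THD = {100 * thd:.0f}%")
    ```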

  14. Computer Aided Design of Ka-Band Waveguide Power Combining Architectures for Interplanetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2006-01-01

    Communication systems for future NASA interplanetary spacecraft require transmitter power ranging from several hundred watts to kilowatts. Several hybrid junctions are considered as elements within a corporate combining architecture for high power Ka-band space traveling-wave tube amplifiers (TWTAs). This report presents the simulated transmission characteristics of several hybrid junctions designed for a low loss, high power waveguide based power combiner.

  15. Research on spacecraft electrical power conversion

    NASA Technical Reports Server (NTRS)

    Wilson, T. G.

    1983-01-01

    The history of spacecraft electrical power conversion in the literature, research, and practice is reviewed. It is noted that the design techniques, analyses, and understanding which were developed contribute to today's power conversion for computer and communication installations. New applications which require more power, improved dynamic response, greater reliability, and lower cost are outlined. The switching-mode approach to electronic power conditioning is discussed. Technical aspects of the research are summarized.

  16. Fast Computation and Assessment Methods in Power System Analysis

    NASA Astrophysics Data System (ADS)

    Nagata, Masaki

    Power system analysis is essential for efficient and reliable power system operation and control. Recently, online security assessment systems have become important, as more efficient use of power networks is required. In this article, fast power system analysis techniques such as contingency screening, parallel processing, and intelligent systems application are briefly surveyed from the viewpoint of their application to online dynamic security assessment.

  17. Microprocessor control and networking for the amps breadboard

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1987-01-01

    Future space missions will require more sophisticated power systems, implying higher costs and more extensive crew and ground support involvement. To decrease this human involvement, as well as to protect and most efficiently utilize this important resource, NASA has undertaken major efforts to promote progress in the design and development of autonomously managed power systems. Two areas being actively pursued are autonomous power system (APS) breadboards and knowledge-based expert system (KBES) applications. The former are viewed as a requirement for the timely development of the latter. Not only will they serve as final testbeds for the various KBES applications, but will also play a major role in the knowledge engineering phase of their development. The current power system breadboard designs are of a distributed microprocessor nature. The distributed nature, plus the need to connect various external computer capabilities (i.e., conventional host computers and symbolic processors), places major emphasis on effective networking. The communications and networking technologies for the first power system breadboard/test facility are described.

  18. Service Mediation and Negotiation Bootstrapping as First Achievements Towards Self-adaptable Cloud Services

    NASA Astrophysics Data System (ADS)

    Brandic, Ivona; Music, Dejan; Dustdar, Schahram

    Nowadays, novel computing paradigms such as Cloud Computing are gaining more and more importance. In the case of Cloud Computing, users pay for the usage of the computing power provided as a service. Beforehand, they can negotiate specific functional and non-functional requirements relevant to the application execution. However, providing computing power as a service poses different research challenges. On the one hand, dynamic, versatile, and adaptable services are required, which can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this chapter we present the first results in establishing adaptable, versatile, and dynamic services, considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Cloud services, bridging the gap between current QoS models and Cloud middleware and representing important prerequisites for the establishment of autonomic Cloud services.

  19. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  20. Simulation Models for the Electric Power Requirements in a Guideway Transit System

    DOT National Transportation Integrated Search

    1980-04-01

    This report describes a computer simulation model developed at the Transportation Systems Center to study the electrical power distribution characteristics of Automated Guideway Transit (AGT) systems. The objective of this simulation effort is to pro...

  1. COMPUTER TECHNOLOGY AND SOCIAL CHANGE,

    DTIC Science & Technology

    This paper presents a discussion of the social, political, economic and psychological problems associated with the rapid growth and development of...public officials and responsible groups is required to increase public understanding of the computer as a powerful tool, to select appropriate

  2. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  3. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  4. Digital computer study of nuclear reactor thermal transients during startup of 60-kWe Brayton power conversion system

    NASA Technical Reports Server (NTRS)

    Jefferies, K. S.; Tew, R. C.

    1974-01-01

    A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.

  5. 49 CFR 232.15 - Movement of defective equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the safe repair of the car. (d) Computation of percent operative power brakes. (1) The percentage of operative power brakes in a train shall be based on the number of control valves in the train. The... contained on the stencil, sticker, or badge plate required by § 232.103(g) for considering the power brakes...

  6. Clarkson First College to Require Computer Literacy.

    ERIC Educational Resources Information Center

    Technological Horizons in Education, 1983

    1983-01-01

    Freshmen at Clarkson College of Technology (Potsdam, NY) will be issued a Zenith microcomputer. Every aspect of Clarkson's curriculum will be redesigned to capitalize on the new computing and word processing power. Students will pay $200/semester and a one-time $200 maintenance fee and will keep the computer when they graduate. (Author/JN)

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  8. JESS facility modification and environmental/power plans

    NASA Technical Reports Server (NTRS)

    Bordeaux, T. A.

    1984-01-01

    Preliminary plans for facility modifications and environmental/power systems for the JESS (Joint Exercise Support System) computer laboratory and Freedom Hall are presented. Blueprints are provided for each of the facilities and an estimate of the air conditioning requirements is given.

  9. An assessment of future computer system needs for large-scale computation

    NASA Technical Reports Server (NTRS)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  10. Low-power, transparent optical network interface for high bandwidth off-chip interconnects.

    PubMed

    Liboiron-Ladouceur, Odile; Wang, Howard; Garg, Ajay S; Bergman, Keren

    2009-04-13

    The recent emergence of multicore architectures and chip multiprocessors (CMPs) has accelerated the bandwidth requirements in high-performance processors for both on-chip and off-chip interconnects. For next generation computing clusters, the delivery of scalable power efficient off-chip communications to each compute node has emerged as a key bottleneck to realizing the full computational performance of these systems. The power dissipation is dominated by the off-chip interface and the necessity to drive high-speed signals over long distances. We present a scalable photonic network interface approach that fully exploits the bandwidth capacity offered by optical interconnects while offering significant power savings over traditional E/O and O/E approaches. The power-efficient interface optically aggregates electronic serial data streams into a multiple WDM channel packet structure at time-of-flight latencies. We demonstrate a scalable optical network interface with 70% improvement in power efficiency for a complete end-to-end PCI Express data transfer.

  11. Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination

    PubMed Central

    2012-01-01

    Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive, and requires complex numerical operations and large memory resources. Substantial hardware resources are therefore needed for hardware implementations of PCA. The General Hebbian algorithm (GHA) has been proposed for calculating PCs of neuronal spikes in our previous work, which eliminates the need for computationally expensive covariance analysis and eigenvalue decomposition in conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. Such a large memory will consume large hardware resources and contribute significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction of large hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resources and power consumption of hardware implementations, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed for evaluating the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memories achieved by the streaming computing. Results and discussion: Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% logic resources, 95.1% power consumption and 86.8% computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% memory resources and 67% power consumption can be saved when compared with the direct implementation of GHA. Conclusion: The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
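
    The core of the approach above is the Generalized Hebbian Algorithm, which updates principal-component estimates one sample at a time instead of storing the data and decomposing a covariance matrix. The NumPy sketch below shows Sanger's rule in that streaming form; it illustrates the general algorithm only, not the paper's specific eigenfilter or its FPGA implementation, and the toy covariance matrix is an illustrative assumption.

      import numpy as np

      def gha_update(W, x, lr=0.005):
          """One streaming update of the Generalized Hebbian Algorithm
          (Sanger's rule). W has shape (n_components, n_features)."""
          y = W @ x                                                # project the sample
          W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
          return W

      # Toy demo: recover the principal directions of correlated 2-D data one
      # sample at a time, without ever storing the data set or its covariance.
      rng = np.random.default_rng(0)
      C = np.array([[3.0, 1.2], [1.2, 1.0]])                      # true covariance
      chol = np.linalg.cholesky(C)
      W = rng.normal(scale=0.1, size=(2, 2))
      for _ in range(20_000):
          W = gha_update(W, chol @ rng.normal(size=2))
      W_unit = W / np.linalg.norm(W, axis=1, keepdims=True)
      print(np.round(W_unit, 3))   # rows approach the eigenvectors of C (up to sign)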

  12. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  13. Development of spectral analysis math models and software program and spectral analyzer, digital converter interface equipment design

    NASA Technical Reports Server (NTRS)

    Hayden, W. L.; Robinson, L. H.

    1972-01-01

    Spectral analysis of angle-modulated communication systems is studied by: (1) performing a literature survey of candidate power spectrum computational techniques, determining the computational requirements, and formulating a mathematical model satisfying these requirements; (2) implementing the model on the UNIVAC 1230 digital computer as the Spectral Analysis Program (SAP); and (3) developing the hardware specifications for a data acquisition system which will acquire an input modulating signal for SAP. The SAP computational technique uses an extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals.
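
    As a point of reference for the FFT-based approach mentioned above, the sketch below estimates the one-sided power spectrum of a phase-modulated carrier with a single windowed FFT. It is a generic periodogram, not the SAP algorithm itself; the carrier frequency, modulation index, and sample rate are illustrative assumptions.

      import numpy as np

      def power_spectrum(x, fs):
          """One-sided, Hann-windowed periodogram (power spectral density)."""
          w = np.hanning(len(x))
          X = np.fft.rfft(x * w)
          psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))
          psd[1:-1] *= 2.0                       # fold in negative frequencies
          return np.fft.rfftfreq(len(x), 1 / fs), psd

      # Example: a 10 kHz carrier phase-modulated by a 500 Hz tone (beta = 1).
      fs, fc, fm, beta = 100_000, 10_000, 500, 1.0
      t = np.arange(0, 0.1, 1 / fs)
      s = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
      f, psd = power_spectrum(s, fs)
      print(f[np.argmax(psd)])                   # strongest line sits at the carrier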

  14. Thermoelectric-Driven Autonomous Sensors for a Biomass Power Plant

    NASA Astrophysics Data System (ADS)

    Rodríguez, A.; Astrain, D.; Martínez, A.; Gubía, E.; Sorbet, F. J.

    2013-07-01

    This work presents the design and development of a thermoelectric generator intended to harness waste heat in a biomass power plant, and generate electric power to operate sensors and the required electronics for wireless communication. The first objective of the work is to design the optimum thermoelectric generator to harness heat from a hot surface, and generate electric power to operate a flowmeter and a wireless transmitter. The process is conducted by using a computational model, presented in previous papers, to determine the final design that meets the requirements of electric power consumption and number of transmissions per minute. Finally, the thermoelectric generator is simulated to evaluate its performance. The final device transmits information every 5 s. Moreover, it is completely autonomous and can be easily installed, since no electric wires are required.

  15. Programmable neural processing on a smartdust for brain-computer interfaces.

    PubMed

    Yuwen Sun; Shimeng Huang; Oresko, Joseph J; Cheng, Allen C

    2010-10-01

    Brain-computer interfaces (BCIs) offer tremendous promise for improving the quality of life for disabled individuals. BCIs use spike sorting to identify the source of each neural firing. To date, spike sorting has been performed by either using off-chip analysis, which requires a wired connection penetrating the skull to a bulky external power/processing unit, or via custom application-specific integrated circuits that lack the programmability to perform different algorithms and upgrades. In this research, we propose and test the feasibility of performing on-chip, real-time spike sorting on a programmable smartdust, including feature extraction, classification, compression, and wireless transmission. A detailed power/performance tradeoff analysis using DVFS is presented. Our experimental results show that the execution time and power density meet the requirements to perform real-time spike sorting and wireless transmission on a single neural channel.

  16. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
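
    For context, the conventional baseline the abstract compares against is direct evaluation of the received polynomial at powers of a primitive element. The sketch below does exactly that for a small Reed-Solomon code over GF(2^4); the field, primitive polynomial, and code parameters are illustrative assumptions, and the paper's transform-based speedup is not reproduced here.

      def build_gf16_tables(prim_poly=0b10011):
          """Exponential and log tables for GF(2^4), generated by the
          primitive polynomial x^4 + x + 1 with alpha = 2."""
          exp, log = [0] * 30, [0] * 16
          x = 1
          for i in range(15):
              exp[i] = x
              log[x] = i
              x <<= 1
              if x & 0x10:                 # reduce modulo the primitive polynomial
                  x ^= prim_poly
          for i in range(15, 30):          # duplicated tail avoids modular indexing
              exp[i] = exp[i - 15]
          return exp, log

      EXP, LOG = build_gf16_tables()

      def gf_mul(a, b):
          return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

      def syndromes(received, count):
          """Direct syndrome computation S_i = r(alpha^i), i = 1..count,
          via Horner's rule (highest-degree coefficient first)."""
          out = []
          for i in range(1, count + 1):
              alpha_i = EXP[i % 15]
              s = 0
              for coeff in received:
                  s = gf_mul(s, alpha_i) ^ coeff
              out.append(s)
          return out

      # A received word for an RS(15, 11) code (t = 2, so four syndromes).
      # All-zero syndromes would indicate an error-free codeword.
      r = [1, 3, 7, 0, 5, 2, 9, 4, 0, 8, 6, 1, 0, 2, 3]
      print(syndromes(r, 4))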

  17. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    NASA Astrophysics Data System (ADS)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.

  18. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  19. Application of Blind Quantum Computation to Two-Party Quantum Computation

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-06-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  20. Application of Blind Quantum Computation to Two-Party Quantum Computation

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-03-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  1. Driven by Power? Probe Question and Presentation Format Effects on Causal Judgment

    ERIC Educational Resources Information Center

    Perales, Jose C.; Shanks, David R.

    2008-01-01

    It has been proposed that causal power (defined as the probability with which a candidate cause would produce an effect in the absence of any other background causes) can be intuitively computed from cause-effect covariation information. Estimation of power is assumed to require a special type of counterfactual probe question, worded to remove…
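
    The quantity referred to above has a standard closed form in Cheng's power PC theory: generative power is the contingency delta-P rescaled by how often the effect is absent when the cause is absent. A minimal sketch, with made-up contingency values for illustration:

      def causal_power(p_e_given_c, p_e_given_not_c):
          """Cheng's generative causal power: the estimated probability that the
          candidate cause c produces effect e, assuming independent background
          causes. delta_p is the ordinary contingency."""
          delta_p = p_e_given_c - p_e_given_not_c
          return delta_p / (1.0 - p_e_given_not_c)

      # Effect occurs on 75% of trials with the cause present and 25% without it:
      print(causal_power(0.75, 0.25))   # delta-P = 0.50, power = 0.50 / 0.75 ~ 0.67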

  2. A Framework to Design the Computational Load Distribution of Wireless Sensor Networks in Power Consumption Constrained Environments

    PubMed Central

    Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier

    2018-01-01

    In this paper, we present a work based on the computational load distribution among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers to take decisions about computational load distribution for those WSNs where power consumption is a key issue (when we refer to "framework" in this work, we are considering it as a support tool to make decisions where the executive judgment can be included along with the set of mathematical tools of the WSN designer; this work shows the need to include the load distribution as an integral component of the WSN system for making early decisions regarding energy consumption). The framework takes advantage of the idea that balancing sensor node and Hub/Sink computational load can lead to improved energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial and it takes into account related issues such as the required data distribution, and node and Hub/Sink connectivity and availability due to their connectivity features and duty-cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, low costs due to vegetable selling prices and the lack of continuous power supplies may lead to viable or inviable sensing solutions for the farmers. The proposed framework systematizes the required complex calculations and facilitates them for WSN designers, taking into account the most relevant variables regarding power consumption and avoiding full, partial, or prototype implementations and measurements of the different potential computational load distribution solutions for a specific WSN. PMID:29570645

  3. Reactor transient control in support of PFR/TREAT TUCOP experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrows, D.R.; Larsen, G.R.; Harrison, L.J.

    1984-01-01

    Unique energy deposition and experiment control requirements posed by the PFR/TREAT series of transient undercooling/overpower (TUCOP) experiments resulted in equally unique TREAT reactor operations. New reactor control computer algorithms were written and used with the TREAT reactor control computer system to perform such functions as early power burst generation (based on test train flow conditions), burst generation produced by a step insertion of reactivity following a controlled power ramp, and shutdown (SCRAM) initiators based on both test train conditions and energy deposition. Specialized hardware was constructed to simulate test train inputs to the control computer system so that computer algorithms could be tested in real time without irradiating the experiment.

  4. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  5. The multi-disciplinary design study: A life cycle cost algorithm

    NASA Technical Reports Server (NTRS)

    Harding, R. R.; Pichi, F. J.

    1988-01-01

    The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.

  6. Subsonic aircraft: Evolution and the matching of size to performance

    NASA Technical Reports Server (NTRS)

    Loftin, L. K., Jr.

    1980-01-01

    Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.

  7. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    NASA Astrophysics Data System (ADS)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

    Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.

  8. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of its charging system. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single phase and 22 kW three phase IPT systems for Renault ZOE are designed in MATLAB/Simulink. The results obtained based on the technical specifications of the lithium-ion battery and charger type of Renault ZOE show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.
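
    The physics underlying such IPT models can be captured with a two-mesh phasor circuit. The sketch below solves a series-series compensated coupled-coil link for its steady-state currents and load power; it is a simplified stand-in for the paper's MATLAB/Simulink models, and the 85 kHz frequency, coil values, and load resistance are illustrative assumptions rather than Renault ZOE parameters.

      import numpy as np

      def ipt_output_power(Vs, f, L1, L2, k, C1, C2, R1, R2, RL):
          """Steady-state phasor solution of a series-series compensated
          two-coil inductive power transfer link; returns (P_load, I1, I2)."""
          w = 2 * np.pi * f
          M = k * np.sqrt(L1 * L2)                       # mutual inductance
          Z1 = R1 + 1j * (w * L1 - 1 / (w * C1))         # primary loop impedance
          Z2 = R2 + RL + 1j * (w * L2 - 1 / (w * C2))    # secondary loop impedance
          A = np.array([[Z1, 1j * w * M],
                        [1j * w * M, Z2]])
          I1, I2 = np.linalg.solve(A, np.array([Vs, 0.0]))
          return abs(I2) ** 2 * RL, I1, I2

      # Both loops tuned to resonate at the 85 kHz source frequency.
      f, L1, L2, k = 85e3, 120e-6, 120e-6, 0.3
      C1 = C2 = 1 / ((2 * np.pi * f) ** 2 * L1)          # series compensation caps
      P, I1, I2 = ipt_output_power(Vs=230.0, f=f, L1=L1, L2=L2, k=k,
                                   C1=C1, C2=C2, R1=0.1, R2=0.1, RL=5.0)
      print(f"load power ~ {P:.0f} W, primary current ~ {abs(I1):.1f} A")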

  9. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, Infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
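
    The block-matching step at the heart of such a pipeline can be written down very compactly, even though the paper's contribution is the streaming FPGA implementation rather than the algorithm itself. The sketch below is a brute-force sum-of-absolute-differences matcher over rectified grayscale images; the window size, disparity range, and synthetic test images are illustrative assumptions.

      import numpy as np

      def block_match_disparity(left, right, max_disp=32, block=7):
          """Brute-force SAD block matching on rectified grayscale images.
          Returns an integer disparity map referenced to the left image."""
          h, w = left.shape
          r = block // 2
          disp = np.zeros((h, w), dtype=np.int32)
          for y in range(r, h - r):
              for x in range(r + max_disp, w - r):
                  patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
                  best_cost, best_d = None, 0
                  for d in range(max_disp):
                      cand = right[y - r:y + r + 1,
                                   x - d - r:x - d + r + 1].astype(np.int32)
                      cost = np.abs(patch - cand).sum()   # sum of absolute differences
                      if best_cost is None or cost < best_cost:
                          best_cost, best_d = cost, d
                  disp[y, x] = best_d
          return disp

      # Synthetic check: the right image is the left image shifted by 4 pixels,
      # so the recovered disparity should be 4 almost everywhere.
      rng = np.random.default_rng(1)
      left = rng.integers(0, 255, (48, 96), dtype=np.uint8)
      right = np.roll(left, -4, axis=1)
      d = block_match_disparity(left, right, max_disp=16)
      print(int(np.median(d[8:-8, 24:-8])))               # ~4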

  10. Phase change energy storage for solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Chiaramonte, F. P.; Taylor, J. D.

    1992-01-01

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of the PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile which is connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.
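
    The energy-balance idea described above can be illustrated with a single-orbit calculation: heat banked in the PCM during sunlight must cover the heat-engine draw during eclipse, and the PCM mass follows from its latent heat. The sketch below is a rough back-of-the-envelope version of that balance, not the paper's transient simulation; the receiver power, orbit times, engine efficiency, and latent heat are illustrative assumptions.

      def pcm_energy_balance(q_receiver_kw, sunlight_min, eclipse_min,
                             load_kw, efficiency, latent_heat_kj_per_kg):
          """Single-orbit thermal energy balance for a solar-dynamic receiver."""
          q_engine = load_kw / efficiency                 # thermal draw behind the load
          banked_kj = (q_receiver_kw - q_engine) * sunlight_min * 60.0
          eclipse_kj = q_engine * eclipse_min * 60.0
          pcm_mass_kg = eclipse_kj / latent_heat_kj_per_kg
          return banked_kj, eclipse_kj, pcm_mass_kg

      # Illustrative LEO case: 94 min orbit (58 min sun / 36 min eclipse),
      # 75 kW electrical load, 33% engine efficiency, LiF-class PCM (~1040 kJ/kg).
      banked, needed, mass = pcm_energy_balance(400.0, 58.0, 36.0, 75.0, 0.33, 1040.0)
      print(f"banked {banked / 1e3:.0f} MJ, eclipse demand {needed / 1e3:.0f} MJ, "
            f"PCM mass >= {mass:.0f} kg")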

  11. Phase change energy storage for solar dynamic power systems

    NASA Astrophysics Data System (ADS)

    Chiaramonte, F. P.; Taylor, J. D.

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of the PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile which is connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.

  12. Feasibility Study and Cost Benefit Analysis of Thin-Client Computer System Implementation Onboard United States Navy Ships

    DTIC Science & Technology

    2007-06-01

    management issues he encountered ruled out the Expanion as a viable option for thin-client computing in the Navy. An improvement in thin-client...44 Requirements to capabilities (2004). Retrieved April 29, 2007, from Vision Presence Power: A Program Guide to the U.S. Navy – 2004...Retrieved April 29, 2007, from Vision Presence Power: A Program Guide to the U.S. Navy – 2004 Edition, p. 128. Web site: http://www.chinfo.navy.mil

  13. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  14. Galileo spacecraft power management and distribution system

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.; Smith, R. L.

    1990-01-01

    The Galileo PMAD (power management and distribution system) is described, and the design drivers that established the final as-built hardware are discussed. The spacecraft is powered by two general-purpose heat-source-radioisotope thermoelectric generators. Power bus regulation is provided by a shunt regulator. Galileo PMAD distributes a 570-W beginning of mission (BOM) power source to a user complement of some 137 load elements. Extensive use of pyrotechnics requires two pyro switching subassemblies. They initiate 148 squibs which operate the 47 pyro devices on the spacecraft. Detection and correction of faults in the Galileo PMAD is an autonomous feature dictated by requirements for long life and reliability in the absence of ground-based support. Volatile computer memories in the spacecraft command and data system and attitude control system require a continuous source of backup power during all anticipated power bus fault scenarios. Power for the Jupiter Probe is conditioned, isolated, and controlled by a Probe interface subassembly. Flight performance of the spacecraft and the PMAD has been successful to date, with no major anomalies.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
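
    One standard way to obtain the minimum singular value and its singular vectors cheaply is inverse power iteration on J^T J, reusing a single sparse LU factorization of the Jacobian. The SciPy sketch below follows that general idea as an illustration; it is not the specific algorithm of the paper, and the random sparse test matrix merely stands in for a power flow Jacobian.

      import numpy as np
      from scipy.sparse import identity, random as sprandom
      from scipy.sparse.linalg import splu

      def min_singular_triplet(J, iters=50, seed=0):
          """Estimate the minimum singular value of a sparse (square) Jacobian J,
          plus the corresponding right/left singular vectors, by inverse power
          iteration on J^T J. Only one sparse LU factorization is needed."""
          lu = splu(J.tocsc())
          rng = np.random.default_rng(seed)
          v = rng.normal(size=J.shape[0])
          v /= np.linalg.norm(v)
          for _ in range(iters):
              w = lu.solve(v, trans='T')        # w = J^-T v
              v_new = lu.solve(w)               # v_new = (J^T J)^-1 v
              v = v_new / np.linalg.norm(v_new)
          sigma_min = np.linalg.norm(J @ v)     # ||J v|| at the converged v
          u = (J @ v) / sigma_min               # corresponding left singular vector
          return sigma_min, v, u

      # Random sparse, well-conditioned test matrix standing in for a Jacobian.
      n = 200
      J = (sprandom(n, n, density=0.03, random_state=1) + 5 * identity(n)).tocsc()
      s_min, v, u = min_singular_triplet(J)
      print(s_min, np.linalg.svd(J.toarray(), compute_uv=False).min())   # close agreement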

  16. Variable Generation Power Forecasting as a Big Data Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haupt, Sue Ellen; Kosovic, Branko

    To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  17. Variable Generation Power Forecasting as a Big Data Problem

    DOE PAGES

    Haupt, Sue Ellen; Kosovic, Branko

    2016-10-10

    To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  18. Predicting Cloud Computing Technology Adoption by Organizations: An Empirical Integration of Technology Acceptance Model and Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Ekufu, ThankGod K.

    2012-01-01

    Organizations are finding it difficult in today's economy to implement the vast information technology infrastructure required to effectively conduct their business operations. Despite the fact that some of these organizations are leveraging the computational power and the cost-saving benefits of computing on the Internet cloud, others…

  19. Custom Sky-Image Mosaics from NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Collier, James; Craymer, Loring; Curkendall, David

    2005-01-01

    yourSkyG is the second generation of the software described in yourSky: Custom Sky-Image Mosaics via the Internet (NPO-30556), NASA Tech Briefs, Vol. 27, No. 6 (June 2003), page 45. Like its predecessor, yourSkyG supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. Whereas yourSky constructs mosaics on a local multiprocessor system, yourSkyG performs the computations on NASA's Information Power Grid (IPG), which is capable of performing much larger mosaicking tasks. (The IPG is a high-performance computation and data grid that integrates geographically distributed computers, databases, and instruments.) A user of yourSkyG can specify parameters describing a mosaic to be constructed. yourSkyG then constructs the mosaic on the IPG and makes it available for downloading by the user. The complexities of determining which input images are required to construct a mosaic, retrieving the required input images from remote sky-survey archives, uploading the images to the computers on the IPG, performing the computations remotely on the Grid, and downloading the resulting mosaic from the Grid are all transparent to the user.

  20. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array, and Graphics Processor cores constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memories and massive use of 32-bit floating point operations. As a result, DNNs do not run efficiently on hardware appropriate for low power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This proposed compression framework utilizes promising DNN compression techniques including pruning and weight quantization while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
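
    The two compression techniques named above are easy to sketch in isolation. The NumPy fragment below applies magnitude pruning followed by symmetric 8-bit uniform quantization to a toy weight matrix; it illustrates the generic techniques only, not the paper's compression pipeline, and the layer size, sparsity target, and bit width are illustrative assumptions.

      import numpy as np

      def prune_by_magnitude(w, sparsity=0.8):
          """Zero the smallest-magnitude weights so that `sparsity` fraction is zero."""
          thresh = np.quantile(np.abs(w), sparsity)
          return np.where(np.abs(w) >= thresh, w, 0.0)

      def quantize_uniform(w, bits=8):
          """Symmetric uniform quantization to signed integers plus a scale factor."""
          qmax = 2 ** (bits - 1) - 1
          scale = np.max(np.abs(w)) / qmax
          q = np.round(w / scale).astype(np.int8)
          return q, scale

      # Toy layer: prune 80% of the weights, then quantize the survivors to int8.
      rng = np.random.default_rng(0)
      w = rng.normal(scale=0.05, size=(128, 256))
      w_pruned = prune_by_magnitude(w, sparsity=0.8)
      q, scale = quantize_uniform(w_pruned, bits=8)
      w_deq = q.astype(np.float32) * scale               # dequantized weights
      print(f"sparsity: {np.mean(w_pruned == 0):.0%}, "
            f"reconstruction RMSE: {np.sqrt(np.mean((w_deq - w) ** 2)):.4f}")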

  1. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration), can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians, can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables and for developing analyses, which require a scalar equivalent representation of refractive power.
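
    For comparison with the averages discussed above, the sketch below numerically averages the standard paraxial sine-squared meridional power of a spherocylinder over all meridians, which recovers the familiar spherical equivalent S + C/2. This is only the conventional paraxial calculation that the abstract critiques, not the non-paraxial formulae derived in the paper; the example prescription is an illustrative assumption.

      import numpy as np

      def meridional_power(sphere, cylinder, axis_deg, theta_deg):
          """Paraxial power of a spherocylinder along meridian theta
          (the standard sin^2 relation)."""
          t = np.radians(np.asarray(theta_deg) - axis_deg)
          return sphere + cylinder * np.sin(t) ** 2

      def average_meridional_power(sphere, cylinder, axis_deg, n=3600):
          """Numerical mean of the paraxial power over all meridians; for a thin
          paraxial lens this reduces to the spherical equivalent S + C/2."""
          thetas = np.linspace(0.0, 180.0, n, endpoint=False)
          return float(np.mean(meridional_power(sphere, cylinder, axis_deg, thetas)))

      # Example prescription: -2.50 DS / -1.00 DC x 90.
      print(average_meridional_power(-2.50, -1.00, 90.0))   # -3.00 D, i.e. S + C/2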

  2. Computational Investigation of a Boundary-Layer Ingesting Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven

    2016-01-01

    The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  3. Computational Investigation of a Boundary-Layer Ingestion Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan

    2016-01-01

    This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  4. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  5. 46 CFR 110.25-1 - Plans and information required for new construction.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-line wiring diagram of the power system, supported by cable lists, panelboard summaries, and other... computed operating loads for each condition of operation. (c) Elementary and isometric or deck wiring plans...) Manual alarm system; and (11) Supervised patrol system. (d) Deck wiring or schematic plans of power...

  6. The Use of Computer Simulation Techniques in Educational Planning.

    ERIC Educational Resources Information Center

    Wilson, Charles Z.

    Computer simulations provide powerful models for establishing goals, guidelines, and constraints in educational planning. They are dynamic models that allow planners to examine logical descriptions of organizational behavior over time as well as permitting consideration of the large and complex systems required to provide realistic descriptions of…

  7. Initiating a Programmatic Assessment Report

    ERIC Educational Resources Information Center

    Berkaliev, Zaur; Devi, Shavila; Fasshauer, Gregory E.; Hickernell, Fred J.; Kartal, Ozgul; Li, Xiaofan; McCray, Patrick; Whitney, Stephanie; Zawojewski, Judith S.

    2014-01-01

    In the context of a department of applied mathematics, a program assessment was conducted to assess the departmental goal of enabling undergraduate students to recognize, appreciate, and apply the power of computational tools in solving mathematical problems that cannot be solved by hand, or would require extensive and tedious hand computation. A…

  8. Tableau Economique: Teaching Economics with a Tablet Computer

    ERIC Educational Resources Information Center

    Scott, Robert H., III

    2011-01-01

    The typical method of instruction in economics is chalk and talk. Economics courses often require writing equations and drawing graphs and charts, which are all best done in freehand. Unlike static PowerPoint presentations, tablet computers create dynamic nonlinear presentations. Wireless technology allows professors to write on their tablets and…

  9. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    DTIC Science & Technology

    2011-11-01

    accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful...successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of...needed. The NRC report discusses the requirements for effective use of such computing power. One needs “models, algorithms, software, hardware

  10. GRID INDEPENDENT FUEL CELL OPERATED SMART HOME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Mohammad S. Alam

    2003-12-07

    A fuel cell power plant, which utilizes a smart energy management and control (SEMaC) system to supply the power needs of a laboratory-based ''home'', has been purchased and installed. The ''home'' consists of two rooms, each approximately 250 sq. ft. Every appliance and power outlet is under the control of a host computer, running the SEMaC software package. It is possible to override the computer in the event that an appliance or power outage is required. Detailed analysis and simulation of the fuel cell operated smart home have been performed. Two journal papers have been accepted for publication and another journal paper is under review. Three theses have been completed and three additional theses are in progress.

  11. Requirements specification for nickel cadmium battery expert system

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The requirements for performance, design, test, and qualification of a computer program identified as NICBES, Nickel Cadmium Battery Expert System, are established. The specific spacecraft power system configuration selected was the Hubble Space Telescope (HST) Electrical Power System (EPS) Testbed. Power for the HST comes from a system of 13 Solar Panel Arrays (SPAs) linked to 6 Nickel Cadmium Batteries which are connected to 3 Busses. An expert system, NICBES, will be developed at Martin Marietta Aerospace to recognize a testbed anomaly, identify the malfunctioning component and recommend a course of action. Besides fault diagnosis, NICBES will be able to evaluate battery status, give advice on battery status and provide decision support for the operator. These requirements are detailed.

  12. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
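    The decomposition idea above can be illustrated with a toy consensus ADMM problem in which each agent holds a private scalar cost and only exchanges its local estimate with an aggregator. This is a minimal sketch of the general technique, not the paper's optimal power flow formulation; all data and parameter values are invented.

    ```python
    import numpy as np

    # Toy consensus ADMM: each "agent" i holds a private cost f_i(x) = 0.5*(x - a_i)^2
    # and only shares its local estimate with an aggregator, mimicking the limited
    # information exchange described above. The global minimizer is mean(a_i).
    rng = np.random.default_rng(0)
    a = rng.normal(size=8)          # private data per agent (illustrative)
    rho = 1.0                       # ADMM penalty parameter

    x = np.zeros_like(a)            # local primal variables
    z = 0.0                         # shared (consensus) variable
    u = np.zeros_like(a)            # scaled dual variables

    for _ in range(50):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local updates, done in parallel
        z = np.mean(x + u)                      # aggregation (the only global exchange)
        u = u + x - z                           # dual updates, local again

    print(z, a.mean())              # z converges to the centralized optimum
    ```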

  13. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special topics on grid generation strategies are also addressed to model control surface deflections and material mapping.

  14. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  15. Composite Cores

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Spang & Company's new configuration of converter transformer cores is a composite of gapped and ungapped cores assembled together in concentric relationship. The net effect of the composite design is to combine the protection from saturation offered by the gapped core with the lower magnetizing requirement of the ungapped core. The uncut core functions under normal operating conditions and the cut core takes over during abnormal operation to prevent power surges and their potentially destructive effect on transistors. Principal customers are aerospace and defense manufacturers. Cores also have applicability in commercial products where precise power regulation is required, as in the power supplies for large mainframe computers.

  16. Next-generation genotype imputation service and methods.

    PubMed

    Das, Sayantan; Forer, Lukas; Schönherr, Sebastian; Sidore, Carlo; Locke, Adam E; Kwong, Alan; Vrieze, Scott I; Chew, Emily Y; Levy, Shawn; McGue, Matt; Schlessinger, David; Stambolian, Dwight; Loh, Po-Ru; Iacono, William G; Swaroop, Anand; Scott, Laura J; Cucca, Francesco; Kronenberg, Florian; Boehnke, Michael; Abecasis, Gonçalo R; Fuchsberger, Christian

    2016-10-01

    Genotype imputation is a key component of genetic association studies, where it increases power, facilitates meta-analysis, and aids interpretation of signals. Genotype imputation is computationally demanding and, with current tools, typically requires access to a high-performance computing cluster and to a reference panel of sequenced genomes. Here we describe improvements to imputation machinery that reduce computational requirements by more than an order of magnitude with no loss of accuracy in comparison to standard imputation tools. We also describe a new web-based service for imputation that facilitates access to new reference panels and greatly improves user experience and productivity.

  17. Design, Specification, and Synthesis of Aircraft Electric Power Systems Control Logic

    NASA Astrophysics Data System (ADS)

    Xu, Huan

    Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area. This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller. The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real-time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a set placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.

  18. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
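    The clustering pipeline described above can be sketched as follows: embed the buses in Euclidean coordinates from a matrix of electrical distances via classical multidimensional scaling, then group them with K-means. The code below is a minimal sketch of that general pipeline, not the paper's implementation; the distance matrix is synthetic and the cluster count is arbitrary.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def classical_mds(D: np.ndarray, dim: int = 2) -> np.ndarray:
        """Embed points in `dim` Euclidean coordinates from a distance matrix D."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
        w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:dim]              # keep the largest `dim` modes
        w_top = np.clip(w[idx], 0.0, None)
        return V[:, idx] * np.sqrt(w_top)

    # Synthetic symmetric "electrical distance" matrix for 6 buses (illustrative).
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(6, 2))
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    coords = classical_mds(D, dim=2)                 # Euclidean bus coordinates
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
    print(labels)                                    # cluster (subsystem) per bus
    ```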

  19. Fuel cells for low power applications

    NASA Astrophysics Data System (ADS)

    Heinzel, A.; Hebling, C.; Müller, M.; Zedda, M.; Müller, C.

    Electronic devices show an ever-increasing power demand and thus require innovative concepts for power supply. For a wide range of power and energy capacity, membrane fuel cells are an attractive alternative to conventional batteries. The main advantages are the flexibility with respect to power and capacity achievable with different devices for energy conversion and energy storage, the long lifetime and long service life, the good ecological balance, and very low self-discharge. Therefore, the development of fuel cell systems for portable electronic devices is an attractive, although also a challenging, goal. The fuel for a membrane fuel cell might be hydrogen from a hydride storage system or methanol/water as a liquid alternative. The main differences between the two systems are the much higher power density of hydrogen fuel cells, the higher energy density per weight of the liquid fuel, safety aspects, and the infrastructure for fuel supply for hydride materials. For different applications, different system designs are required: high-power cells are required for portable computers, low-power methanol fuel cells for mobile phones in hybrid systems with batteries, and micro-fuel cells, e.g., for handheld PCs in the sub-watt range. All these technologies are currently under development. Performance data and results of simulations and experimental investigations will be presented.

  20. NETMARK

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Koga, Dennis (Technical Monitor)

    2002-01-01

    This presentation discusses NASA's proposed NETMARK knowledge management tool, which aims 'to control and interoperate with every block in a document, email, spreadsheet, power point, database, etc. across the lifecycle'. Topics covered include: system software requirements and hardware requirements, seamless information systems, computer architecture issues, and potential benefits to NETMARK users.

  1. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  2. Optical computing research

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    1987-10-01

    Work Accomplished: OPTICAL INTERCONNECTIONS - the powerful interconnect abilities of optical beams have led to much optimism about the possible roles for optics in solving interconnect problems at various levels of computer architecture. Examined were the power requirements of optical interconnects at the gate-to-gate and chip-to-chip levels. OPTICAL NEURAL NETWORKS - basic studies of the convergence properties of the Hopfield model, based on a mathematical approach (graph theory). OPTICS AND ARTIFICIAL INTELLIGENCE - a review of the field of optical processing and artificial intelligence, with the aim of finding areas that might be particularly attractive for future investigation(s).

  3. The Power of the Test for Treatment Effects in Three-Level Block Randomized Designs

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2008-01-01

    Experiments that involve nested structures may assign treatment conditions either to subgroups (such as classrooms) or individuals within subgroups (such as students). The design of such experiments requires knowledge of the intraclass correlation structure to compute the sample sizes necessary to achieve adequate power to detect the treatment…

  4. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time/resource consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with nine to ten most significant non-zero bits. At 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
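    The table-lookup trick above (approximating the inverse square-root cube of r-squared, i.e. 1/r^3, from a table addressed by the leading bits of the operand) can be emulated in software to see the accuracy trade-off. The sketch below illustrates the general idea, not the 16-bit core's fixed-point arithmetic; the table size and normalization scheme are assumptions.

    ```python
    TABLE_BITS = 10                      # roughly ten address bits, as described above
    N = 1 << TABLE_BITS
    # Table of m**-1.5 for m sampled in [1, 4); using each bin's midpoint
    # reduces the worst-case quantization error.
    TABLE = [(1.0 + (i + 0.5) * 3.0 / N) ** -1.5 for i in range(N)]

    def inv_r_cubed(r2: float) -> float:
        """Approximate r**-3 = (r*r)**-1.5 via a look-up table indexed by the
        leading bits of r2 normalized into [1, 4)."""
        e = 0
        m = r2
        while m >= 4.0:                  # normalize: r2 = m * 4**e, with m in [1, 4)
            m *= 0.25
            e += 1
        while m < 1.0:
            m *= 4.0
            e -= 1
        idx = int((m - 1.0) * N / 3.0)
        return TABLE[idx] * 2.0 ** (-3 * e)   # exact power-of-two rescaling

    # Compare against the exact value for a few separations r.
    for r in (0.03, 0.7, 5.0, 123.0):
        approx = inv_r_cubed(r * r)
        exact = r ** -3
        print(f"r={r:8.3f}  rel. error = {abs(approx - exact) / exact:.2e}")
    ```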

  5. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  6. Power System Information Delivering System Based on Distributed Object

    NASA Astrophysics Data System (ADS)

    Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji

    In recent years, there has been remarkable progress in computer performance and in computer network and distributed information processing technology. Moreover, deregulation of the electric power industry is beginning and will spread in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. In response to these trends, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability needed to adapt to those trends. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using the Internet and distributed object technology. This system is a new type of SCADA system that monitors failures of the power transmission and distribution systems in a way that integrates geographic information.

  7. A spacecraft integrated power/attitude control system

    NASA Technical Reports Server (NTRS)

    Keckler, C. R.; Jacobs, K. L.

    1974-01-01

    A study to determine the viability and application of a system capable of performing the dual function of power storage/generation and attitude control has been conducted. Results from the study indicate that an integrated power/attitude control system (IPACS) can satisfy future mission requirements while providing significant savings in weight, volume, and cost over conventional systems. A failure-mode configuration of an IPACS was applied to a shuttle-launched RAM free-flyer and simulated using make-do hardware linked to a hybrid computer. Data from the simulation runs indicate that control interactions resulting from heavy power demands have minimal effect on system control effectiveness. The system was shown to be capable of meeting the stringent pointing requirements of 1 arc-second while operating under the influence of an orbital disturbance environment and during periods of momentum variations imposed by energy transfer requirements.
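    IPACS concepts of this kind typically store energy in spinning wheels, so the same rotor carries both kinetic energy (E = ½Iω²) and angular momentum (H = Iω), and drawing power changes the momentum that the attitude control loop must manage. The wheel parameters below are invented for illustration and are not from the study.

    ```python
    import math

    # Illustrative flywheel parameters (not from the study).
    I = 0.8                  # rotor moment of inertia, kg*m^2
    rpm_hi, rpm_lo = 40_000.0, 20_000.0

    def omega(rpm: float) -> float:
        """Convert wheel speed in rpm to rad/s."""
        return rpm * 2.0 * math.pi / 60.0

    # Energy released when spinning down from rpm_hi to rpm_lo, and the
    # corresponding change in angular momentum seen by the attitude controller.
    dE = 0.5 * I * (omega(rpm_hi) ** 2 - omega(rpm_lo) ** 2)      # joules
    dH = I * (omega(rpm_hi) - omega(rpm_lo))                      # N*m*s
    print(f"energy delivered ~ {dE/3.6e6:.2f} kWh, momentum change ~ {dH:.0f} N*m*s")
    ```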

  8. Parallel Calculations in LS-DYNA

    NASA Astrophysics Data System (ADS)

    Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey

    2017-11-01

    Structural mechanics today exhibits a trend toward numerical solutions of increasingly extensive and detailed problems, which requires that the capacity of computing systems be enhanced. Such enhancement can be achieved by different means. For example, when a computing system is a single workstation, its components (CPU, memory, etc.) can be replaced or extended. In essence, such modification eventually entails replacement of the entire workstation, since replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput, etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, tools originally designed to render high-performance graphics can be applied to problems not immediately related to graphics (CUDA, OpenCL, shaders, etc.). However, not all software suites utilize video card capacity. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensive growth: a quite powerful system can be obtained by combining nodes that are individually not particularly powerful. Moreover, separate nodes may have different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. A 2-node cluster proved sufficient to establish a range of dependencies.

  9. Description of CASCOMP Comprehensive Airship Sizing and Performance Computer Program, Volume 2

    NASA Technical Reports Server (NTRS)

    Davis, J.

    1975-01-01

    The computer program CASCOMP, which may be used in comparative design studies of lighter than air vehicles by rapidly providing airship size and mission performance data, was prepared and documented. The program can be used to define design requirements such as weight breakdown, required propulsive power, and physical dimensions of airships which are designed to meet specified mission requirements. The program is also useful in sensitivity studies involving both design trade-offs and performance trade-offs. The input to the program primarily consists of a series of single point values such as hull overall fineness ratio, number of engines, airship hull and empennage drag coefficients, description of the mission profile, and weights of fixed equipment, fixed useful load and payload. In order to minimize computation time, the program makes ample use of optional computation paths.

  10. The potential benefits of photonics in the computing platform

    NASA Astrophysics Data System (ADS)

    Bautista, Jerry

    2005-03-01

    The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets is driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with a greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goltz, G.; Weiner, H.

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U. S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document provides all the information necessary to access the DSPA programs, to input required data and to generate appropriate Design Synthesis or Performance Analysis Output.

  12. Mars rover local navigation and hazard avoidance

    NASA Technical Reports Server (NTRS)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-01-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  13. Mars Rover Local Navigation And Hazard Avoidance

    NASA Astrophysics Data System (ADS)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-03-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between Earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  14. Experimental Investigation of 60 GHz Transmission Characteristics Between Computers on a Conference Table for WPAN Applications

    NASA Technical Reports Server (NTRS)

    Ponchak, George E.; Amadjikpe, Arnaud L.; Choudhury, Debabani; Papapolymerou, John

    2011-01-01

    In this paper, the first measurements of the received radiated power between antennas located on a conference table to simulate the environment of antennas embedded in laptop computers for 60 GHz Wireless Personal Area Network (WPAN) applications are presented. A high gain horn antenna and a medium gain microstrip patch antenna for two linear polarizations are compared. It is shown that for a typical conference table arrangement with five computers, books, pens, and coffee cups, the antennas should be placed a minimum of 5 cm above the table, but that a height of greater than 20 cm may be required to maximize the received power in all cases.

  15. EPA/ECLSS consumables analyses for the Spacelab 1 flight

    NASA Technical Reports Server (NTRS)

    Steines, G. J.; Pipher, M. D.

    1976-01-01

    The results of electrical power system (EPS) and environmental control/life support system (ECLSS) consumables analyses of the Spacelab 1 mission are presented. The analyses were performed to assess the capability of the orbiter systems to support the proposed mission and to establish the various non-propulsive consumables requirements. The EPS analysis was performed using the shuttle electrical power system (SEPS) analysis computer program. The ECLSS analysis was performed using the shuttle environmental consumables requirements evaluation tool (SECRET) program.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darrow, Ken; Hedman, Bruce

    Data centers represent a rapidly growing and very energy intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was the electric power equivalent to seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and the tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. A lot of work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure. This report is organized into the following sections: (1) Data Center Market Segmentation--the description of the overall size of the market, the size and types of facilities involved, and the geographic distribution. (2) Data Center Energy Use Trends--a discussion of energy use and expected energy growth and the typical energy consumption and uses in data centers. (3) CHP Applicability--Potential configurations, CHP case studies, applicable equipment, heat recovery opportunities (cooling), cost and performance benchmarks, and power reliability benefits. (4) CHP Drivers and Hurdles--evaluation of user benefits, social benefits, market structural issues and attitudes toward CHP, and regulatory hurdles. (5) CHP Paths to Market--Discussion of technical needs, education, strategic partnerships needed to promote CHP in the IT community.

  17. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of a computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.

  18. Radiation Tolerant, FPGA-Based SmallSat Computer System

    NASA Technical Reports Server (NTRS)

    LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew

    2015-01-01

    The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.

  19. Survey and Analysis of Environmental Requirements for Shipboard Electronic Equipment Applications. Appendix A. Volume 2.

    DTIC Science & Technology

    1991-07-31

    INTELLIGENT SCSI DMV-719 MAS MIL CONTROLLER DY-4 SYSTEMS BYTE-WIDE MEMORY CARD DMV-536 MEM MIL DY-4 SYSTEMS POWER SUPPLY UNIT DMV-870 PWR MIL ... FORCE COMPUTERS PROCESSOR CPU-386 SERIES SBC COM FORCE COMPUTERS ADVANCED SYSTEM CONTROL ASCU-1/2 SBC COM UNITI FORCE COMPUTERS GRAPHICS CONTROLLER AGC... RECORD VENDOR: JANZ COMPUTER AG DIVISION: VENDOR ADDRESS: Im Doerener Feld 3 D-4790 Paderborn Germany MARKETING: Johannes Kunz TECHNICAL: Arnulf

  20. Spacecraft solid state power distribution switch

    NASA Technical Reports Server (NTRS)

    Praver, G. A.; Theisinger, P. C.

    1986-01-01

    As a spacecraft performs its mission, various loads are connected to the spacecraft power bus in response to commands from an on board computer, a function called power distribution. For the Mariner Mark II set of planetary missions, the power bus is 30 volts dc and when loads are connected or disconnected, both the bus and power return side must be switched. In addition, the power distribution function must be immune to single point failures and, when power is first applied, all switches must be in a known state. Traditionally, these requirements have been met by electromechanical latching relays. This paper describes a solid state switch which not only satisfies the requirements but incorporates several additional features including soft turn on, programmable current trip point with noise immunity, instantaneous current limiting, and direct telemetry of load currents and switch status. A breadboard of the design has been constructed and some initial test results are included.

  1. Variable gravity research facility

    NASA Technical Reports Server (NTRS)

    Allan, Sean; Ancheta, Stan; Beine, Donna; Cink, Brian; Eagon, Mark; Eckstein, Brett; Luhman, Dan; Mccowan, Daniel; Nations, James; Nordtvedt, Todd

    1988-01-01

    Spin and despin requirements; sequence of activities required to assemble the Variable Gravity Research Facility (VGRF); power systems technology; life support; thermal control systems; emergencies; communication systems; space station applications; experimental activities; computer modeling and simulation of tether vibration; cost analysis; configuration of the crew compartments; and tether lengths and rotation speeds are discussed.

  2. Intraclass Correlation Values for Planning Group-Randomized Trials in Education

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, E. C.

    2007-01-01

    Experiments that assign intact groups to treatment conditions are increasingly common in social research. In educational research, the groups assigned are often schools. The design of group-randomized experiments requires knowledge of the intraclass correlation structure to compute statistical power and sample sizes required to achieve adequate…

  3. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
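    The underlying idea of tying the subinterval to the fastest identified mode can be sketched with a linearized second-order swing model: find the eigenvalues, take the largest natural frequency, and resolve that mode with a fixed number of points per cycle. The coefficients and the points-per-cycle rule below are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    # Linearized second-order swing dynamics  M*d2(delta) + D*d(delta) + K*delta = 0
    # written in state-space form; M, D, K are illustrative per-unit coefficients.
    M, D, K = 0.1, 0.02, 1.5
    A = np.array([[0.0, 1.0],
                  [-K / M, -D / M]])

    eigvals = np.linalg.eigvals(A)
    # Fastest dynamics: largest natural frequency among the modes (rad/s).
    omega_max = max(abs(lam) for lam in eigvals)
    points_per_cycle = 20            # assumed resolution rule, not from the paper
    dt_sub = 2.0 * np.pi / omega_max / points_per_cycle

    print(f"fastest mode: {omega_max / (2*np.pi):.2f} Hz -> subinterval ~ {dt_sub*1e3:.1f} ms")
    ```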

  4. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  5. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low level mechanisms to enable a large scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements for telemedicine.

  6. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
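    A quick back-of-the-envelope calculation shows why the yearlong, 1-second-resolution requirement is so heavy: it implies roughly 31.5 million sequential power flow solutions. The per-solve time below is an assumed figure, chosen only to show that the total lands inside the 10 to 120 hour range quoted above.

    ```python
    # Number of sequential power flows in a yearlong QSTS run at 1-second resolution.
    steps_per_year = 365 * 24 * 3600          # ~31.5 million time steps
    ms_per_power_flow = 5.0                   # assumed solve time for one unbalanced feeder

    total_hours = steps_per_year * ms_per_power_flow / 1000.0 / 3600.0
    print(f"{steps_per_year:,} power flows -> ~{total_hours:.0f} hours of computation")
    # 31,536,000 power flows -> roughly 44 hours at 5 ms each, inside the quoted range.
    ```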

  7. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, which is referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
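    The split between local multiplications and a consensus-based global step can be illustrated with a toy power iteration in which the only network-wide quantity, the squared norm of the iterate, is obtained by repeated neighbour averaging over a ring. This is a simplified sketch of the general idea, not the GPM of the paper; the matrix, the consensus weights, and the iteration counts are all invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 6

    # Symmetric positive semidefinite "covariance" matrix; node i is assumed to hold row R[i].
    G = rng.normal(size=(n, n))
    R = G @ G.T / n

    # Doubly stochastic consensus weights for a ring network: each node averages
    # uniformly over itself and its two neighbours.
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0

    def average_consensus(values, rounds=60):
        """Repeated neighbour averaging; every entry converges to the network mean."""
        v = np.array(values, dtype=float)
        for _ in range(rounds):
            v = W @ v
        return v

    x = rng.normal(size=n)
    x /= np.linalg.norm(x)

    for _ in range(100):                     # power iterations
        y = R @ x                            # locally: y[i] = R[i] . x
        mean_sq = average_consensus(y * y)   # global step done via consensus
        x = y / np.sqrt(n * mean_sq.mean())  # ||y|| = sqrt(n * average(y_i^2))

    lam_est = x @ (R @ x)                    # Rayleigh quotient with ||x|| ~ 1
    print(lam_est, np.linalg.eigvalsh(R).max())
    ```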

  8. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

    2009-12-01

    In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

  9. Formulation of advanced consumables management models: Executive summary. [modeling spacecraft environmental control, life support, and electric power supply systems

    NASA Technical Reports Server (NTRS)

    Daly, J. K.; Torian, J. G.

    1979-01-01

    An overview of studies conducted to establish the requirements for advanced subsystem analytical tools is presented. Modifications are defined for updating current computer programs used to analyze environmental control, life support, and electric power supply systems so that consumables for future advanced spacecraft may be managed.

  10. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
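    The basic way such a sensitivity is used can be shown with a one-line first-order estimate: M(p0 + Δp) ≈ M(p0) + (dM/dp)Δp. The margin, sensitivity, and parameter change below are hypothetical numbers chosen only to illustrate the calculation, not results from the thesis.

    ```python
    # First-order use of a margin sensitivity: estimate how an interarea transfer
    # margin changes when a parameter (e.g., VAR support at a bus) changes.
    margin_0 = 850.0        # MW, hypothetical security margin at the base case
    dmargin_dp = 3.2        # MW per Mvar, hypothetical sensitivity from one computation
    delta_p = 50.0          # Mvar of additional reactive support being considered

    margin_est = margin_0 + dmargin_dp * delta_p
    print(f"Estimated margin after the change: {margin_est:.0f} MW")   # ~1010 MW
    ```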

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potts, C.; Faber, M.; Gunderson, G.

    The as-built lattice of the Rapid Cycling Synchrotron (RCS) had two sets of correction sextupoles and two sets of quadrupoles energized by dc power supplies to control the tune and the tune tilt. With this method of powering these magnets, adjustment of tune conditions during the accelerating cycle as needed was not possible. A set of dynamically programmable power supplies has been built and operated to provide the required chromaticity adjustment. The short accelerating time (16.7 ms) of the RCS and the inductance of the magnets dictated large transistor amplifier power supplies. The required time resolution and waveform flexibility indicated the desirability of computer control. Both the amplifiers and controls are described, along with resulting improvements in the beam performance. 5 refs.

  12. Preliminary design of an auxiliary power unit for the space shuttle. Volume 4: Selected system supporting studies

    NASA Technical Reports Server (NTRS)

    Hamilton, M. L.; Burriss, W. L.

    1972-01-01

    Selected system supporting analyses in conjunction with the preliminary design of an auxiliary power unit (APU) for the space shuttle are presented. Both steady state and transient auxiliary power unit performance, based on digital computer programs, were examined. The selected APU provides up to 400 horsepower out of the gearbox, weighs 227 pounds, and requires 2 pounds per shaft horsepower hour of propellants.

  13. Start-up capabilities of photovoltaic module for the International Space Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hajela, G.; Hague, L.

    1997-12-31

    The International Space Station (ISS) uses four photovoltaic modules (PVMs) to supply electric power for the US On-Orbit Segment (USOS). The ISS is assembled on orbit over a period of about 5 years and over 40 stages. PVMs are launched and integrated with the ISS at different times during the ISS assembly. During early stages, the electric power is provided by the integrated truss segment (ITS) P6; subsequently, ITS P4, S4, and S6 are launched. PVMs are launched into space in the National Space Transportation System (NSTS) cargo bay. Each PVM consists of two independent power channels. The NSTS docks with the ISS, and the PVM is removed from the cargo bay and installed on the ISS. At this stage the PVM is in a stowed configuration and its batteries are in a fully discharged state. The start-up consists of initialization and checkout of all hardware, deployment of the solar array wing (SAW) and photovoltaic radiator (PVR), thermal conditioning of the batteries, and battery charging, not necessarily in the same order for all PVMs. PVMs are designed to be capable of on-orbit start-up, within a specified time period, when external power is applied to a specified electrical interface. This paper describes the essential steps required for PVM start-up and how these operations are performed for various PVMs. The integrated operations scenarios (IOS) prepared by the NASA Johnson Space Center detail specific procedures and timelines for start-up of each PVM. The paper describes how dormant batteries are brought to their normal operating temperature range and then charged to 100% state of charge (SOC). The total time required to complete start-up is computed and compared to the IOS timelines. External power required during start-up is computed and compared to the requirements and/or available power on the ISS. Also described is how these start-up procedures can be adapted for restart of PVMs when required.
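
    As a rough illustration of the kind of timeline arithmetic involved, the sketch below composes a start-up estimate from a thermal-conditioning period plus a battery charge time; every number is an assumed placeholder, not an ISS or PVM value.

        # Rough timeline arithmetic for a battery start-up sequence (all numbers are
        # assumptions for illustration, not ISS/PVM values).

        def charge_time_hours(capacity_ah, delta_soc, charge_current_a, efficiency=0.9):
            # Time to move the battery from its current state of charge to the target.
            return capacity_ah * delta_soc / (charge_current_a * efficiency)

        warmup_h = 2.0                                    # assumed thermal-conditioning time
        charging_h = charge_time_hours(81.0, 1.0, 15.0)   # assumed 81 Ah, empty to full, 15 A
        total_h = warmup_h + charging_h
        print(f"estimated start-up time: {total_h:.1f} h")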

  14. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, George S.; Brown, William Michael

    2007-09-01

    Techniques for high-throughput determinations of interactomes, together with high-resolution protein colocalization maps within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  15. Optimal reactive planning with security constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.

    1995-12-31

    The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.

  16. TOPEX electrical power system

    NASA Technical Reports Server (NTRS)

    Chetty, P. R. K.; Roufberg, Lew; Costogue, Ernest

    1991-01-01

    The TOPEX mission requirements which impact the power requirements and analyses are presented. A description of the electrical power system (EPS), including energy management and battery charging methods that were conceived and developed to meet the identified satellite requirements, is included. Analysis of the TOPEX EPS confirms that all of its electrical performance and reliability requirements have been met. The TOPEX EPS employs the flight-proven modular power system (MPS) which is part of the Multimission Modular Spacecraft and provides high reliability, abbreviated development effort and schedule, and low cost. An energy balance equation, unique to TOPEX, has been derived to confirm that the batteries will be completely recharged following each eclipse, under worst-case conditions. TOPEX uses three NASA Standard 50AH Ni-Cd batteries, each with 22 cells in series. The MPS contains battery charge control and protection based on measurements of battery currents, voltages, temperatures, and computed depth-of-discharge. In case of impending battery depletion, the MPS automatically implements load shedding.
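
    A generic orbit-average energy balance check of the kind alluded to above might look like the following Python sketch; the equation and numbers are placeholders, not the TOPEX-specific energy balance equation.

        # A generic orbit energy-balance check: charge returned in sunlight must at
        # least cover the eclipse discharge divided by the round-trip efficiency.
        # Numbers are placeholders, not TOPEX values.

        def energy_balance_ok(discharge_wh, sunlight_charge_power_w, sunlight_h,
                              roundtrip_eff=0.85):
            charge_available_wh = sunlight_charge_power_w * sunlight_h
            required_wh = discharge_wh / roundtrip_eff
            return charge_available_wh >= required_wh

        print(energy_balance_ok(discharge_wh=400.0,
                                sunlight_charge_power_w=700.0,
                                sunlight_h=1.0))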

  17. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
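
    The chain-rule propagation that ADIFOR automates for FORTRAN can be illustrated with a tiny forward-mode automatic differentiation sketch in Python using dual numbers; this only illustrates the AD idea and is not ADIFOR itself.

        # Minimal forward-mode automatic differentiation with dual numbers, to show
        # the chain-rule propagation that tools like ADIFOR automate for Fortran.

        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def f(x):
            # Example dependent variable: f(x) = 3*x*x + 2*x
            return 3 * x * x + 2 * x

        x = Dual(1.5, 1.0)       # seed the derivative of the independent variable
        y = f(x)
        print(y.val, y.der)      # exact derivative 6*x + 2 = 11.0, no divided differences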

  18. In silico designing of power conversion efficient organic lead dyes for solar cells using today's innovative approaches to assure renewable energy for future

    NASA Astrophysics Data System (ADS)

    Kar, Supratik; Roy, Juganta K.; Leszczynski, Jerzy

    2017-06-01

    Advances in solar cell technology require the design of new organic dye sensitizers for dye-sensitized solar cells with high power conversion efficiency to circumvent the disadvantages of silicon-based solar cells. In silico studies, including quantitative structure-property relationship analysis combined with quantum chemical analysis, were employed to understand the primary electron transfer mechanism and photo-physical properties of 273 arylamine organic dyes from 11 diverse chemical families specific to the iodine electrolyte. The direct quantitative structure-property relationship models enable identification of the essential electronic and structural attributes necessary for quantifying the molecular prerequisites of the 11 classes of arylamine organic dyes responsible for high power conversion efficiency of dye-sensitized solar cells. Tetrahydroquinoline, N,N'-dialkylaniline, and indoline are among the least explored classes of arylamine organic dyes for dye-sensitized solar cells. Therefore, the identified properties from the corresponding quantitative structure-property relationship models of these classes were employed in the design of "lead dyes". Subsequently, a series of electrochemical and photo-physical parameters was computed for the designed dyes to check the variables required for electron flow in dye-sensitized solar cells. The combined computational techniques yielded seven promising lead dyes for each of the three chemical classes considered. Significant increments (130, 183, and 46%) in predicted power conversion efficiency were observed relative to the existing dye with the highest experimental power conversion efficiency for tetrahydroquinoline, N,N'-dialkylaniline, and indoline, respectively, while maintaining the required electrochemical parameters.

  19. Conceptual studies on the integration of a nuclear reactor system to a manned rover for Mars missions

    NASA Technical Reports Server (NTRS)

    El-Genk, Mohamed S.; Morley, Nicholas J.

    1991-01-01

    Multiyear civilian manned missions to explore the surface of Mars are thought by NASA to be possible early in the next century. Expeditions to Mars, as well as permanent bases, are envisioned to require enhanced piloted vehicles to conduct science and exploration activities. Piloted rovers, with 30 kWe user net power (for drilling, sampling and sample analysis, onboard computer and computer instrumentation, vehicle thermal management, and astronaut life support systems) in addition to mobility, are being considered. The rover design for this study included a four-car, train-type vehicle complete with a hybrid solar photovoltaic/regenerative fuel cell auxiliary power system (APS). This system was designed to power the primary control vehicle. The APS supplies life support power for four astronauts and a limited degree of mobility, allowing the primary control vehicle to limp back to either a permanent base or an ascent vehicle. The results showed that the APS described above, with a mass of 667 kg, was sufficient to provide life support power and a top speed of five km/h for 6 hours per day. It was also seen that the factors that had the largest effect on the APS mass were the life support power, the number of astronauts, and the PV cell efficiency. The topics covered include: (1) power system options; (2) rover layout and design; (3) parametric analysis of total mass and power requirements for a manned Mars rover; (4) radiation shield design; and (5) energy conversion systems.

  20. Password Cracking Using Sony Playstations

    NASA Astrophysics Data System (ADS)

    Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet

    Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
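
    A generic way to spread such a brute-force search across processors is to give each worker a disjoint slice of the keyspace; the Python sketch below illustrates only the partitioning pattern (the hash, alphabet, and target are assumed) and is not the paper's PS3 framework.

        # Sketch of keyspace partitioning for distributed brute-force search: each
        # worker enumerates a disjoint slice of candidate passwords and checks them
        # against a target hash. Generic pattern only, not the PS3 framework itself.
        import hashlib
        from itertools import product, islice

        ALPHABET = "abcdefghijklmnopqrstuvwxyz"
        LENGTH = 4
        TARGET = hashlib.sha256(b"code").hexdigest()   # assumed target hash

        def candidates():
            for chars in product(ALPHABET, repeat=LENGTH):
                yield "".join(chars)

        def worker(worker_id, n_workers):
            # Round-robin slice of the keyspace assigned to this worker.
            for pw in islice(candidates(), worker_id, None, n_workers):
                if hashlib.sha256(pw.encode()).hexdigest() == TARGET:
                    return pw
            return None

        print(worker(0, 4) or worker(1, 4) or worker(2, 4) or worker(3, 4))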

  1. Subsystems component definitions summary program

    NASA Technical Reports Server (NTRS)

    Scott, A. Don; Thomas, Carolyn C.; Simonsen, Lisa C.; Hall, John B., Jr.

    1991-01-01

    A computer program, the Subsystems Component Definitions Summary (SUBCOMDEF), was developed to provide a quick and efficient means of summarizing large quantities of subsystems component data in terms of weight, volume, resupply, and power. The program was validated using Space Station Freedom Program Definition Requirements Document data for the internal and external thermal control subsystem. Once all component descriptions, unit weights and volumes, resupply, and power data are input, the user may obtain a summary report of user-specified portions of the subsystem or of the entire subsystem as a whole. Any combination or all of the parameters of wet and dry weight, wet and dry volume, resupply weight and volume, and power may be displayed. The user may vary the resupply period according to individual mission requirements, as well as the number of hours per day power consuming components operate. Uses of this program are not limited only to subsystem component summaries. Any applications that require quick, efficient, and accurate weight, volume, resupply, or power summaries would be well suited to take advantage of SUBCOMDEF's capabilities.

  2. Computerized power supply analysis: State equation generation and terminal models

    NASA Technical Reports Server (NTRS)

    Garrett, S. J.

    1978-01-01

    To aid engineers who design power supply systems, two analysis tools that can be used with the state equation analysis package were developed. These tools include integration routines that start with the description of a power supply in state equation form and yield analytical results. The first tool uses a computer program that works with the SUPER SCEPTRE circuit analysis program and prints the state equations for an electrical network. The state equations developed automatically by the computer program are used to develop an algorithm for reducing the number of state variables required to describe an electrical network. In this way a second tool is obtained that reduces the order of the network and yields a simpler terminal model.

  3. High performance TWT development for the microwave power module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whaley, D.R.; Armstrong, C.M.; Groshart, G.

    1996-12-31

    Northrop Grumman's ongoing development of microwave power modules (MPM) provides microwave power at various power levels, frequencies, and bandwidths for a variety of applications. Present-day requirements for the vacuum power booster traveling wave tubes of the microwave power module are becoming increasingly demanding, necessitating further enhancement of tube performance. The MPM development program at Northrop Grumman is designed specifically to meet this need by construction and test of a series of new tubes aimed at verifying computation and reaching high-efficiency design goals. Tubes under test incorporate several different helix designs, as well as varying electron gun and magnetic confinement configurations. Current efforts also include further development of state-of-the-art TWT modeling and computational methods at Northrop Grumman, incorporating new, more accurate models into existing design tools and developing new tools to be used in all aspects of traveling wave tube design. The current status of the Northrop Grumman MPM TWT development program will be presented.

  4. The CRAF/Cassini power subsystem - Preliminary design update

    NASA Technical Reports Server (NTRS)

    Atkins, Kenneth L.; Brisendine, Philip; Clark, Karla; Klein, John; Smith, Richard

    1991-01-01

    A chronology is provided of the rationale leading from the early Mariner spacecraft to the CRAF/Cassini Mariner Mark II power subsystem architecture. The design pathway began with a hybrid including a solar photovoltaic array, a radioisotope thermoelectric generator (RTG), and a battery supplying a power profile with a peak loading of about 300 W. The initial concept was to distribute power through a new solid-state, programmable switch controlled by an embedded microprocessor. As the overall mission, science, and project design matured, the power requirements increased. The design evolved from the hybrid to two RTGs plus batteries to meet peak loadings of near 500 W in 1989. Later that year, circumstances led to abandonment of the distributed computer concept and a return to centralized control. Finally, as power requirements continued to grow, a third RTG was added to the design and the battery removed, with a return to the discharge controller for transients during fault recovery procedures.

  5. Economic Comparison of Processes Using Spreadsheet Programs

    NASA Technical Reports Server (NTRS)

    Ferrall, J. F.; Pappano, A. W.; Jennings, C. N.

    1986-01-01

    Inexpensive approach aids plant-design decisions. Commercially available electronic spreadsheet programs aid economic comparison of different processes for producing particular end products. Facilitates plant-design decisions without requiring large expenditures for powerful mainframe computers.

  6. Coarse Grid CFD for underresolved simulation

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.

    2010-11-01

    CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly large computational resources, so this brute-force approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, requiring additional volumetric source terms modelling viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf

  7. Fusion Energy Sciences Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Fusion Energy Sciences, January 27-29, 2016, Gaithersburg, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Choong-Seock; Greenwald, Martin; Riley, Katherine

    The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research — provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27–29, 2016, with participation from a broad range of fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.

  8. Managing Power Heterogeneity

    NASA Astrophysics Data System (ADS)

    Pruhs, Kirk

    A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous, are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core was required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.

  9. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  10. Systems Biology-Based Identification of Mycobacterium tuberculosis Persistence Genes in Mouse Lungs

    PubMed Central

    Dutta, Noton K.; Bandyopadhyay, Nirmalya; Veeramani, Balaji; Lamichhane, Gyanu; Karakousis, Petros C.; Bader, Joel S.

    2014-01-01

    Identifying Mycobacterium tuberculosis persistence genes is important for developing novel drugs to shorten the duration of tuberculosis (TB) treatment. We developed computational algorithms that predict M. tuberculosis genes required for long-term survival in mouse lungs. As the input, we used high-throughput M. tuberculosis mutant library screen data, mycobacterial global transcriptional profiles in mice and macrophages, and functional interaction networks. We selected 57 unique, genetically defined mutants (18 previously tested and 39 untested) to assess the predictive power of this approach in the murine model of TB infection. We observed a 6-fold enrichment in the predicted set of M. tuberculosis genes required for persistence in mouse lungs relative to randomly selected mutant pools. Our results also allowed us to reclassify several genes as required for M. tuberculosis persistence in vivo. Finally, the new results implicated additional high-priority candidate genes for testing. Experimental validation of computational predictions demonstrates the power of this systems biology approach for elucidating M. tuberculosis persistence genes. PMID:24549847

  11. Load-Following Power Timeline Analyses for the International Space Station

    NASA Technical Reports Server (NTRS)

    Fincannon, James; Delleur, Ann; Green, Robert; Hojnicki, Jeffrey

    1996-01-01

    Spacecraft are typically complex assemblies of interconnected systems and components that have highly time-varying thermal, communications, and power requirements. It is essential that systems designers be able to assess the capability of the spacecraft to meet these requirements, which should represent a realistic projection of demand for these resources once the vehicle is on-orbit. To accomplish the assessment from the power standpoint, a computer code called ECAPS has been developed at NASA Lewis Research Center that performs a load-driven analysis of a spacecraft power system given time-varying distributed loading and other mission data. This program is uniquely capable of synthesizing all of the changing spacecraft conditions into a single, seamless analysis for a complete mission. This paper presents example power load timelines with which numerous data are integrated to provide a realistic assessment of the load-following capabilities of the power system. Results of analyses show how well the power system can meet the time-varying power resource demand.
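
    The core of a load-driven timeline analysis can be illustrated with a toy Python sketch that, at each time step, sums the distributed loads and compares them with the available power; this is not ECAPS, and all numbers are invented.

        # Toy load-driven timeline check in the spirit of a load-following analysis:
        # at each time step, compare the sum of distributed loads against available
        # power and report the margin. Numbers are illustrative, not ISS data.

        timeline = [  # (time [min], available power [W], component loads [W])
            (0,  3000, [900, 650, 400]),
            (30, 2100, [900, 650, 400, 300]),   # eclipse: less power, extra heater load
            (60, 3000, [900, 650, 400]),
        ]

        for t, available, loads in timeline:
            demand = sum(loads)
            margin = available - demand
            status = "OK" if margin >= 0 else "SHORTFALL"
            print(f"t={t:3d} min  demand={demand:5d} W  margin={margin:5d} W  {status}")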

  12. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.

    2011-01-01

    A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.

  13. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop workstation; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods as well as a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
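
    The master/worker workflow idea (many invocations of a serial external program run in parallel) can be sketched with the Python standard library as below; this is not mpiWrapper itself, which uses MPI and C++, and the invoked command is a placeholder.

        # Generic master/worker pattern for running many invocations of a serial
        # external tool in parallel. This is not mpiWrapper (which uses MPI in C++);
        # it only illustrates the workflow idea with the standard library.
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        def run_subtask(args):
            # Launch one invocation of a non-parallel program; 'echo' stands in for
            # a real bioinformatics tool (hypothetical placeholder).
            result = subprocess.run(["echo"] + list(args), capture_output=True, text=True)
            return result.stdout.strip()

        subtasks = [("sample", str(i)) for i in range(8)]  # one argument list per input

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=4) as pool:
                for output in pool.map(run_subtask, subtasks):
                    print(output)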

  14. Constructing probabilistic scenarios for wide-area solar power generation

    DOE PAGES

    Woodruff, David L.; Deride, Julio; Staid, Andrea; ...

    2017-12-22

    Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.
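
    The quantile step can be illustrated with a short Python sketch that builds scenarios from empirical quantiles of synthetic historical forecast errors; empirical quantiles stand in here for the paper's epi-spline fits, and all data and probabilities are assumed.

        # Sketch of quantile-based scenario construction from historical forecast
        # errors. The paper fits non-parametric error distributions with epi-splines;
        # here empirical quantiles stand in for that step. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        hist_errors = rng.normal(0.0, 20.0, size=(500, 24))   # MW error, 500 days x 24 h
        day_ahead_forecast = 150.0 + 100.0 * np.sin(np.linspace(0, np.pi, 24))  # MW

        quantiles = [0.1, 0.5, 0.9]          # one scenario per quantile
        probs = [0.25, 0.5, 0.25]            # assumed scenario probabilities

        scenarios = []
        for q in quantiles:
            err_q = np.quantile(hist_errors, q, axis=0)             # hour-by-hour error
            scen = np.clip(day_ahead_forecast + err_q, 0.0, None)   # no negative power
            scenarios.append(scen)

        for q, p, s in zip(quantiles, probs, scenarios):
            print(f"q={q:.1f} prob={p:.2f} peak={s.max():.1f} MW")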

  15. Constructing probabilistic scenarios for wide-area solar power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodruff, David L.; Deride, Julio; Staid, Andrea

    Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.

  16. A computer-based specification methodology

    NASA Technical Reports Server (NTRS)

    Munck, Robert G.

    1986-01-01

    Standard practices for creating and using system specifications are inadequate for large, advanced-technology systems. A need exists to break away from paper documents in favor of documents that are stored in computers and which are read and otherwise used with the help of computers. An SADT-based system, running on the proposed Space Station data management network, could be a powerful tool for doing much of the required technical work of the Station, including creating and operating the network itself.

  17. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system also allows reporting on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  18. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Marini, J.

    1979-01-01

    The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution, which is most powerful in the sense that it maximizes the probability of a correct decision, is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution, the test reduces to choosing the solution which provides the smaller value of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
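
    The decision rule for the optimally reduced case (pick the conjugate solution with the smaller least-squares loss) can be sketched as follows; the Doppler residuals and weights are synthetic placeholders.

        # Sketch of the ambiguity-resolution rule for the case where systematic errors
        # are included in the reduction: choose the conjugate solution whose residuals
        # give the smaller least-squares loss. Data here are synthetic placeholders.
        import numpy as np

        def loss(observed_doppler, predicted_doppler, weights):
            r = observed_doppler - predicted_doppler
            return float(r @ (weights * r))     # weighted sum of squared residuals

        obs = np.array([1.02, 0.55, -0.48, -1.01])        # measured Doppler (arbitrary units)
        pred_east = np.array([1.00, 0.53, -0.50, -1.00])  # prediction, east-side solution
        pred_west = np.array([0.90, 0.60, -0.40, -1.10])  # prediction, west-side solution
        w = np.ones_like(obs)

        chosen = "east" if loss(obs, pred_east, w) < loss(obs, pred_west, w) else "west"
        print("selected solution:", chosen)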

  19. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  20. A Smoothed Eclipse Model for Solar Electric Propulsion Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan D.; Scheeres, Daniel J.; Parker, Jeffrey S.; Englander, Jacob A.

    2017-01-01

    Solar electric propulsion (SEP) is the dominant design option for employing low-thrust propulsion on a space mission. Spacecraft solar arrays power the SEP system but are subject to blackout periods during solar eclipse conditions. Discontinuity in power available to the spacecraft must be accounted for in trajectory optimization, but gradient-based methods require a differentiable power model. This work presents a power model that smooths the eclipse transition from total eclipse to total sunlight with a logistic function. Example trajectories are computed with differential dynamic programming, a second-order gradient-based method.
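
    A minimal version of such a logistic smoothing might look like the Python sketch below, where a shadow parameter s is negative in umbra and positive in sunlight; the parameterization and sharpness constant are assumptions, not the paper's exact model.

        # Sketch of a logistic eclipse smoothing: available array power ramps smoothly
        # from 0 (umbra) to full (sunlight) as a shadow parameter s crosses 0, giving
        # a differentiable model for gradient-based trajectory optimization. The
        # parameterization and sharpness k are illustrative, not the paper's values.
        import math

        def smoothed_power(s, p_full=10.0, k=50.0):
            # s < 0: in shadow, s > 0: in sunlight; k controls transition sharpness.
            return p_full / (1.0 + math.exp(-k * s))

        for s in (-0.10, -0.02, 0.0, 0.02, 0.10):
            print(f"s={s:+.2f}  power={smoothed_power(s):6.3f} kW")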

  1. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.

  2. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  3. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

    This paper describes analytical methods used in verification of large DC power systems with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistance characteristics. The ISS power system presents numerous challenges with respect to system stability such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to size and complexity of the system. Computer modeling has been extensively used to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs which are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, about various operating scenarios, and about identification of the ones with potential for instability. In this paper we will describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples using applications of DoE to analysis and verification of the ISS power system are provided.
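
    The flavor of a DoE screening run can be conveyed with a small two-level full-factorial sketch in Python; the factors, levels, and response function are invented for illustration and are not the ISS converter models.

        # Sketch of a two-level full-factorial Design of Experiments over a few power
        # system parameters, screening a toy stability metric. Factors, levels, and
        # the response function are assumptions for illustration only.
        from itertools import product

        factors = {
            "source_impedance_mohm": (50, 150),
            "load_power_kw": (2.0, 6.0),
            "cable_length_m": (5, 25),
        }

        def toy_stability_margin(src_mohm, load_kw, cable_m):
            # Placeholder response: margin shrinks with impedance, load, and cable run.
            return 10.0 - 0.02 * src_mohm - 0.8 * load_kw - 0.1 * cable_m

        names = list(factors)
        for levels in product(*factors.values()):
            margin = toy_stability_margin(*levels)
            flag = "" if margin > 0 else "  <-- potential instability"
            print(dict(zip(names, levels)), f"margin={margin:.2f}{flag}")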

  4. Development of a small-scale computer cluster

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created the need for high-performance machines. Computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers can, with the proper software, multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  6. History of the numerical aerodynamic simulation program

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Ballhaus, William F., Jr.

    1987-01-01

    The Numerical Aerodynamic Simulation (NAS) program has reached a milestone with the completion of the initial operating configuration of the NAS Processing System Network. This achievement is the first major milestone in the continuing effort to provide a state-of-the-art supercomputer facility for the national aerospace community and to serve as a pathfinder for the development and use of future supercomputer systems. The underlying factors that motivated the initiation of the program are first identified and then discussed. These include the emergence and evolution of computational aerodynamics as a powerful new capability in aerodynamics research and development, the computer power required for advances in the discipline, the complementary nature of computation and wind tunnel testing, and the need for the government to play a pathfinding role in the development and use of large-scale scientific computing systems. Finally, the history of the NAS program is traced from its inception in 1975 to the present time.

  7. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  8. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  9. Methods and benefits of experimental seismic evaluation of nuclear power plants. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-07-01

    This study reviews experimental techniques, instrumentation requirements, safety considerations, and benefits of performing vibration tests on nuclear power plant containments and internal components. The emphasis is on testing to improve seismic structural models. Techniques for identification of resonant frequencies, damping, and mode shapes are discussed. The benefits of testing with regard to increased damping and more accurate computer models are outlined. A test plan, schedule and budget are presented for a typical PWR nuclear power plant.

  10. Adventures in Computational Grids

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Sometimes one supercomputer is not enough. Or your local supercomputers are busy, or not configured for your job. Or you don't have any supercomputers. You might be trying to simulate worldwide weather changes in real time, requiring more compute power than you could get from any one machine. Or you might be collecting microbiological samples on an island, and need to examine them with a special microscope located on the other side of the continent. These are the times when you need a computational grid.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potts, C.; Faber, M.; Gunderson, G.

    The as-built lattice of the Rapid-Cycling Synchrotron (RCS) had two sets of correction sextupoles and two sets of quadrupoles energized by dc power supplies to control the tune and the tune tilt. With this method of powering these magnets, adjustment of tune conditions during the accelerating cycle as needed was not possible. A set of dynamically programmable power supplies has been built and operated to provide the required chromaticity adjustment. The short accelerating time (16.7 ms) of the RCS and the inductance of the magnets dictated large transistor amplifier power supplies. The required time resolution and waveform flexibility indicated the desirability of computer control. Both the amplifiers and controls are described, along with resulting improvements in the beam performance. A set of octupole magnets and programmable power supplies with similar dynamic qualities have been constructed and installed to control the anticipated high-intensity transverse instability. This system will be operational in the spring of 1981.

  12. Design of point-of-care (POC) microfluidic medical diagnostic devices

    NASA Astrophysics Data System (ADS)

    Leary, James F.

    2018-02-01

    Design of inexpensive and portable hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field requires careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.

  13. Lightning electromagnetics

    NASA Technical Reports Server (NTRS)

    Wahid, Parveen

    1995-01-01

    This project involved the determination of the effective radiated power of lightning sources and the polarization of the radiating source. This requires the computation of the antenna patterns at all the LDAR site receiving antennas. The known radiation patterns and RF signal levels measured at the antennas will be used to determine the effective radiated power of the lightning source. The azimuth and elevation patterns of the antennas in the LDAR system were computed using flight test data that was gathered specifically for this purpose. The results presented in this report deal with the azimuth patterns for all the antennas and the elevation patterns for three of the seven sites.

  14. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance and potentially accuracy and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculations do not substantially affect the quality of the model simulations, provided they are restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
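
    A toy version of the precision experiment can be sketched by integrating the same simple system in float64 and float16 and comparing the results; this stands in for, and is much simpler than, the tests in the model's dynamical core.

        # Toy illustration of reduced-precision arithmetic: integrate a decaying
        # oscillation with the same scheme in float64 and float16 and compare. This
        # stands in for the much larger question studied in the model's dynamical core.
        import numpy as np

        def integrate(dtype, steps=2000, dt=1e-2):
            x = dtype(1.0)
            v = dtype(0.0)
            dt = dtype(dt)
            for _ in range(steps):
                a = dtype(-1.0) * x - dtype(0.1) * v     # damped oscillator
                v = v + dt * a
                x = x + dt * v
            return float(x)

        x64 = integrate(np.float64)
        x16 = integrate(np.float16)
        print(f"float64: {x64:+.6f}  float16: {x16:+.6f}  diff: {abs(x64 - x16):.2e}")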

  15. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust, and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Since it seems clear that applying such methods in nanotechnology will require highly powerful computing systems, this talk discusses techniques and issues for performing these types of computations on parallel systems. We describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization of structural models and assembly sequences using virtual reality techniques; software required to control miniature robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification, and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before it was manufactured.

  16. Development of a Low Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage; no power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. The concept is a unique moving-coil design in which the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low-inductance alternator configurations and compares the predictions with experimental data for the configuration that has been built and is currently being tested.

  17. Development of a Low-Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving-coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low-inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data are compared with the predictions.

  18. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E. [Ridgefield, CT]; Coteus, Paul W. [Yorktown Heights, NY]; Crumley, Paul G. [Yorktown Heights, NY]; Gara, Alan G. [Mount Kisco, NY]; Giampapa, Mark E. [Irvington, NY]; Gooding, Thomas M. [Rochester, MN]; Haring, Rudolf A. [Cortlandt Manor, NY]; Megerian, Mark G. [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D. [Mantorville, MN]; Swetz, Richard A. [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  19. Automated distribution system management for multichannel space power systems

    NASA Technical Reports Server (NTRS)

    Fleck, G. W.; Decker, D. K.; Graves, J.

    1983-01-01

    A NASA sponsored study of space power distribution system technology is in progress to develop an autonomously managed power system (AMPS) for large space power platforms. The multichannel, multikilowatt, utility-type power subsystem proposed presents new survivability requirements and increased subsystem complexity. The computer controls under development for the power management system must optimize the power subsystem performance and minimize the life cycle cost of the platform. A distribution system management philosophy has been formulated which incorporates these constraints. Its implementation using a TI9900 microprocessor and FORTH as the programming language is presented. The approach offers a novel solution to the perplexing problem of determining the optimal combination of loads which should be connected to each power channel for a versatile electrical distribution concept.

  20. Light weight portable operator control unit using an Android-enabled mobile phone

    NASA Astrophysics Data System (ADS)

    Fung, Nicholas

    2011-05-01

    There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities. However, as more capable robots have been developed and introduced to battlefield environments, the problem of interfacing with human controllers has proven challenging. Particularly in military applications, controller requirements can be stringent, ranging from size and power consumption to durability and cost. Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are mobile and have ample computing power; however, laptop PCs are bulky and have greater power requirements. To approach this problem, a lightweight, inexpensive controller was created based on a mobile phone running the Android operating system. It was designed to control an iRobot Packbot through the Army Research Laboratory (ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-Fi communications and a touch-screen interface, together with the flexibility of the Android operating system, made it a compelling platform. The Android-based OCU offers a more portable package and can be easily carried by a soldier along with normal gear. In addition, one-handed operation of the Android OCU leaves the Soldier a free hand for greater flexibility. To validate the Android OCU as a capable controller, experimental data were collected comparing use of the controller with a traditional, tablet-PC-based OCU. Initial analysis suggests that the Android OCU performed positively in the qualitative data collected from participants.

  1. Aortic Wave Dynamics and Its Influence on Left Ventricular Workload

    PubMed Central

    Pahlevan, Niema M.; Gharib, Morteza

    2011-01-01

    The pumping mechanism of the heart is pulsatile, so the heart generates pulsatile flow that enters into the compliant aorta in the form of pressure and flow waves. We hypothesized that there exists a specific heart rate at which the external left ventricular (LV) power is minimized. To test this hypothesis, we used a computational model to explore the effects of heart rate (HR) and aortic rigidity on left ventricular (LV) power requirement. While both mean and pulsatile parts of the pressure play an important role in LV power requirement elevation, at higher rigidities the effect of pulsatility becomes more dominant. For any given aortic rigidity, there exists an optimum HR that minimizes the LV power requirement at a given cardiac output. The optimum HR shifts to higher values as the aorta becomes more rigid. To conclude, there is an optimum condition for aortic waves that minimizes the LV pulsatile load and consequently the total LV workload. PMID:21853075

  2. Two-loop controller for maximizing performance of a grid-connected photovoltaic - fuel cell hybrid power plant

    NASA Astrophysics Data System (ADS)

    Ro, Kyoungsoo

    The study started with the requirement that a photovoltaic (PV) power source should be integrated with other supplementary power sources whether it operates in a stand-alone or grid-connected mode. First, fuel cells for backing up varying PV power were compared in detail with batteries and were found to have more operational benefits. Next, maximizing the performance of a grid-connected PV-fuel cell hybrid system by use of a two-loop controller was discussed. One loop is a neural network controller for maximum power point tracking, which extracts the maximum available solar power from the PV arrays under varying conditions of insolation, temperature, and system load. A real/reactive power controller (RRPC) is the other loop. The RRPC meets the system's requirement for real and reactive power by controlling the incoming fuel to the fuel cell stacks as well as the switching control signals to a power conditioning subsystem. The RRPC is able to achieve more versatile control of real/reactive power than conventional power sources since the hybrid power plant does not contain any rotating mass. Results of time-domain simulations prove not only the effectiveness of the proposed computer models of the two-loop controller, but also their applicability to transient stability analysis of the hybrid power plant. Finally, an environmental evaluation of the proposed hybrid plant was made in terms of the plant's land requirement and lifetime CO2 emissions, and then compared with that of conventional fossil-fuel power generation.

  3. Interpreting Space-Mission LET Requirements for SEGR in Power MOSFETs

    NASA Technical Reports Server (NTRS)

    Lauenstein, J. M.; Ladbury, R. L.; Batchelor, D. A.; Goldsman, N.; Kim, H. S.; Phan, A. M.

    2010-01-01

    A Technology Computer Aided Design (TCAD) simulation-based method is developed to evaluate whether derating of high-energy heavy-ion accelerator test data bounds the risk for single-event gate rupture (SEGR) from much higher energy on-orbit ions for a mission linear energy transfer (LET) requirement. It is shown that a typical derating factor of 0.75 applied to a single-event effect (SEE) response curve defined by high-energy accelerator SEGR test data provides reasonable on-orbit hardness assurance, although in a high-voltage power MOSFET, it did not bound the risk of failure.

  4. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems

    DOE PAGES

    Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...

    2017-07-25

    Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.

  5. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik

    Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.

  6. Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.

    NASA Astrophysics Data System (ADS)

    Moon, Young-Hyun

    Ever-increasing size, complexity, and operating costs in modern power systems have stimulated intensive study of optimal Load Shedding and Generator Rescheduling (LSGR) strategies for secure and economic system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) to an approximately linearized model, and the LP algorithm is currently considered the most powerful tool for solving the LSGR problem. However, the LP algorithms presented in the literature share two essential disadvantages: (i) the piecewise linearization they require introduces a number of new inequalities and slack variables, which creates a significant burden on the computing facilities, and (ii) the objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency when computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows resulting from changes in bus injection power are taken into account in the proposed model by introducing sensitivity coefficients, which avoids the second disadvantage mentioned above. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of optimization theory is included, in which QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory are developed and evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction in both computation time and memory requirements as well as the expected lower generation costs of the optimal solution compared with those obtained with LP. Finally, an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and a new method for multiple contingency simulation is presented.
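
    A toy sketch of the kind of QP formulation described above, assuming a hypothetical three-bus system with given line-flow sensitivity coefficients: generation rescheduling and load shedding are chosen to remove a line overload at minimum quadratic cost. The numbers and the SLSQP solver are illustrative, not the dissertation's algorithm.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical 3-bus example: x = [dg1, dg2, shed] are the generation
    # adjustments at buses 1 and 2 and the load shed at bus 3 (all in MW).
    base_flow, flow_limit = 120.0, 100.0        # MW on the overloaded line
    sens = np.array([0.8, -0.3, -0.6])          # line-flow sensitivity to injections
    cost = np.array([0.02, 0.05])               # quadratic rescheduling cost coefficients
    shed_penalty = 1.0                          # heavy quadratic penalty on shed load

    def objective(x):
        dg, shed = x[:2], x[2]
        return float(cost @ dg**2 + shed_penalty * shed**2)

    constraints = [
        # power balance: rescheduled generation offsets the shed load
        {"type": "eq", "fun": lambda x: x[0] + x[1] + x[2]},
        # overload suppression: post-correction flow must respect the line limit
        {"type": "ineq", "fun": lambda x: flow_limit - (base_flow + sens @ x)},
    ]
    bounds = [(-50, 50), (-50, 50), (0, 80)]    # MW limits; shed must be >= 0

    res = minimize(objective, x0=np.zeros(3), bounds=bounds,
                   constraints=constraints, method="SLSQP")
    print("dg1, dg2, shed =", np.round(res.x, 2))
    ```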

  7. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  8. Universal quantum computation with little entanglement.

    PubMed

    Van den Nest, Maarten

    2013-02-08

    We show that universal quantum computation can be achieved in the standard pure-state circuit model while the entanglement entropy of every bipartition is small in each step of the computation. The entanglement entropy required for large-scale quantum computation even tends to zero. Moreover we show that the same conclusion applies to many entanglement measures commonly used in the literature. This includes e.g., the geometric measure, localizable entanglement, multipartite concurrence, squashed entanglement, witness-based measures, and more generally any entanglement measure which is continuous in a certain natural sense. These results demonstrate that many entanglement measures are unsuitable tools to assess the power of quantum computers.

  9. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  10. Computer control of a microgravity mammalian cell bioreactor

    NASA Technical Reports Server (NTRS)

    Hall, William A.

    1987-01-01

    The initial steps taken in developing a completely menu-driven and totally automated computer control system for a bioreactor are discussed. This bioreactor is an electro-mechanical cell growth system requiring vigorous control of slowly changing parameters, many of which are so dynamically interactive that computer control is a necessity. The process computer will have two main functions. First, it will provide continuous environmental control, utilizing low-signal-level transducers as inputs and high-powered control devices such as solenoids and motors as outputs. Second, it will provide continuous environmental monitoring, including mass data storage and periodic data dumps to a supervisory computer.

  11. Far field and wavefront characterization of a high-power semiconductor laser for free space optical communications

    NASA Technical Reports Server (NTRS)

    Cornwell, Donald M., Jr.; Saif, Babak N.

    1991-01-01

    The spatial pointing angle and far field beamwidth of a high-power semiconductor laser are characterized as a function of CW power and also as a function of temperature. The time-averaged spatial pointing angle and spatial lobe width were measured under intensity-modulated conditions. The measured pointing deviations are determined to be well within the pointing requirements of the NASA Laser Communications Transceiver (LCT) program. A computer-controlled Mach-Zehnder phase-shifter interferometer is used to characterize the wavefront quality of the laser. The rms phase error over the entire pupil was measured as a function of CW output power. Time-averaged measurements of the wavefront quality are also made under intensity-modulated conditions. The measured rms phase errors are determined to be well within the wavefront quality requirements of the LCT program.

  12. Systematic Evaluation of Stochastic Methods in Power System Scheduling and Dispatch with Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yishen; Zhou, Zhi; Liu, Cong

    2016-08-01

    As more wind power and other renewable resources are integrated into the electric power grid, forecast uncertainty brings operational challenges for power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factors and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are scheduled to address forecast uncertainty; however, these approaches require much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.

  13. Computational protein design-the next generation tool to expand synthetic biology applications.

    PubMed

    Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel

    2018-05-02

    One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.

  14. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  15. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.

  16. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge-based systems (KBS) will often require the ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which provides the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High-speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It remains an open question whether, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  17. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  18. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  19. Turbulence modeling of free shear layers for high performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas

    1993-01-01

    In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.

  20. Inertial effects on mechanically braked Wingate power calculations.

    PubMed

    Reiser, R F; Broker, J P; Peterson, M L

    2000-09-01

    The standard procedure for determining subject power output from a 30-s Wingate test on a mechanically braked (friction-loaded) ergometer includes only the braking resistance and flywheel velocity in the computations. However, the inertial effects associated with accelerating and decelerating the crank and flywheel also require energy and, therefore, represent a component of the subject's power output. The present study was designed to determine the effects of drive-system inertia on power output calculations. Twenty-eight male recreational cyclists completed Wingate tests on a Monark 324E mechanically braked ergometer (resistance: 8.5% body mass (BM), starting cadence: 60 rpm). Power outputs were then compared using both standard (without inertial contribution) and corrected methods (with inertial contribution) of calculating power output. Relative 5-s peak power and 30-s average power for the corrected method (14.8 +/- 1.2 W x kg(-1) BM; 9.9 +/- 0.7 W x kg(-1) BM) were 20.3% and 3.1% greater than that of the standard method (12.3 +/- 0.7 W x kg(-1) BM; 9.6 +/- 0.7 W x kg(-1) BM), respectively. Relative 5-s minimum power for the corrected method (6.8 +/- 0.7 W x kg(-1) BM) was 6.8% less than that of the standard method (7.3 +/- 0.8 W x kg(-1) BM). The combined differences in the peak power and minimum power produced a fatigue index for the corrected method (54 +/- 5%) that was 31.7% greater than that of the standard method (41 +/- 6%). All parameter differences were significant (P < 0.01). The inertial contribution to power output was dominated by the flywheel; however, the contribution from the crank was evident. These results indicate that the inertial components of the ergometer drive system influence the power output characteristics, requiring care when computing, interpreting, and comparing Wingate results, particularly among different ergometer designs and test protocols.
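
    A minimal sketch of the corrected calculation described above, assuming the flywheel moment of inertia, braking force, and flywheel radius are known; the corrected power adds the inertial term (I times angular velocity times angular acceleration) to the standard friction-only term, so the standard method under-reads power while the flywheel accelerates and over-reads it while the flywheel decelerates. All numbers are hypothetical.

    ```python
    import numpy as np

    def wingate_power(flywheel_omega, t, brake_force, flywheel_radius, inertia):
        """Friction-only ("standard") and inertia-corrected power from a Wingate test.

        flywheel_omega : flywheel angular velocity samples (rad/s)
        t              : sample times (s)
        brake_force    : braking load applied at the flywheel rim (N)
        flywheel_radius: flywheel radius (m)
        inertia        : flywheel (plus drive train) moment of inertia (kg m^2)
        """
        omega = np.asarray(flywheel_omega, dtype=float)
        t = np.asarray(t, dtype=float)
        p_friction = brake_force * flywheel_radius * omega      # standard method
        alpha = np.gradient(omega, t)                           # angular acceleration
        p_inertial = inertia * omega * alpha                    # accelerating/decelerating the flywheel
        return p_friction, p_friction + p_inertial              # standard, corrected

    # Hypothetical 30 s test: rapid spin-up, then gradual fatigue-related decline
    t = np.linspace(0, 30, 301)
    omega = 40 * (1 - np.exp(-t / 1.5)) * (1 - 0.3 * t / 30)
    std, corr = wingate_power(omega, t, brake_force=60.0, flywheel_radius=0.26, inertia=0.9)
    print(f"peak standard {std.max():.0f} W, peak corrected {corr.max():.0f} W")
    ```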

  1. Research on computer-aided design of modern marine power systems

    NASA Astrophysics Data System (ADS)

    Ding, Dongdong; Zeng, Fanming; Chen, Guojun

    2004-03-01

    To make the MPS (Marine Power System) design process more economical and easier, a new CAD scheme is brought forward which takes advantage of VR (Virtual Reality) and AI (Artificial Intelligence) technologies. This CAD system can shorten the design period and greatly reduce the demands placed on designers' experience. Some key issues, such as the selection of hardware and software for such a system, are also discussed.

  2. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.8

    DTIC Science & Technology

    2013-06-28

    be familiar with UNIX; BASH shell programming; and remote sensing, particularly regarding computer processing of satellite data. The system memory ...and storage requirements are difficult to gauge. The amount of memory needed is dependent upon the amount and type of satellite data you wish to...process; the larger the area, the larger the memory requirement. For example, the entire Atlantic Ocean will require more processing power than the

  3. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    PubMed Central

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438

  4. Hypersonic Inlet for a Laser Powered Propulsion System

    NASA Astrophysics Data System (ADS)

    Harrland, Alan; Doolan, Con; Wheatley, Vincent; Froning, Dave

    2011-11-01

    Propulsion within the lightcraft concept is produced via laser-induced detonation of an incoming hypersonic air stream. This process requires suitable engine configurations that offer good performance over all flight speeds and angles of attack to ensure the required thrust is maintained. Stream-traced hypersonic inlets have demonstrated the required performance in conventional hydrocarbon-fuelled scramjet engines and have been applied to the laser-powered lightcraft vehicle. This paper outlines the current methodology employed in the inlet design, with a particular focus on the performance of the lightcraft inlet at angle of attack. Fully three-dimensional turbulent computational fluid dynamics simulations have been performed on a variety of inlet configurations, and the performance of the lightcraft inlets has been evaluated at differing angles of attack. An idealized laser detonation simulation has also been performed to verify that the lightcraft inlet does not unstart during the laser-powered propulsion cycle.

  5. Neural-like computing with populations of superparamagnetic basis functions.

    PubMed

    Mizrahi, Alice; Hirtzlin, Tifenn; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Grollier, Julie; Querlioz, Damien

    2018-04-18

    In neuroscience, population coding theory demonstrates that neural assemblies can achieve fault-tolerant information processing. Mapped to nanoelectronics, this strategy could allow for reliable computing with scaled-down, noisy, imperfect devices. Doing so requires that the population components form a set of basis functions in terms of their response functions to inputs, offering a physical substrate for computing. Such a population can be implemented with CMOS technology, but the corresponding circuits have high area or energy requirements. Here, we show that nanoscale magnetic tunnel junctions can instead be assembled to meet these requirements. We demonstrate experimentally that a population of nine junctions can implement a basis set of functions, providing the data to achieve, for example, the generation of cursive letters. We design hybrid magnetic-CMOS systems based on interlinked populations of junctions and show that they can learn to realize non-linear variability-resilient transformations with a low imprint area and low power.

  6. Triple-server blind quantum computation using entanglement swapping

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol in which the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since it does not require any quantum computational power or quantum memory, or the ability to prepare any quantum states, and only needs to be capable of accessing quantum channels.

  7. Reliable computation from contextual correlations

    NASA Astrophysics Data System (ADS)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.

  8. Compressive sensing scalp EEG signals: implementations and practical performance.

    PubMed

    Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther

    2012-11-01

    Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.
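
    A minimal sketch of the sensing-side computation, assuming a random Gaussian sensing matrix shared (via a seed) with the receiver; the wearable node performs only the low-complexity projection of each EEG window, while sparse reconstruction (e.g. basis pursuit) happens off-node. Window length, sampling rate, and compression ratio are illustrative.

    ```python
    import numpy as np

    def compress_window(x, m, seed=0):
        """Compressively sample one EEG window with a random sensing matrix.

        x: raw EEG window of length n (samples)
        m: number of compressed measurements to keep (m << n)
        The seed must be shared with the receiver so it can rebuild the same
        sensing matrix for sparse reconstruction, which is done off-node.
        """
        x = np.asarray(x, dtype=float)
        rng = np.random.default_rng(seed)
        phi = rng.standard_normal((m, x.size)) / np.sqrt(m)   # sensing matrix
        return phi @ x                                        # m measurements to transmit

    # Hypothetical 2 s window at 256 Hz compressed to one quarter of its samples
    n, fs = 512, 256
    tt = np.arange(n) / fs
    eeg = np.sin(2 * np.pi * 10 * tt) + 0.3 * np.random.default_rng(1).standard_normal(n)
    y = compress_window(eeg, m=n // 4)
    print(f"compression ratio {n / y.size:.1f}:1, measurements sent: {y.size}")
    ```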

  9. Responding to Information Needs in the 1980s.

    ERIC Educational Resources Information Center

    McGraw, Harold W., Jr.

    1979-01-01

    Argues that technological developments in cable television, computers, and telecommunications could decentralize power and put the resources of the new technology more broadly at the command of individuals and small groups, but that this potential requires action to be realized. (Author)

  10. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  11. Square Kilometre Array Science Data Processing

    NASA Astrophysics Data System (ADS)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further derived data products, archiving, and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of SDP are: identifying sufficient parallelism to utilise the very large numbers of separate compute cores required to provide exascale computing throughput; managing the high internal data flow rates efficiently; a conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases; and system management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. In my talk I will present possible initial architectures for the SDP system that attempt to address these and other challenges.

  12. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. As the myriad of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.

  13. Prediction of dry ice mass for firefighting robot actuation

    NASA Astrophysics Data System (ADS)

    Ajala, M. T.; Khan, Md R.; Shafie, A. A.; Salami, MJE; Mohamad Nor, M. I.

    2017-11-01

    The limited performance of electrically actuated firefighting robots in high-temperature fire environments has led to research on alternative propulsion systems for the mobility of firefighting robots in such environments. Capitalizing on the limitations of these electric actuators, we suggested a gas-actuated propulsion system in our earlier study. The propulsion system is made up of a pneumatic motor as the actuator (for the robot) and carbon dioxide gas (self-generated from dry ice) as the power source. To satisfy the consumption requirement (9 cfm) of the motor for efficient actuation of the robot in the fire environment, the volume of carbon dioxide gas, as well as the corresponding mass of dry ice that will produce the required volume for powering and actuating the robot, must be determined. This article therefore presents a computational analysis to predict the volumetric requirement and the dry ice mass sufficient to power a carbon dioxide gas propelled autonomous firefighting robot in a high-temperature environment. The governing equation of the sublimation of dry ice to carbon dioxide is established. An operating time of 2105.53 s and operating pressures ranging from 137.9 kPa to 482.65 kPa were obtained from the consumption rate of the motor. Thus, 8.85 m3 is computed as the volume requirement of the CAFFR, while the corresponding dry ice mass for CAFFR actuation ranges from 21.67 kg to 75.83 kg depending on the operating pressure.
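
    Under the assumption that the sublimated CO2 is metered near room temperature at the stated operating pressures, the reported mass range can be approximately reproduced from the 8.85 m3 volume requirement with the ideal gas law; the sketch below is an illustration of that estimate, not the article's governing sublimation equation.

    ```python
    # Ideal-gas estimate of the dry ice mass needed to supply a given CO2 volume.
    R = 8.314          # J/(mol K)
    M_CO2 = 0.04401    # kg/mol
    T = 298.0          # K, assumed gas temperature near the motor inlet

    def dry_ice_mass(volume_m3, pressure_pa, temperature_k=T):
        """Mass of dry ice that sublimates into `volume_m3` of CO2 at the given state."""
        moles = pressure_pa * volume_m3 / (R * temperature_k)
        return moles * M_CO2

    volume = 8.85                      # m^3, the computed consumption requirement
    for p_kpa in (137.9, 482.65):      # stated operating pressure range
        print(f"{p_kpa:7.2f} kPa -> {dry_ice_mass(volume, p_kpa * 1e3):5.1f} kg")
    ```

    At these two pressures the estimate gives roughly 21.7 kg and 75.9 kg, close to the 21.67 kg to 75.83 kg range quoted in the abstract.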

  14. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been considered properly by data centre developers, and large data centres in particular struggle with power costs and greenhouse gas production. Hence, power-efficient mechanisms are necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking into account the maximum absolute deviation during VM placement, the power consumption as well as the service level agreement (SLA) violations in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation to reduce the power consumption by about 5% compared to the modified best-fit decreasing algorithm, while at the same time the SLA violation is improved by 6%. Finally, learning automata are used to trade off power consumption reduction on one side against the SLA violation percentage on the other.
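
    A minimal sketch of the best-fit decreasing heuristic named above, packing VMs by a single normalized CPU demand onto homogeneous hosts; the VM and host sizes are hypothetical, and the paper's grouping, maximum-absolute-deviation criterion, and SLA-aware refinements are not modeled.

    ```python
    def best_fit_decreasing(vm_demands, host_capacity):
        """Place VMs on as few hosts as possible with the best-fit decreasing heuristic.

        vm_demands:    list of VM resource demands (e.g. normalized CPU)
        host_capacity: capacity of each (homogeneous) host
        Returns a list of hosts, each a list of the VM demands placed on it.
        """
        hosts = []                                   # residual-capacity bookkeeping per host
        for demand in sorted(vm_demands, reverse=True):
            # pick the host whose remaining capacity fits the VM most tightly
            best = min((h for h in hosts if h["free"] >= demand),
                       key=lambda h: h["free"] - demand, default=None)
            if best is None:                         # no host fits: power on a new one
                best = {"free": host_capacity, "vms": []}
                hosts.append(best)
            best["vms"].append(demand)
            best["free"] -= demand
        return [h["vms"] for h in hosts]

    placement = best_fit_decreasing([0.6, 0.3, 0.5, 0.2, 0.7, 0.1, 0.4], host_capacity=1.0)
    print(f"{len(placement)} hosts powered on:", placement)
    ```

    Fewer powered-on hosts translates directly into lower idle power, which is why bin-packing heuristics of this kind are a common starting point for power-aware placement studies.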

  15. Energy efficient wireless sensor network for structural health monitoring using distributed embedded piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Li, Peng; Olmi, Claudio; Song, Gangbing

    2010-04-01

    Piezoceramic-based transducers are widely researched and used for structural health monitoring (SHM) systems due to the piezoceramic material's inherent advantage of dual sensing and actuation. Wireless sensor network (WSN) technology benefits piezoceramic-based structural health monitoring systems by allowing easy and flexible installation, low system cost, and increased robustness over wired systems. However, piezoceramic wireless SHM systems still face some drawbacks. One of these is that piezoceramic-based SHM systems require relatively high computational capability to calculate damage information, whereas battery-powered WSN sensor nodes have strict power consumption limits and hence limited computational power. On the other hand, commonly used centralized processing networks require wireless sensors to transmit all data back to the network coordinator for analysis; this signal processing procedure can be problematic for piezoceramic-based SHM applications as it is neither energy efficient nor robust. In this paper, we aim to solve these problems with a distributed wireless sensor network for piezoceramic-based structural health monitoring systems. Three important issues - the power system, waking up from sleep for impact detection, and local data processing - are addressed to reach optimized energy efficiency. Instead of the swept-sine excitation used in earlier research, several sine frequencies were used in sequence to excite the concrete structure. The wireless sensors record the sine excitations and compute the time-domain energy for each sine frequency locally to detect energy changes. By comparing the data of the damaged concrete frame with the healthy data, we are able to extract damage information for the concrete frame. A relatively powerful wireless microcontroller was used to carry out the sampling and distributed data processing in real time. The distributed wireless network dramatically reduced the data transmission between the wireless sensors and the wireless coordinator, which in turn reduced the power consumption of the overall system.
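
    A minimal sketch of the local processing step described above: for each excitation frequency in the sequence, the node computes the time-domain energy of its recorded response and transmits only those scalars, so damage detection against a healthy baseline needs a few numbers per node rather than raw waveforms. The signals, sampling rate, and damage model below are synthetic.

    ```python
    import numpy as np

    def band_energies(recording, fs, excitation_freqs, segment_s):
        """Time-domain energy of the sensor response to each sine excitation.

        recording:        samples recorded while the frequencies are driven in sequence
        fs:               sampling rate (Hz)
        excitation_freqs: excitation frequencies, in the order they were driven
        segment_s:        duration of each excitation segment (s)
        """
        seg = int(fs * segment_s)
        energies = {}
        for i, f in enumerate(excitation_freqs):
            x = recording[i * seg:(i + 1) * seg]
            energies[f] = float(np.sum(x ** 2))      # only these scalars are transmitted
        return energies

    # Synthetic example: the response at 3 kHz drops after (simulated) damage
    fs, freqs, dur = 20_000, [1000, 3000, 5000], 0.5
    t = np.arange(int(fs * dur)) / fs
    healthy = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    damaged = np.concatenate([(0.4 if f == 3000 else 1.0) * np.sin(2 * np.pi * f * t) for f in freqs])
    h, d = band_energies(healthy, fs, freqs, dur), band_energies(damaged, fs, freqs, dur)
    print({f: round(d[f] / h[f], 2) for f in freqs})   # energy ratio vs. healthy baseline
    ```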

  16. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  17. Automating security monitoring and analysis for Space Station Freedom's electric power system

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Sobajic, Dejan J.; Pao, Yoh-Han

    1990-01-01

    Operating a large, space power system requires classifying the system's status and analyzing its security. Conventional algorithms are used by terrestrial electric utilities to provide such information to their dispatchers, but their application aboard Space Station Freedom will consume too much processing time. A new approach for monitoring and analysis using adaptive pattern techniques is presented. This approach yields an on-line security monitoring and analysis algorithm that is accurate and fast; and thus, it can free the Space Station Freedom's power control computers for other tasks.

  18. Automating security monitoring and analysis for Space Station Freedom's electric power system

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Sobajic, Dejan J.; Pao, Yoh-Han

    1990-01-01

    Operating a large, space power system requires classifying the system's status and analyzing its security. Conventional algorithms are used by terrestrial electric utilities to provide such information to their dispatchers, but their application aboard Space Station Freedom will consume too much processing time. A novel approach for monitoring and analysis using adaptive pattern techniques is presented. This approach yields an on-line security monitoring and analysis algorithm that is accurate and fast; and thus, it can free the Space Station Freedom's power control computers for other tasks.

  19. Analysis on energy consumption index system of thermal power plant

    NASA Astrophysics Data System (ADS)

    Qian, J. B.; Zhang, N.; Li, H. F.

    2017-05-01

    Under today's increasingly tight resource constraints, energy conservation is a realistic way to ease energy supply contradictions, and reducing the energy consumption of thermal power plants has become an inevitable direction of development. Building a thermal power “small index” monitoring and optimization management system with computer network technology is how a power plant applies information technology to meet the demands of competition in the electricity market. This paper first describes the state of research on energy-saving theory for thermal power, then attempts to establish the small index system and build a “small index” monitoring and optimization management system for a thermal power plant. Finally, it elaborates the key technical and economic issues for small indices in thermal power plants that remain to be studied and resolved.

  20. Ultrasonic power measurement system based on acousto-optic interaction.

    PubMed

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
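
    As a hedged sketch of the image-processing chain described (smoothing, binarization, and extraction of the bright diffraction spots), the snippet below integrates the intensity of each spot using connected-component labeling from SciPy rather than the authors' specific software; the threshold ratio and the synthetic test frame are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def diffraction_order_intensities(image, threshold_ratio=0.2):
        """Smooth the frame, binarize it, label the bright diffraction spots,
        and integrate the grey-level intensity inside each spot."""
        smoothed = ndimage.gaussian_filter(image.astype(float), sigma=2.0)
        binary = smoothed > threshold_ratio * smoothed.max()
        labels, n_spots = ndimage.label(binary)
        sums = ndimage.sum(smoothed, labels, index=list(range(1, n_spots + 1)))
        return sorted(sums, reverse=True)          # brightest order first

    if __name__ == "__main__":
        # Synthetic frame with two Gaussian "diffraction orders" standing in
        # for a camera image of the acousto-optic diffraction pattern.
        y, x = np.mgrid[0:128, 0:128]
        frame = (np.exp(-((x - 40) ** 2 + (y - 64) ** 2) / 30.0)
                 + 0.5 * np.exp(-((x - 90) ** 2 + (y - 64) ** 2) / 30.0))
        print(diffraction_order_intensities(frame))
    ```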

  1. Ultrasonic power measurement system based on acousto-optic interaction

    NASA Astrophysics Data System (ADS)

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  2. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
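
    A toy illustration (not the patented method itself) of how a hardware power-consumption profile and an application's operation mix could be combined into an application power profile; the operation classes and wattages below are hypothetical.

    ```python
    # Hypothetical per-operation hardware power profile (watts); neither the
    # names nor the numbers come from the patent.
    HARDWARE_PROFILE_WATTS = {
        "compute": 95.0,
        "memory": 35.0,
        "network": 20.0,
        "idle": 12.0,
    }

    def application_power_profile(op_seconds):
        """op_seconds maps an operation class to the seconds the application
        spends in it; returns total energy (J) and average power (W)."""
        energy = sum(HARDWARE_PROFILE_WATTS[op] * t for op, t in op_seconds.items())
        duration = sum(op_seconds.values())
        return {"energy_J": energy, "avg_power_W": energy / duration}

    print(application_power_profile(
        {"compute": 40.0, "memory": 15.0, "network": 5.0, "idle": 10.0}))
    ```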

  3. Applications of high power lasers. [using reflection holograms for machining and surface treatment

    NASA Technical Reports Server (NTRS)

    Angus, J. C.

    1979-01-01

    The use of computer generated reflection holograms in conjunction with high power lasers for precision machining of metals and ceramics was investigated. The reflection holograms, which were developed and made to work at both optical (He-Ne, 6328 A) and infrared (CO2, 10.6 micron) wavelengths, meet the primary practical requirement of ruggedness and are relatively economical and simple to fabricate. The technology is now sufficiently advanced that reflection holography could be used as a practical manufacturing technique in certain applications requiring low power densities. However, the present holograms are energy inefficient, and much of the laser power is lost in the zero order spot and higher diffraction orders. Improvements of laser machining over conventional methods are discussed and additional applications are listed. Possible uses in the electronics industry include drilling holes in printed circuit boards, making soldered connections, and resistor trimming.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Read, Michael; Ives, Robert Lawrence; Marsden, David

    The Phase II program developed an internal RF coupler that transforms the whispering gallery RF mode produced in gyrotron cavities to an HE11 waveguide mode propagating in corrugated waveguide. This power is extracted from the vacuum using a broadband, chemical vapor deposited (CVD) diamond, Brewster angle window capable of transmitting more than 1.5 MW CW of RF power over a broad range of frequencies. This coupling system eliminates the Mirror Optical Units now required to externally couple Gaussian output power into corrugated waveguide, significantly reducing system cost and increasing efficiency. The program simulated the performance using a broad range of advanced computer codes to optimize the design. Both a direct coupler and Brewster angle window were built and tested at low and high power. Test results confirmed the performance of both devices and demonstrated they are capable of achieving the required performance for scientific, defense, industrial, and medical applications.

  5. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  6. CALIPSO Instrument OFF

    Atmospheric Science Data Center

    2013-12-05

    ... the 100MeV levels are above the 1 pfu, which requires the computer to be powered down.   Recovery planning has begun and will be ... Payload was returned to Data Acquisition and regular nightly science data downlinks following the Inclination Maneuver on April 16th at ...

  7. Selecting the Right Software.

    ERIC Educational Resources Information Center

    Shearn, Joseph

    1987-01-01

    Selection of administrative software requires analyzing present needs and, to meet future needs, choosing software that will function with a more powerful computer system. Other important factors to include are a professional system demonstration, maintenance and training, and financial considerations that allow leasing or renting alternatives.…

  8. Cheminformatics and Computational Chemistry: A Powerful Combination for the Encoding of Process Science

    EPA Science Inventory

    The registration of new chemicals under the Toxicological Substances Control Act (TSCA) and new pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) requires knowledge of the process science underlying the transformation of organic chemicals in natural...

  9. Fail-safe bidirectional valve driver

    NASA Technical Reports Server (NTRS)

    Fujimoto, H.

    1974-01-01

    Cross-coupled diodes are added to commonly used bidirectional valve driver circuit to protect circuit and power supply. Circuit may be used in systems requiring fail-safe bidirectional valve operation, particularly in chemical- and petroleum-processing control systems and computer-controlled hydraulic or pneumatic systems.

  10. GAPIT version 2: an enhanced integrated tool for genomic association and prediction

    USDA-ARS?s Scientific Manuscript database

    Most human diseases and agriculturally important traits are complex. Dissecting their genetic architecture requires continued development of innovative and powerful statistical methods. Corresponding advances in computing tools are critical to efficiently use these statistical innovations and to enh...

  11. ATTACK WARNING: Better Management Required to Resolve NORAD Integration Deficiencies

    DTIC Science & Technology

    1989-07-01

    protocols, Cumbersome Integration ... different manufacturers' computer systems can communicate with each other. The warning and assessment subsystems ... by treating the TW/AA system as a single system subject to program review and oversight by the Defense Acquisition Board. Within this management ... restore the unit to operation quickly enough after a power loss to meet NORAD mission requirements. The Air Force intends to have the contractor

  12. Technology Assessment: 1983 Forecast of Future Test Technology Requirements.

    DTIC Science & Technology

    1983-06-01

    effectively utilizes existing vehicle space, power, and support equipment while maintaining critical interfaces with on-board computers and fire control ... Scan Converter, EAR Electronically Agile Radar, E-O Electro-Optics, FET Field Effect Transistor, FLIR Forward Looking Infrared, GaAs Gallium Arsenide, HEL ... They might be a part of a large ATE system due to such things as the environmental effects on noise and signal/power loss. A summary of meaningful

  13. Assessment of Li/SOCL2 Battery Technology; Reserve, Thin-Cell Design. Volume 3

    DTIC Science & Technology

    1990-06-01

    power density and efficiency of an operating electrochemical system. The method is general - the examples to illustrate the selected points pertain to ... System: Design, Manufacturing and QC Considerations), S. Szpak, P. A. Mosier-Boss, and J. J. Smith, 34th International Power Sources Symposium, Cherry ... (i) the computer time required to evaluate the integral in Eqn. III, and (ii) the lack of generality in the attainable lineshapes. However, since this

  14. The Challenges of Human-Autonomy Teaming

    NASA Technical Reports Server (NTRS)

    Vera, Alonso

    2017-01-01

    Machine intelligence is improving rapidly based on advances in big data analytics, deep learning algorithms, networked operations, and continuing exponential growth in computing power (Moore's Law). This growth in the power and applicability of increasingly intelligent systems will change the roles of humans, shifting them to tasks where adaptive problem solving, reasoning, and decision-making are required. This talk will address the challenges involved in engineering autonomous systems that function effectively with humans in aeronautics domains.

  15. Utilizing HDF4 File Content Maps for the Cloud

    NASA Technical Reports Server (NTRS)

    Lee, Hyokyung Joe

    2016-01-01

    We demonstrate a prototype study showing that HDF4 file content maps can be used to organize data efficiently in a cloud object storage system and thereby facilitate cloud computing. This approach can be extended to any binary data format and to any existing big data analytics solution powered by cloud computing, because the HDF4 file content map project began as a long term preservation effort for NASA data and does not require the HDF4 APIs to access the data.

  16. The power of pezonomics

    NASA Technical Reports Server (NTRS)

    Orr, Joel N.

    1995-01-01

    This reflection on the human-computer interface and its requirements, as virtual technology advances, proposes a new term: 'Pezonomics'. The term replaces ergonomics ('the law of work') with a definition pointing to 'the law of play.' The necessity of this term, the author reasons, comes from the need to 'capture the essence of play and calibrate our computer systems to its cadences.' Pezonomics will ensure that artificial environments, in particular virtual reality, are user friendly.

  17. Rational calculation accuracy in acousto-optical matrix-vector processor

    NASA Astrophysics Data System (ADS)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.

  18. NAVO MSRC Navigator. Spring 2003

    DTIC Science & Technology

    2003-01-01

    computational model run on the IBM POWER4 (MARCELLUS) in support of the Airborne Laser Challenge Project II. The data were visualized using Alias|Wavefront Maya ... Turbulence in a Jet Stream in the Airborne Laser Context (High Performance Computing) ... Largest NAVO MSRC System Becomes Even Bigger and Better ... Using the smp ... centimeters (cm). The resolution requirement to resolve the microjets and the flow outside in the combustor is too severe for any single numerical method

  19. In-Situ Tuff Water Migration/Heater Experiment: posttest thermal analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Johnstone, J.K.; Nunziato, J.W.

    This report describes posttest laboratory experiments and thermal computations for the In-Situ Tuff Water Migration/Heater Experiment that was conducted in Grouse Canyon Welded Tuff in G-Tunnel, Nevada Test Site. Posttest laboratory experiments were designed to determine the accuracy of the temperatures measured by the rockwall thermocouples during the in-situ test. The posttest laboratory experiments showed that the measured in-situ rockwall temperatures were 10 to 20 °C higher than the true rockwall temperatures. The posttest computational results, obtained with the thermal conduction code COYOTE, were compared with the experimentally obtained data and with calculated pretest results. Daily heater output power fluctuations (±4%) caused by input power line variations and the sensitivity of temperature to heater output power required care in selecting the average heater output power values used in the code. The posttest calculated results compare reasonably well with the experimental data. 10 references, 14 figures, 5 tables.

  20. RighTime: A real time clock correcting program for MS-DOS-based computer systems

    NASA Technical Reports Server (NTRS)

    Becker, G. Thomas

    1993-01-01

    A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
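
    As an illustration of the general idea of learning a clock correction from occasional accurate time sets (not RighTime's actual algorithm), a minimal linear-drift sketch:

    ```python
    class DriftCorrector:
        """Learn a linear clock-drift rate from occasional accurate time sets,
        then correct subsequent readings (illustration only)."""

        def __init__(self):
            self.rate = 0.0          # seconds of error per second of elapsed time
            self.last_sync = None    # (local_clock, reference_time)

        def sync(self, local_clock, reference_time):
            """Call whenever an accurate time set is available."""
            if self.last_sync is not None:
                local0, ref0 = self.last_sync
                elapsed = reference_time - ref0
                if elapsed > 0:
                    self.rate = ((local_clock - local0) - elapsed) / elapsed
            self.last_sync = (local_clock, reference_time)

        def corrected(self, local_clock):
            """Map a raw local clock reading to corrected time."""
            if self.last_sync is None:
                return local_clock
            local0, ref0 = self.last_sync
            return ref0 + (local_clock - local0) * (1.0 - self.rate)

    # A clock that gains 50 ppm is characterized by two time sets, then corrected.
    c = DriftCorrector()
    c.sync(local_clock=0.0, reference_time=0.0)
    c.sync(local_clock=1000.05, reference_time=1000.0)   # ran 50 ms fast over 1000 s
    print(c.corrected(2000.10))                          # approximately 2000.0
    ```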

  1. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
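
    For orientation, the classical Ruze result for a single uniform rms surface error (which the model above generalizes to nonuniform, annular errors and arbitrary illumination) relates the gain loss directly to the error-to-wavelength ratio:

    ```latex
    % Classical Ruze relation for a uniform rms surface error \epsilon:
    G(\epsilon) = G_0 \, e^{-\left(4\pi\epsilon/\lambda\right)^{2}},
    \qquad
    \Delta G\,[\mathrm{dB}] \approx -685.8\,\left(\frac{\epsilon}{\lambda}\right)^{2}.
    ```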

  2. Master Software Requirements Specification

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2003-01-01

    A basic function of a computational grid such as the NASA Information Power Grid (IPG) is to allow users to execute applications on remote computer systems. The Globus Resource Allocation Manager (GRAM) provides this functionality in the IPG and many other grids at this time. While the functionality provided by GRAM clients is adequate, GRAM does not support useful features such as staging several sets of files, running more than one executable in a single job submission, and maintaining historical information about execution operations. This specification is intended to provide the environmental and software functional requirements for the IPG Job Manager V2.0 being developed by AMTI for NASA.

  3. Requirements for a multifunctional code architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiihonen, O.; Juslin, K.

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way the input data and calculational results are managed.

  4. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared memory nodes of the NERSC IBM SP. Performance results are shown as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons.(This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)

  5. High Available COTS Based Computer for Space

    NASA Astrophysics Data System (ADS)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to a flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability demands as well as the increase in required data processing power. Alongside the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  6. High-power klystrons

    NASA Astrophysics Data System (ADS)

    Siambis, John G.; True, Richard B.; Symons, R. S.

    1994-05-01

    Novel emerging applications in advanced linear collider accelerators, ionospheric and atmospheric sensing and modification and a wide spectrum of industrial processing applications, have resulted in microwave tube requirements that call for further development of high power klystrons in the range from S-band to X-band. In the present paper we review recent progress in high power klystron development and discuss some of the issues and scaling laws for successful design. We also discuss recent progress in electron guns with potential grading electrodes for high voltage with short and long pulse operation via computer simulations obtained from the code DEMEOS, as well as preliminary experimental results. We present designs for high power beam collectors.

  7. Joint terminals and relay optimization for two-way power line information exchange systems with QoS constraints

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolin; Rong, Yue

    2015-12-01

    The quality-of-service (QoS) criteria (measured in terms of the minimum capacity requirement in this paper) are very important to practical indoor power line communication (PLC) applications as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate the joint terminals and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable to obtain. To overcome this challenge, we propose an alternating optimization (AO) method to decompose this problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.

  8. Rechargeable lithium battery for use in applications requiring a low to high power output

    DOEpatents

    Bates, John B.

    1996-01-01

    Rechargeable lithium batteries which employ characteristics of thin-film batteries can be used to satisfy power requirements within a relatively broad range. Thin-film battery cells utilizing a film of anode material, a film of cathode material and an electrolyte of an amorphous lithium phosphorus oxynitride can be connected in series or parallel relationship for the purpose of withdrawing electrical power simultaneously from the cells. In addition, such battery cells which employ a lithium intercalation compound as its cathode material can be connected in a manner suitable for supplying power for the operation of an electric vehicle. Still further, by incorporating within the battery cell a relatively thick cathode of a lithium intercalation compound, a relatively thick anode of lithium and an electrolyte film of lithium phosphorus oxynitride, the battery cell is rendered capable of supplying power for any of a number of consumer products, such as a laptop computer or a cellular telephone.

  9. Rechargeable lithium battery for use in applications requiring a low to high power output

    DOEpatents

    Bates, John B.

    1997-01-01

    Rechargeable lithium batteries which employ characteristics of thin-film batteries can be used to satisfy power requirements within a relatively broad range. Thin-film battery cells utilizing a film of anode material, a film of cathode material and an electrolyte of an amorphous lithium phosphorus oxynitride can be connected in series or parallel relationship for the purpose of withdrawing electrical power simultaneously from the cells. In addition, such battery cells which employ a lithium intercalation compound as its cathode material can be connected in a manner suitable for supplying power for the operation of an electric vehicle. Still further, by incorporating within the battery cell a relatively thick cathode of a lithium intercalation compound, a relatively thick anode of lithium and an electrolyte film of lithium phosphorus oxynitride, the battery cell is rendered capable of supplying power for any of a number of consumer products, such as a laptop computer or a cellular telephone.

  10. Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms

    DOE PAGES

    Roald, Line Alnaes; Andersson, Goran

    2017-08-29

    Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows, and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample based reformulations that do not require any approximation or relaxation of the AC power flow equations.

  11. Experimental Study and Optimization of Thermoelectricity-Driven Autonomous Sensors for the Chimney of a Biomass Power Plant

    NASA Astrophysics Data System (ADS)

    Rodríguez, A.; Astrain, D.; Martínez, A.; Aranguren, P.

    2014-06-01

    In the work discussed in this paper a thermoelectric generator was developed to harness waste heat from the exhaust gas of a boiler in a biomass power plant and thus generate electric power to operate a flowmeter installed in the chimney, to make it autonomous. The main objective was to conduct an experimental study to optimize a previous design obtained after computational work based on a simulation model for thermoelectric generators. First, several places inside and outside the chimney were considered as sites for the thermoelectricity-driven autonomous sensor. Second, the thermoelectric generator was built and tested to assess the effect of the cold-side heat exchanger on the electric power, power consumption by the flowmeter, and transmission frequency. These tests provided the best configuration for the heat exchanger, which met the transmission requirements for different working conditions. The final design is able to transmit every second and requires neither batteries nor electric wires. It is a promising application in the field of thermoelectric generation.

  12. Lunar PMAD technology assessment

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    1992-01-01

    This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
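
    A minimal sketch of the kind of component-mass rollup described above (stages, control and monitoring, enclosure, and thermal management), assuming hypothetical scaling factors and overhead fractions; none of these numbers come from the report.

    ```python
    def converter_mass_kg(power_kw,
                          specific_mass_kg_per_kw=1.2,   # power-stage scaling (assumed)
                          control_kg=0.8,                # control and monitoring (assumed)
                          enclosure_fraction=0.15,       # enclosure vs. internal mass (assumed)
                          thermal_kg_per_kw_loss=2.0,    # cold plate / radiator share (assumed)
                          efficiency=0.94):              # converter efficiency (assumed)
        """Toy mass rollup: stages + control/monitoring + enclosure + thermal."""
        stages = specific_mass_kg_per_kw * power_kw
        internals = stages + control_kg
        enclosure = enclosure_fraction * internals
        thermal = thermal_kg_per_kw_loss * power_kw * (1.0 - efficiency)
        return internals + enclosure + thermal

    print(round(converter_mass_kg(25.0), 1))   # rough estimate for a 25 kW converter
    ```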

  13. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
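
    The cost argument can be summarized with the standard discrete-adjoint relations (stated here in generic notation, not NASA Langley's specific formulation): one adjoint solve per output gives the derivative with respect to every design variable D.

    ```latex
    % Steady residual R(q, D) = 0, state q, design variables D, output J(q, D):
    \left(\frac{\partial R}{\partial q}\right)^{\!T}\!\lambda
       = \left(\frac{\partial J}{\partial q}\right)^{\!T},
    \qquad
    \frac{dJ}{dD} = \frac{\partial J}{\partial D} - \lambda^{T}\frac{\partial R}{\partial D}.
    ```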

  14. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numeric intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  15. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE PAGES

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...

    2017-08-17

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within their limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.

  16. Detecting chaos in irregularly sampled time series.

    PubMed

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented, which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
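
    A minimal sketch of the spectrum-estimation step for irregular samples using SciPy's Lomb-Scargle routine (the paper's chaos-detection criterion applied to that spectrum is not reproduced here); the sample times, noise level, and injected frequency are illustrative.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 100.0, size=800))        # irregular sample times
    y = np.sin(2 * np.pi * 0.3 * t) + 0.2 * rng.standard_normal(t.size)

    freqs_hz = np.linspace(0.01, 1.0, 500)                # trial frequencies
    power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_hz)
    print(freqs_hz[np.argmax(power)])                     # close to the injected 0.3
    ```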

  17. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within their limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.

  18. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  19. Temperature Distribution Within a Defect-Free Silicon Carbide Diode Predicted by a Computational Model

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Neudeck, Philip G.

    2000-01-01

    Most solid-state electronic devices (diodes, transistors, and integrated circuits) are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of an SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three-dimensional computational modeling can provide useful predictions for this class of problem. The model is now being extended to include the effects of crystal defects. The model will provide unique insights into how high the temperature rises in the vicinity of the defects in a diode at various power densities and pulse durations. This information also will help researchers in understanding and designing SiC devices for safe and reliable operation in high-power circuits.
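
    For a feel of the kind of calculation involved, here is a one-dimensional explicit finite-difference toy (not the three-dimensional FLUENT model) in which a short heating pulse deposited in a thin "junction" layer raises the local temperature; the material constants are approximate SiC values and the pulse power density is purely illustrative.

    ```python
    import numpy as np

    k, rho, cp = 370.0, 3210.0, 690.0         # W/m-K, kg/m^3, J/kg-K (approx. SiC)
    alpha = k / (rho * cp)

    L, n = 200e-6, 201                        # 200-micron slab, grid points
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / alpha                  # stable explicit (FTCS) time step
    q_volumetric = 5e14                       # W/m^3 in the heated layer (illustrative)
    pulse = 0.5e-3                            # 0.5 ms breakdown pulse

    T = np.full(n, 300.0)                     # start at 300 K; slab faces held at 300 K
    heated = slice(n // 2 - 1, n // 2 + 2)    # thin layer at the "junction"

    t = 0.0
    while t < pulse:
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * alpha * lap                   # conduction step
        T[heated] += dt * q_volumetric / (rho * cp)   # heat deposition in the layer
        T[0] = T[-1] = 300.0                          # fixed-temperature boundaries
        t += dt

    print(f"peak temperature at end of pulse: {T.max():.0f} K")
    ```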

  20. A new algorithm for real-time optimal dispatch of active and reactive power generation retaining nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, L.; Rao, N.D.

    1983-04-01

    This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a cartesian coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by a reduced gradient technique and a penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to a smaller storage requirement. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem that is faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-a-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model, and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.

  1. SAR processing in the cloud for oil detection in the Arctic

    NASA Astrophysics Data System (ADS)

    Garron, J.; Stoner, C.; Meyer, F. J.

    2016-12-01

    A new world of opportunity is being thawed from the ice of the Arctic, driven by decreased persistent Arctic sea-ice cover and increases in shipping, tourism, and natural resource development. Tools that can automatically monitor key sea ice characteristics and potential oil spills are essential for safe passage in these changing waters. Synthetic aperture radar (SAR) data can be used to discriminate sea ice types and oil on the ocean surface and also for feature tracking. Additionally, SAR can image the earth through the night and most weather conditions. SAR data are volumetrically large and require significant computing power to manipulate. Algorithms designed to identify key environmental features, like oil spills, in SAR imagery require secondary processing and are computationally intensive, which can functionally limit their application in a real-time setting. Cloud processing is designed to manage big data and big data processing jobs by means of small cycles of off-site computation, eliminating up-front hardware costs. Pairing SAR data with cloud processing has allowed us to create and solidify a processing pipeline for SAR data products in the cloud to compare operational algorithms' efficiency and effectiveness when run using an Alaska Satellite Facility (ASF) defined Amazon Machine Image (AMI). The products created from this secondary processing were compared to determine which algorithm was most accurate in Arctic feature identification, and what operational conditions were required to produce the results on the ASF defined AMI. Results will be used to inform a series of recommendations to oil-spill response data managers and SAR users interested in expanding their analytical computing power.

  2. Collecting data from a sensor network in a single-board computer

    NASA Astrophysics Data System (ADS)

    Casciati, F.; Casciati, S.; Chen, Z.-C.; Faravelli, L.; Vece, M.

    2015-07-01

    The EU-FP7 project SPARTACUS, currently in progress, sees the international cooperation of several partners toward the design and implementation of satellite based asset tracking for supporting emergency management in crisis operations. Because of the emergency environment, one has to rely on low power consumption wireless communication. Therefore, the communication hardware and software must be designed to match requirements which can only be foreseen at the level of more or less likely scenarios. The latter aspect suggests a deep use of a simulator (instead of a real network of sensors) to cover extreme situations. The former remark on power consumption suggests the use of a minimal computer (Raspberry Pi) as the data collector. In this paper, the results of a broad simulation campaign are reported in order to investigate the accuracy of the received data and the global power consumption for each of the considered scenarios.

  3. Development of a solar-powered residential air conditioner: System optimization preliminary specification

    NASA Technical Reports Server (NTRS)

    Rousseau, J.; Hwang, K. C.

    1975-01-01

    Investigations aimed at the optimization of a baseline Rankine cycle solar powered air conditioner and the development of a preliminary system specification were conducted. Efforts encompassed the following: (1) investigations of the use of recuperators/regenerators to enhance the performance of the baseline system, (2) development of an off-design computer program for system performance prediction, (3) optimization of the turbocompressor design to cover a broad range of conditions and permit operation at low heat source water temperatures, (4) generation of parametric data describing system performance (COP and capacity), (5) development and evaluation of candidate system augmentation concepts and selection of the optimum approach, (6) generation of auxiliary power requirement data, (7) development of a complete solar collector-thermal storage-air conditioner computer program, (8) evaluation of the baseline Rankine air conditioner over a five day period simulating the NASA solar house operation, and (9) evaluation of the air conditioner as a heat pump.

  4. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies very much on its computing power to get work done. And, like many other large companies that find the IBM PC a reliable tool, Ames uses it for many of the same types of functions as other companies. Presentation and clarification needs demand much of graphics packages. Programming and text editing needs require simpler, more powerful packages. The storage space needed by NASA's scientists and users for the monumental amounts of data that Ames needs to keep demands the best database packages that are large and easy to use. Availability of the Micom Switching Network combines the powers of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help to continue and support the vast amounts of work done by the NASA employees.

  5. A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan

    2011-01-01

    Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storage to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify their block-device level characteristics and macrobenchmarks to reveal their filesystem level characteristics.

  6. Reducing software mass through behavior control. [of planetary roving robots

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    Attention is given to the tradeoff between communication and computation as regards a planetary rover (both these subsystems are very power-intensive, and both can be the major driver of the rover's power subsystem, and therefore the minimum mass and size of the rover). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.

  7. A Real Time Controller For Applications In Smart Structures

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian P.; Claus, Richard O.

    1990-02-01

    Research in smart structures, especially the area of vibration suppression, has warranted the investigation of advanced computing environments. Real time PC computing power has limited development of high order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of Intelligent Structure Monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high speed operation and short development cycles. The system allows for implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.

  8. Description of real-time Ada software implementation of a power system monitor for the Space Station Freedom PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Ludwig, Kimberly; Mackin, Michael; Wright, Theodore

    1991-01-01

    The Ada language software development to perform the electrical system monitoring functions for the NASA Lewis Research Center's Power Management and Distribution (PMAD) DC testbed is described. The results of the effort to implement this monitor are presented. The PMAD DC testbed is a reduced-scale prototype of the electrical power system to be used in the Space Station Freedom. The power is controlled by smart switches known as power control components (or switchgear). The power control components are currently coordinated by five Compaq 382/20e computers connected through an 802.4 local area network. One of these computers is designated as the control node with the other four acting as subsidiary controllers. The subsidiary controllers are connected to the power control components with a Mil-Std-1553 network. An operator interface is supplied by adding a sixth computer. The power system monitor algorithm comprises several functions, including periodic data acquisition, data smoothing, system performance analysis, and status reporting. Data is collected from the switchgear sensors every 100 milliseconds, then passed through a 2 Hz digital filter. System performance analysis includes power interruption and overcurrent detection. The reporting mechanism notifies an operator of any abnormalities in the system. Once per second, the system monitor provides data to the control node for further processing, such as state estimation. The system monitor required a hardware time interrupt to activate the data acquisition function. The execution time of the code was optimized using an assembly language routine. The routine allows direct vectoring of the processor to Ada language procedures that perform periodic control activities. A summary of the advantages and side effects of this technique is discussed.
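    The monitoring cycle described above (100 ms data acquisition, 2 Hz smoothing, overcurrent and power-interruption checks, and once-per-second reporting to the control node) can be illustrated with a minimal sketch. The Python loop below is only a hypothetical illustration of that cycle, not the original Ada implementation; the sensor stub, filter form, and trip threshold are assumptions.

    ```python
    import math
    import time

    SAMPLE_PERIOD_S = 0.100      # acquire switchgear data every 100 ms
    REPORT_PERIOD_S = 1.0        # report status to the control node once per second
    OVERCURRENT_LIMIT_A = 25.0   # assumed trip threshold (hypothetical)

    def read_switchgear_sensors():
        """Stub standing in for the Mil-Std-1553 sensor read (hypothetical values)."""
        return {"bus_a": 12.3, "bus_b": 9.8}

    def low_pass_2hz(previous, sample, dt=SAMPLE_PERIOD_S, fc=2.0):
        """First-order IIR approximation of the 2 Hz smoothing filter."""
        alpha = dt / (dt + 1.0 / (2.0 * math.pi * fc))
        return {k: previous.get(k, v) + alpha * (v - previous.get(k, v))
                for k, v in sample.items()}

    def monitor_loop():
        filtered = {}
        last_report = time.monotonic()
        while True:
            filtered = low_pass_2hz(filtered, read_switchgear_sensors())
            # System performance analysis: overcurrent and power-interruption checks.
            for bus, amps in filtered.items():
                if amps > OVERCURRENT_LIMIT_A:
                    print(f"ALARM: overcurrent on {bus}: {amps:.1f} A")
                elif amps <= 0.0:
                    print(f"ALARM: power interruption on {bus}")
            if time.monotonic() - last_report >= REPORT_PERIOD_S:
                print("status ->", filtered)   # data handed to the control node
                last_report = time.monotonic()
            time.sleep(SAMPLE_PERIOD_S)

    # monitor_loop() would run indefinitely on a subsidiary controller.
    ```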

  9. Computer Simulations and Literature Survey of Continuously Variable Transmissions for Use in Buses

    DOT National Transportation Integrated Search

    1981-12-01

    Numerous studies have been conducted on the concept of flywheel energy storage for buses. Flywheel systems require a continuously variable transmission (CVT) of some type to transmit power between the flywheel and the drive wheels. However, a CVT can...

  10. APPLICATION OF A FINITE-DIFFERENCE TECHNIQUE TO THE HUMAN RADIOFREQUENCY DOSIMETRY PROBLEM

    EPA Science Inventory

    A powerful finite difference numerical technique has been applied to the human radiofrequency dosimetry problem. The method possesses inherent advantages over the method of moments approach in that its implementation requires much less computer memory. Consequently, it has the ca...

  11. OASIS General Introduction.

    ERIC Educational Resources Information Center

    Stanford Univ., CA.

    Recognizing the need to balance generality and economy in system costs, the Project INFO team at Stanford University developing OASIS has sought to provide generalized and powerful computer support within the normal range of operating and analytical requirements associated with university administration. The specific design objectives of the OASIS…

  12. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  13. Future in biomolecular computation

    NASA Astrophysics Data System (ADS)

    Wimmer, E.

    1988-01-01

    Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more, including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level, and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point if an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio will become more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point how fast massively parallel systems will become easy enough to use so that new methodological developments can be pursued on such computers. Current trends show that distributed processing, such as the combination of convenient graphics workstations and powerful general-purpose supercomputers, will lead to a new style of computing in which the calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.

  14. Flight Computer Design for the Space Technology 5 (ST-5) Mission

    NASA Technical Reports Server (NTRS)

    Speer, David; Jackson, George; Raphael, Dave; Day, John H. (Technical Monitor)

    2001-01-01

    As part of NASA's New Millennium Program, the Space Technology 5 mission will validate a variety of technologies for nano-satellite and constellation mission applications. Included are: a miniaturized and low power X-band transponder, a constellation communication and navigation transceiver, a cold gas micro-thruster, two different variable emittance (thermal) controllers, flex cables for solar array power collection, autonomous groundbased constellation management tools, and a new CMOS ultra low-power, radiation-tolerant, +0.5 volt logic technology. The ST-5 focus is on small and low-power. A single-processor, multi-function flight computer will implement direct digital and analog interfaces to all of the other spacecraft subsystems and components. There will not be a distributed data system that uses a standardized serial bus such as MIL-STD-1553 or MIL-STD-1773. The flight software running on the single processor will be responsible for all real-time processing associated with: guidance, navigation and control, command and data handling (C&DH) including uplink/downlink, power switching and battery charge management, science data analysis and storage, intra-constellation communications, and housekeeping data collection and logging. As a nanosatellite trail-blazer for future constellations of up to 100 separate space vehicles, ST-5 will demonstrate a compact (single board), low power (5.5 watts) solution to the data acquisition, control, communications, processing and storage requirements that have traditionally required an entire network of separate circuit boards and/or avionics boxes. In addition to the New Millennium technologies, other major spacecraft subsystems include the power system electronics, a lithium-ion battery, triple-junction solar cell arrays, a science-grade magnetometer, a miniature spinning sun sensor, and a propulsion system.

  15. Experimental Testing and Computational Fluid Dynamics Simulation of Maple Seeds and Performance Analysis as a Wind Turbine

    NASA Astrophysics Data System (ADS)

    Holden, Jacob R.

    Descending maple seeds generate lift to slow their fall and remain aloft in a blowing wind; have the wings of these seeds evolved to descend as slowly as possible? A unique energy balance equation, experimental data, and computational fluid dynamics simulations have all been developed to explore this question from a turbomachinery perspective. The computational fluid dynamics in this work is the first to be performed in the relative reference frame. Maple seed performance has been analyzed for the first time based on principles of wind turbine analysis. Application of the Betz Limit and one-dimensional momentum theory allowed for empirical and computational power and thrust coefficients to be computed for maple seeds. It has been determined that the investigated species of maple seeds perform near the Betz limit for power conversion and thrust coefficient. The power coefficient for a maple seed is found to be in the range of 48-54% and the thrust coefficient in the range of 66-84%. From Betz theory, the stream tube area expansion of the maple seed is necessary for power extraction. Further investigation of computational solutions and mechanical analysis find three key reasons for high maple seed performance. First, the area expansion is driven by maple seed lift generation changing the fluid momentum and requiring area to increase. Second, radial flow along the seed surface is promoted by a sustained leading edge vortex that centrifuges low momentum fluid outward. Finally, the area expansion is also driven by the spanwise area variation of the maple seed imparting a radial force on the flow. These mechanisms result in a highly effective device for the purpose of seed dispersal. However, the maple seed also provides insight into fundamental questions about how turbines can most effectively change the momentum of moving fluids in order to extract useful power or dissipate kinetic energy.
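    As a rough cross-check of the coefficients quoted above, the standard wind-turbine definitions Ct = T / (0.5 rho A V^2) and Cp = P / (0.5 rho A V^3), together with the Betz limit of 16/27, can be applied to a descending seed whose thrust balances its weight. The seed mass, radius, and descent speed in the sketch below are hypothetical, so the numbers are illustrative only.

    ```python
    import math

    # Assumed (hypothetical) maple seed parameters, for illustration only.
    rho = 1.225        # air density, kg/m^3
    mass = 6.0e-5      # seed mass, kg
    g = 9.81           # gravitational acceleration, m/s^2
    radius = 0.025     # effective rotor radius swept by the wing, m
    v_descent = 0.9    # steady descent speed, m/s

    area = math.pi * radius ** 2
    thrust = mass * g                                   # steady descent: thrust = weight
    ct = thrust / (0.5 * rho * area * v_descent ** 2)   # thrust coefficient definition

    betz_limit = 16.0 / 27.0                            # ~0.593, ideal power coefficient
    cp_reported = (0.48, 0.54)                          # range quoted in the abstract

    print(f"thrust coefficient Ct ~ {ct:.2f}")
    print(f"reported Cp range {cp_reported} vs Betz limit {betz_limit:.3f}")
    ```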

  16. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to that of all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
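    The efficiency metric used in the comparison, average compute rate divided by average energy consumption rate, is easy to reproduce from benchmark logs. The samples below are hypothetical stand-ins for the benchmark output; this is not the laboratory's analysis code.

    ```python
    # Hypothetical benchmark samples: (operations completed, interval seconds, mean watts).
    samples = [
        (1.2e9, 60.0, 310.0),
        (1.1e9, 60.0, 295.0),
        (1.3e9, 60.0, 325.0),
    ]

    total_ops = sum(ops for ops, _, _ in samples)
    total_time = sum(t for _, t, _ in samples)
    total_energy_j = sum(t * w for _, t, w in samples)   # energy = power * time

    compute_rate = total_ops / total_time                # operations per second
    efficiency = total_ops / total_energy_j              # operations per joule

    print(f"compute rate: {compute_rate:.3e} ops/s")
    print(f"efficiency:   {efficiency:.3e} ops/J")
    ```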

  17. Measuring the reionization 21 cm fluctuations using clustering wedges

    NASA Astrophysics Data System (ADS)

    Raut, Dinesh; Choudhury, Tirthankar Roy; Ghara, Raghunath

    2018-03-01

    One of the main challenges in probing the reionization epoch using the redshifted 21 cm line is that the magnitude of the signal is several orders smaller than the astrophysical foregrounds. One of the methods to deal with the problem is to avoid a wedge-shaped region in the Fourier k⊥ - k∥ space which contains the signal from the spectrally smooth foregrounds. However, measuring the spherically averaged power spectrum using only modes outside this wedge (i.e. in the reionization window) leads to a bias. We provide a prescription, based on expanding the power spectrum in terms of the shifted Legendre polynomials, which can be used to compute the angular moments of the power spectrum in the reionization window. The prescription requires computation of the monopole, quadrupole, and hexadecapole moments of the power spectrum using the theoretical model under consideration and also the knowledge of the effective extent of the foreground wedge in the k⊥ - k∥ plane. One can then calculate the theoretical power spectrum in the window which can be directly compared with observations. The analysis should have implications for avoiding any bias in the parameter constraints using 21 cm power spectrum data.
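    The window-restricted moments can be illustrated with a toy calculation: keep only modes with mu above a wedge cutoff and integrate a model P(k, mu) against Legendre polynomials over the reduced range. The sketch below uses ordinary (not shifted) Legendre polynomials, an assumed cutoff, a toy spectrum, and one possible normalization convention, so it conveys the idea rather than the paper's exact prescription.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import Legendre

    def window_multipole(pk_mu, k, ell, mu_min, n_mu=512):
        """Moment of P(k, mu) using only modes with mu in [mu_min, 1]."""
        mu = np.linspace(mu_min, 1.0, n_mu)
        dmu = mu[1] - mu[0]
        leg = Legendre.basis(ell)(mu)                        # ordinary Legendre P_ell(mu)
        integrand = pk_mu(k[:, None], mu[None, :]) * leg[None, :]
        # Normalize by the window width (an illustrative convention).
        return integrand.sum(axis=1) * dmu / (1.0 - mu_min)

    def toy_pk(k, mu):
        """Assumed toy model spectrum with a mild mu^2 anisotropy."""
        return (1.0 + 0.5 * mu ** 2) * k ** -2.0

    k = np.logspace(-1.5, 0.0, 20)                  # illustrative wavenumbers
    monopole = window_multipole(toy_pk, k, ell=0, mu_min=0.5)
    quadrupole = window_multipole(toy_pk, k, ell=2, mu_min=0.5)
    print(monopole[:3])
    print(quadrupole[:3])
    ```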

  18. Description of the SSF PMAD DC testbed control system data acquisition function

    NASA Technical Reports Server (NTRS)

    Baez, Anastacio N.; Mackin, Michael; Wright, Theodore

    1992-01-01

    The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced scale representation of the end to end, sources to loads, Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system level issues in support of critical SSF program design milestones. Electrical power system control and operation issues like source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The SSF EPS control functional allocation between on-board computers and ground-based systems is evolving. Initially, ground-based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed Control System consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF ground-based control center operations. The lower level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.

  19. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  20. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
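    The claimed sequence, powering up only the memory needed for the operating system at initialization and then powering additional portions as they are allocated to the application, can be sketched as a simple flow. The bank-based interface and sizes below are hypothetical and are not drawn from the patent text.

    ```python
    class ComputeNodeMemory:
        """Illustrative model of per-bank memory power control (hypothetical API)."""

        def __init__(self, total_banks=16, bank_mb=256):
            self.bank_mb = bank_mb
            self.powered = [False] * total_banks

        def power_up_banks(self, count, start=0):
            for i in range(start, start + count):
                self.powered[i] = True          # power up only these banks
            return list(range(start, start + count))

    def initialize_node(mem):
        # During initialization, power up only the portion needed for the OS.
        return mem.power_up_banks(count=2)      # e.g., 512 MB for the operating system

    def load_application(mem, os_banks, app_mb):
        # Allocate additional banks to the application, then power them up.
        needed = -(-app_mb // mem.bank_mb)      # ceiling division
        return mem.power_up_banks(count=needed, start=len(os_banks))

    mem = ComputeNodeMemory()
    os_banks = initialize_node(mem)
    app_banks = load_application(mem, os_banks, app_mb=1024)
    print("powered banks:", os_banks + app_banks)
    ```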

  1. Corona performance of a compact 230-kV line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartier, V.L.; Blair, D.E.; Easley, M.D.

    Permitting requirements and the acquisition of new rights-of-way for transmission facilities have in recent years become increasingly difficult for most utilities, including Puget Sound Power and Light Company. In order to maintain a high degree of reliability of service while being responsive to public concerns regarding the siting of high voltage (HV) transmission facilities, Puget Power has found it necessary to rely more heavily upon the use of compact lines in franchise corridors. Compaction does, however, precipitate increased levels of audible noise (AN) and radio and TV interference (RI and TVI) due to corona on the conductors and insulator assemblies. Puget Power relies upon the Bonneville Power Administration (BPA) Corona and Field Effects computer program to calculate AN and RI for new lines. Since there was some question of the program's ability to accurately represent quiet 230-kV compact designs, a joint project was undertaken with BPA to verify the program's algorithms. Long-term measurements made on an operating Puget Power 230-kV compact line confirmed the accuracy of BPA's AN model; however, the RI measurements were much lower than predicted by the BPA computer and other programs. This paper also describes how the BPA computer program can be used to calculate the voltage needed to expose insulator assemblies to the correct electric field in single test setups in HV laboratories.

  2. Computational Analysis of Powered Lift Augmentation for the LEAPTech Distributed Electric Propulsion Wing

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of a distributed electric propulsion wing with a 40deg flap deflection has been completed using FUN3D. Two lift-augmentation power conditions were compared with the power-off configuration on the high-lift wing (40deg flap) at a 73 mph freestream flow and for a range of angles of attack from -5 degrees to 14 degrees. The computational study also included investigating the benefit of corotating versus counter-rotating propeller spin direction to powered-lift performance. The results indicate a large benefit in lift coefficient, over the entire range of angle of attack studied, by using corotating propellers that all spin counter to the wingtip vortex. For the landing condition, 73 mph, the unpowered 40deg flap configuration achieved a maximum lift coefficient of 2.3. With high-lift blowing the maximum lift coefficient increased to 5.61. Therefore, the lift augmentation is a factor of 2.4. Taking advantage of the fullspan lift augmentation at similar performance means that a wing powered with the distributed electric propulsion system requires only 42 percent of the wing area of the unpowered wing. This technology will allow wings to be 'cruise optimized', meaning that they will be able to fly closer to maximum lift over drag conditions at the design cruise speed of the aircraft.
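    The wing-area figure follows from the lift-coefficient ratio: producing the same lift at the same speed and dynamic pressure requires a wing area scaled by CL_unpowered / CL_powered. A quick arithmetic check with the values quoted above (illustrative only, not part of the FUN3D study):

    ```python
    cl_unpowered = 2.30     # max lift coefficient, 40 deg flap, power off
    cl_powered = 5.61       # max lift coefficient with high-lift blowing

    augmentation = cl_powered / cl_unpowered
    area_fraction = cl_unpowered / cl_powered   # equal lift at equal dynamic pressure

    print(f"lift augmentation factor ~ {augmentation:.2f}")      # ~2.4
    print(f"required wing area fraction ~ {area_fraction:.0%}")  # ~41-42%
    ```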

  3. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  4. Quantum Walk Schemes for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Underwood, Michael S.

    Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.

  5. Design of a reversible single precision floating point subtractor.

    PubMed

    Anantha Lakshmi, Av; Sudha, Gf

    2014-01-04

    In recent years, reversible logic has emerged as a major area of research due to its ability to reduce power dissipation, which is the main requirement in low power digital circuit design. It has wide applications such as low power CMOS design, nanotechnology, digital signal processing, communication, DNA computing and optical computing. Floating-point operations are needed very frequently in nearly all computing disciplines, and studies have shown floating-point addition/subtraction to be the most used floating-point operation. However, although a few designs exist for efficient reversible BCD subtractors, there has been no work on a reversible floating-point subtractor. This paper presents an efficient reversible single precision floating-point subtractor. The proposed design requires reversible designs of an 8-bit and a 24-bit comparator unit, an 8-bit and a 24-bit subtractor, and a normalization unit. For normalization, a 24-bit reversible leading zero detector and a 24-bit reversible shift register are implemented to shift the mantissas. To realize a reversible 1-bit comparator, two new 3x3 reversible gates are proposed. The proposed reversible 1-bit comparator is optimized in terms of the number of reversible gates used, the transistor count and the number of garbage outputs. The proposed work is analysed in terms of the number of reversible gates, garbage outputs, constant inputs and quantum costs. Using these modules, an efficient design of a reversible single precision floating-point subtractor is proposed. The proposed circuits have been simulated using ModelSim and synthesized using a Xilinx Virtex5vlx30tff665-3. The total on-chip power consumed by the proposed 32-bit reversible floating-point subtractor is 0.410 W.

  6. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    NASA Astrophysics Data System (ADS)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
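    The fourth-power scaling quoted above comes from the reconstruction step: the numbers of wavefront-sensor measurements and of actuators each grow roughly with the square of the telescope diameter, so the matrix-vector multiply at the heart of the RTC grows roughly with the fourth power. A rough, hedged estimate of what this implies, with illustrative diameters and an assumed baseline cost:

    ```python
    def rtc_scale_factor(d_new_m, d_ref_m):
        """MVM cost scales roughly as (slopes x actuators) ~ D^4."""
        return (d_new_m / d_ref_m) ** 4

    # Illustrative comparison: an 8 m class AO system vs a ~39 m ELT.
    factor = rtc_scale_factor(39.0, 8.0)
    baseline_gmacs = 10.0   # assumed giga multiply-accumulates per frame at 8 m
    print(f"compute scale factor ~ {factor:.0f}x")
    print(f"implied ELT load ~ {baseline_gmacs * factor:.0f} GMAC/frame (illustrative)")
    ```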

  7. Orthorectification by Using GPGPU Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with computing throughput and memory bandwidth far exceeding those of central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that need intensive calculation, and this interest has given rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet inexpensive hardware, and have therefore become an alternative to CPUs for such workloads; graphics chips that began as fixed-function application hardware have been transformed into modern, powerful, and programmable processors. The main difficulty is that graphics processing units use programming models unlike conventional CPU programming methods. An efficient GPU program therefore requires re-coding the algorithm around the limitations and structure of the graphics hardware, and these many-core devices cannot be programmed with traditional event-driven methods. GPUs are especially effective when the same computing steps are repeated over many data elements and high accuracy is needed, providing results quickly and accurately, whereas a CPU that performs one computation at a time under flow control is slower for such workloads. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were processed and the results compared. The GPGPU method is particularly useful for repeating the same computations on very dense data, yielding solutions quickly.

  8. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  9. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  10. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  11. Hybrid acousto-optic and digital equalization for microwave digital radio channels

    NASA Astrophysics Data System (ADS)

    Anderson, C. S.; Vanderlugt, A.

    1990-11-01

    Digital radio transmission systems use complex modulation schemes that require powerful signal-processing techniques to correct channel distortions and to minimize BERs. This paper proposes combining the computational power of acousto-optic processing and the accuracy of digital processing to produce a hybrid channel equalizer that exceeds the performance of digital equalization alone. Analysis shows that a hybrid equalizer for 256-level quadrature amplitude modulation (QAM) performs better than a digital equalizer for 64-level QAM.

  12. Parametric Study of Radiative Cooling of Solid Antihydrogen

    DTIC Science & Technology

    1989-03-01

    A computer model of a cryogenic system for storing solid antimatter is used to explore the ... radiative cooling-power requirements for long-term antimatter storage. If vacuum-chamber pressures as low as 1 torr can be reached, and the rest of the ... large set of assumptions is valid, milligram quantities of solid antimatter could be stored indefinitely at 1.5 K using cooling powers of less than a

  13. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
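    A hedged sketch of the window-and-fit idea: for a two-dimensional source at depth d, the anomaly power spectrum decays roughly as exp(-2 k d), so a straight-line fit to ln P versus wavenumber within each window yields d from the slope. The sketch below substitutes an ordinary FFT periodogram for the maximum entropy spectrum and uses a synthetic profile, so it illustrates the windowing and least-squares step rather than the MEPS method itself.

    ```python
    import numpy as np

    def depth_estimates(profile, dx, window=128, step=64, k_max=2.0):
        """Slide a window along a profile and estimate source depth from the spectrum."""
        depths = []
        for start in range(0, len(profile) - window + 1, step):
            segment = profile[start:start + window] * np.hanning(window)
            power = np.abs(np.fft.rfft(segment)) ** 2
            k = 2.0 * np.pi * np.fft.rfftfreq(window, d=dx)   # angular wavenumber
            keep = (k > 0) & (k <= k_max)                     # fit only low wavenumbers
            slope, _ = np.polyfit(k[keep], np.log(power[keep]), 1)
            depths.append(-slope / 2.0)                       # P(k) ~ exp(-2 k d)
        return np.array(depths)

    # Synthetic profile whose spectrum decays like exp(-2 k d) with d = 2 km.
    n, dx, true_depth = 2048, 0.25, 2.0
    k_full = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    rng = np.random.default_rng(0)
    spectrum = np.exp(-k_full * true_depth) * np.exp(1j * rng.uniform(0, 2 * np.pi, k_full.size))
    profile = np.fft.irfft(spectrum, n)
    print(depth_estimates(profile, dx).mean(), "km (true depth 2.0 km)")
    ```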

  14. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help to determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range between a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small size, low power and high performing computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  15. A cyber infrastructure for the SKA Telescope Manager

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out system diagnosis and collecting monitoring and control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software defined networking, power, storage abstractions, and high level, state of the art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused towards the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Liping; Zhu, Fulong, E-mail: zhufulong@hust.edu.cn; Duan, Ke

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
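    The image-processing chain described above (filtering, binarization, contour extraction, then summing the light intensity of the diffraction orders) can be sketched with standard OpenCV calls. The thresholds, minimum spot area, and file name below are assumptions; this is not the authors' software, and mapping the extracted intensities to ultrasonic power would still require a calibration step.

    ```python
    import cv2
    import numpy as np

    def diffraction_order_intensities(image_path, min_area=50):
        """Return total pixel intensity inside each detected diffraction spot (OpenCV 4.x)."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        smoothed = cv2.GaussianBlur(gray, (5, 5), 0)                     # filtering
        _, binary = cv2.threshold(smoothed, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)          # contour extraction
        intensities = []
        for contour in contours:
            if cv2.contourArea(contour) < min_area:
                continue
            mask = np.zeros_like(gray)
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
            intensities.append(float(gray[mask == 255].sum()))
        return sorted(intensities, reverse=True)

    # Usage with a hypothetical image file; a calibration curve would then relate
    # the diffraction-order intensities to ultrasonic power.
    # print(diffraction_order_intensities("diffraction_pattern.png"))
    ```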

  17. A Systematic Determination of Skill and Simulator Requirements for Airline Transport Pilot Certification.

    DTIC Science & Technology

    1985-03-01

    scene contents should provide the needed information simultaneously in each perspective as prioritized. For the others, the requirement is that ... turn the airplane using nosewheel steering until lineup is accomplished. Minimize side loads. (3) Apply forward elevator pressure to ensure positive ... simultaneously advancing the power toward the computed takeoff setting. Set final takeoff thrust by approximately 60 knots. (6) As the airplane accelerates, keep

  18. Digital optical processing of optical communications: towards an Optical Turing Machine

    NASA Astrophysics Data System (ADS)

    Touch, Joe; Cao, Yinwen; Ziyadi, Morteza; Almaiman, Ahmed; Mohajerin-Ariaei, Amirhossein; Willner, Alan E.

    2017-01-01

    Optical computing is needed to support Tb/s in-network processing in a way that unifies communication and computation using a single data representation that supports in-transit network packet processing, security, and big data filtering. Support for optical computation of this sort requires leveraging the native properties of optical wave mixing to enable computation and switching for programmability. As a consequence, data must be encoded digitally as phase (M-PSK), semantics-preserving regeneration is the key to high-order computation, and data processing at Tb/s rates requires mixing. Experiments have demonstrated viable approaches to phase squeezing and power restoration. This work led our team to develop the first serial, optical Internet hop-count decrement, and to design and simulate optical circuits for calculating the Internet checksum and multiplexing Internet packets. The current exploration focuses on limited-lookback computational models to reduce the need for permanent storage and hybrid nanophotonic circuits that combine phase-aligned comb sources, non-linear mixing, and switching on the same substrate to avoid the macroscopic effects that hamper benchtop prototypes.

  19. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  20. Energy harvesting for dielectric elastomer sensing

    NASA Astrophysics Data System (ADS)

    Anderson, Iain A.; Illenberger, Patrin; O'Brien, Ben M.

    2016-04-01

    Soft and stretchy dielectric elastomer (DE) sensors can measure large strains on robotic devices and people. DE strain measurement requires electric energy to run the sensors. Energy is also required for information processing and telemetering of data to phone or computer. Batteries are expensive and recharging is inconvenient. One solution is to harvest energy from the strains that the sensor is exposed to. For this to work the harvester must also be wearable, soft, unobtrusive and profitable from the energy perspective; with more energy harvested than used for strain measurement. A promising way forward is to use the DE sensor as its own energy harvester. Our study indicates that it is feasible for a basic DE sensor to provide its own power to drive its own sensing signal. However telemetry and computation that are additional to this will require substantially more power than the sensing circuit. A strategy would involve keeping the number of Bluetooth data chirps low during the entire period of energy harvesting and to limit transmission to a fraction of the total time spent harvesting energy. There is much still to do to balance the energy budget. This will be a challenge but when we succeed it will open the door to autonomous DE multi-sensor systems without the requirement for battery recharge.

  1. Efficient Sample Delay Calculation for 2-D and 3-D Ultrasound Imaging.

    PubMed

    Ibrahim, Aya; Hager, Pascal A; Bartolini, Andrea; Angiolini, Federico; Arditi, Marcel; Thiran, Jean-Philippe; Benini, Luca; De Micheli, Giovanni

    2017-08-01

    Ultrasound imaging is a reference medical diagnostic technique, thanks to its blend of versatility, effectiveness, and moderate cost. The core computation of all ultrasound imaging methods is based on simple formulae, except for those required to calculate acoustic propagation delays with high precision and throughput. Unfortunately, advanced three-dimensional (3-D) systems require the calculation or storage of billions of such delay values per frame, which is a challenge. In 2-D systems, this requirement can be four orders of magnitude lower, but efficient computation is still crucial in view of low-power implementations that can be battery-operated, enabling usage in numerous additional scenarios. In this paper, we explore two smart designs of the delay generation function. To quantify their hardware cost, we implement them on FPGA and study their footprint and performance. We evaluate how these architectures scale to different ultrasound applications, from a low-power 2-D system to a next-generation 3-D machine. When using numerical approximations, we demonstrate the ability to generate delay values with sufficient throughput to support 10 000-channel 3-D imaging at up to 30 fps while using 63% of a Virtex 7 FPGA, requiring 24 MB of external memory accessed at about 32 GB/s bandwidth. Alternatively, with similar FPGA occupation, we show an exact calculation method that reaches 24 fps on 1225-channel 3-D imaging and does not require external memory at all. Both designs can be scaled to use a negligible amount of resources for 2-D imaging in low-power applications and for ultrafast 2-D imaging at hundreds of frames per second.
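    The underlying delay formula is purely geometric: the propagation delay from a focal point to a transducer element is the path length divided by the speed of sound, with transmit and receive paths summed in a full delay-and-sum beamformer. The brute-force reference calculation below uses an assumed 32 x 32 element array and sound speed, and is only a sketch of the quantity that the paper's optimized architectures generate efficiently.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 1540.0   # m/s, typical soft-tissue value (assumed)

    def receive_delays(element_xyz, focal_points_xyz, c=SPEED_OF_SOUND):
        """Receive-path delay (seconds) from every focal point to every element."""
        # element_xyz: (n_elements, 3); focal_points_xyz: (n_points, 3)
        diff = focal_points_xyz[:, None, :] - element_xyz[None, :, :]
        return np.linalg.norm(diff, axis=-1) / c      # shape (n_points, n_elements)

    # Assumed 32 x 32 element 2-D array with 300 micron pitch, imaging two focal points.
    pitch = 300e-6
    xs = (np.arange(32) - 15.5) * pitch
    ex, ey = np.meshgrid(xs, xs)
    elements = np.column_stack([ex.ravel(), ey.ravel(), np.zeros(ex.size)])
    focus = np.array([[0.0, 0.0, 0.03], [0.005, 0.0, 0.04]])   # 30 mm and 40 mm deep

    delays = receive_delays(elements, focus)
    print(delays.shape, delays.min(), delays.max())
    ```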

  2. Utilization of Virtual Server Technology in Mission Operations

    NASA Technical Reports Server (NTRS)

    Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  3. Integrated solar energy system optimization

    NASA Astrophysics Data System (ADS)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  4. Virtualization in the Operations Environments

    NASA Technical Reports Server (NTRS)

    Pitts, Lee; Lankford, Kim; Felton, Larry; Pruitt, Robert

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  5. Dose commitments due to radioactive releases from nuclear power plant sites: Methodology and data base. Supplement 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, D.A.

    1996-06-01

    This manual describes a dose assessment system used to estimate the population or collective dose commitments received via both airborne and waterborne pathways by persons living within a 2- to 80-kilometer region of a commercial operating power reactor for a specific year of effluent releases. Computer programs, data files, and utility routines are included which can be used in conjunction with an IBM or compatible personal computer to produce the required dose commitments and their statistical distributions. In addition, maximum individual airborne and waterborne dose commitments are estimated and compared to 10 CFR Part 50, Appendix I, design objectives. This supplement is the last report in the NUREG/CR-2850 series.

  6. 49 CFR 395.16 - Electronic on-board recording devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... transfer through wired and wireless methods to portable computers used by roadside safety assurance... the results of power-on self-tests and diagnostic error codes. (e) Date and time. (1) The date and... part. Wireless communication information interchange methods must comply with the requirements of the...

  7. Computer aided airship design

    NASA Technical Reports Server (NTRS)

    Davis, S. J.; Rosenstein, H.

    1975-01-01

    The Comprehensive Airship Sizing and Performance Computer Program (CASCOMP) is described which was developed and used in the design and evaluation of advanced lighter-than-air (LTA) craft. The program defines design details such as engine size and number, component weight buildups, required power, and the physical dimensions of airships which are designed to meet specified mission requirements. The program is used in a comparative parametric evaluation of six advanced lighter-than-air concepts. The results indicate that fully buoyant conventional airships have the lightest gross lift required when designed for speeds less than 100 knots and the partially buoyant concepts are superior above 100 knots. When compared on the basis of specific productivity, which is a measure of the direct operating cost, the partially buoyant lifting body/tilting prop-rotor concept is optimum.

  8. Modelling switching-time effects in high-frequency power conditioning networks

    NASA Technical Reports Server (NTRS)

    Owen, H. A.; Sloane, T. H.; Rimer, B. H.; Wilson, T. G.

    1979-01-01

    Power transistor networks which switch large currents in highly inductive environments are beginning to find application in the hundred kilohertz switching frequency range. Recent developments in the fabrication of metal-oxide-semiconductor field-effect transistors in the power device category have enhanced the movement toward higher switching frequencies. Models for switching devices and of the circuits in which they are imbedded are required to properly characterize the mechanisms responsible for turning on and turning off effects. Easily interpreted results in the form of oscilloscope-like plots assist in understanding the effects of parametric studies using topology oriented computer-aided analysis methods.

  9. Analysis of self-oscillating dc-to-dc converters

    NASA Technical Reports Server (NTRS)

    Burger, P.

    1974-01-01

    The basic operational characteristics of dc-to-dc converters are analyzed along with the basic physical characteristics of power converters. A simple class of dc-to-dc power converters is chosen which could satisfy any set of operating requirements, and three different controlling methods in this class are described in detail. Necessary conditions for the stability of these converters are measured through analog computer simulation whose curves are related to other operational characteristics, such as ripple and regulation. Further research is suggested for the solution of absolute stability and efficient physical design of this class of power converters.

  10. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  11. EUV/soft x-ray spectra for low B neutron stars

    NASA Technical Reports Server (NTRS)

    Romani, Roger W.; Rajagopal, Mohan; Rogers, Forrest J.; Iglesias, Carlos A.

    1995-01-01

    Recent ROSAT and EUVE detections of spin-powered neutron stars suggest that many emit 'thermal' radiation, peaking in the EUV/soft X-ray band. These data constrain the neutron stars' thermal history, but interpretation requires comparison with model atmosphere computations, since emergent spectra depend strongly on the surface composition and magnetic field. As recent opacity computations show substantial change to absorption cross sections at neutron star photospheric conditions, we report here on new model atmosphere computations employing such data. The results are compared with magnetic atmosphere models and applied to PSR J0437-4715, a low field neutron star.

  12. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  13. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  14. Source Listings for Computer Code SPIRALI Incompressible, Turbulent Spiral Grooved Cylindrical and Face Seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.; Shapiro, Wilbur

    2005-01-01

    This is the source listing of the computer code SPIRALI which predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures.

  15. Computational modeling of Radioisotope Thermoelectric Generators (RTG) for interplanetary and deep space travel

    NASA Astrophysics Data System (ADS)

    Nejat, Cyrus; Nejat, Narsis; Nejat, Najmeh

    2014-06-01

    This research forms part of Narsis Nejat's Master of Science thesis at Shiraz University. Its goal is a computer model that evaluates the thermal power, electrical power, and emitted/absorbed dose and dose rate of static Radioisotope Thermoelectric Generators (RTGs). The work comprises a survey of RTG system types and, in particular, of RTG fuels derived from both natural and artificial isotopes; calculation of the permissible dose for the selected radioisotopes; and conceptual design modeling that compares several NASA-built RTGs with the project's computer model, noting its strengths and weaknesses for simulation use in the nuclear industry. Heat is converted to electricity in RTGs by two principal methods, static conversion and dynamic conversion; the model developed here addresses static conversion. When compared with the SNAP-3, SNAP-19, MHW, and GPHS RTGs, the model gives good approximations of electrical power, efficiency, specific power, and the fuel mass required for a given mission type.
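
    The static-conversion physics behind such a model reduces, in its simplest form, to exponential decay of the fuel's thermal output multiplied by a thermoelectric conversion efficiency. The sketch below is a generic illustration with assumed values (an 87.7-year half-life approximating Pu-238, and placeholder thermal power and efficiency), not the thesis model:

    ```python
    import math

    def rtg_power(t_years, p_thermal_bol_w=2400.0, half_life_years=87.7, efficiency=0.066):
        """Electrical power of a static RTG at time t (generic model, illustrative values).

        p_thermal_bol_w  -- assumed beginning-of-life thermal power of the heat source (W)
        half_life_years  -- half-life of the fuel (87.7 y approximates Pu-238)
        efficiency       -- assumed constant thermoelectric conversion efficiency
        """
        p_thermal = p_thermal_bol_w * math.exp(-math.log(2) * t_years / half_life_years)
        return efficiency * p_thermal

    for t in (0, 5, 10, 20):
        print(f"t = {t:2d} y: {rtg_power(t):6.1f} W electric")
    ```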

  16. System Design Techniques for Reducing the Power Requirements of Advanced life Support Systems

    NASA Technical Reports Server (NTRS)

    Finn, Cory; Levri, Julie; Pawlowski, Chris; Crawford, Sekou; Luna, Bernadette (Technical Monitor)

    2000-01-01

    The high power requirement associated with overall operation of regenerative life support systems is a critical technological challenge. Optimization of individual processors alone will not be sufficient to produce an optimized system. System studies must be used in order to improve the overall efficiency of life support systems. Current research efforts at NASA Ames Research Center are aimed at developing approaches for reducing system power and energy usage in advanced life support systems. System energy integration and energy reuse techniques are being applied to advanced life support, in addition to advanced control methods for efficient distribution of power and thermal resources. An overview of current results of this work will be presented. The development of integrated system designs that reuse waste heat from sources such as crop lighting and solid waste processing systems will reduce overall power and cooling requirements. Using an energy integration technique known as Pinch analysis, system heat exchange designs are being developed that match hot and cold streams according to specific design principles. For various designs, the potential savings for power, heating and cooling are being identified and quantified. The use of state-of-the-art control methods for distribution of resources, such as system cooling water or electrical power, will also reduce overall power and cooling requirements. Control algorithms are being developed which dynamically adjust the use of system resources by the various subsystems and components in order to achieve an overall goal, such as smoothing of power usage and/or heat rejection profiles, while maintaining adequate reserves of food, water, oxygen, and other consumables, and preventing excessive build-up of waste materials. Reductions in the peak loading of the power and thermal systems will lead to lower overall requirements. Computer simulation models are being used to test various control system designs.
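
    As an illustration of the Pinch analysis step mentioned above, the problem-table heat cascade can be computed in a few lines: stream temperatures are shifted by half the minimum approach temperature, net heat surpluses are summed over temperature intervals, and the most negative point of the cascade sets the minimum heating utility. The stream data and parameter values below are invented placeholders, not the life support system's actual streams:

    ```python
    # Minimal problem-table (heat cascade) sketch for Pinch analysis -- illustrative data only.
    DT_MIN = 10.0   # assumed minimum approach temperature (K)

    # Each stream: (kind, supply T (C), target T (C), heat-capacity flowrate CP (kW/K))
    streams = [
        ("hot",  180.0,  60.0, 2.0),   # e.g. waste heat from solid waste processing
        ("hot",  130.0,  40.0, 4.0),
        ("cold",  30.0, 120.0, 3.0),   # e.g. water heating
        ("cold",  60.0, 160.0, 3.0),
    ]

    def shifted(kind, t):
        """Shift hot streams down and cold streams up by half of DT_MIN."""
        return t - DT_MIN / 2 if kind == "hot" else t + DT_MIN / 2

    def heat_cascade(streams):
        temps = sorted({shifted(k, t) for k, ts, tt, cp in streams for t in (ts, tt)},
                       reverse=True)
        surplus_per_interval = []
        for hi, lo in zip(temps, temps[1:]):
            net_cp = 0.0
            for kind, ts, tt, cp in streams:
                top = max(shifted(kind, ts), shifted(kind, tt))
                bot = min(shifted(kind, ts), shifted(kind, tt))
                if top >= hi and bot <= lo:            # stream spans this whole interval
                    net_cp += cp if kind == "hot" else -cp
            surplus_per_interval.append(net_cp * (hi - lo))
        cascade, running = [0.0], 0.0
        for q in surplus_per_interval:
            running += q
            cascade.append(running)
        q_hot_min = -min(cascade)                      # heating needed to keep cascade >= 0
        q_cold_min = cascade[-1] + q_hot_min           # heat rejected at the bottom
        return q_hot_min, q_cold_min

    print(heat_cascade(streams))   # -> (20.0, 50.0): minimum hot and cold utility in kW
    ```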

  17. SOSPAC- SOLAR SPACE POWER ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Selcuk, M. K.

    1994-01-01

    The Solar Space Power Analysis Code, SOSPAC, was developed to examine the solar thermal and photovoltaic power generation options available for a satellite or spacecraft in low earth orbit. SOSPAC is a preliminary systems analysis tool and enables the engineer to compare the areas, weights, and costs of several candidate electric and thermal power systems. The configurations studied include photovoltaic arrays and parabolic dish systems to produce electricity only, and in various combinations to provide both thermal and electric power. SOSPAC has been used for comparison and parametric studies of proposed power systems for the NASA Space Station. The initial requirements are projected to be about 40 kW of electrical power, and a similar amount of thermal power with temperatures above 1000 degrees Centigrade. For objects in low earth orbit, the aerodynamic drag caused by suitably large photovoltaic arrays is very substantial. Smaller parabolic dishes can provide thermal energy at a collection efficiency of about 80%, but at increased cost. SOSPAC allows an analysis of cost and performance factors of five hybrid power generating systems. Input includes electrical and thermal power requirements, sun and shade durations for the satellite, and unit weight and cost for subsystems and components. Performance equations of the five configurations are derived, and the output tabulates total weights of the power plant assemblies, area of the arrays, efficiencies, and costs. SOSPAC is written in FORTRAN IV for batch execution and has been implemented on an IBM PC computer operating under DOS with a central memory requirement of approximately 60K of 8 bit bytes. This program was developed in 1985.

  18. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
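
    One way to picture the directive mechanism described in the abstract is a scoped marker in the application that lowers selected component power states for the duration of a phase and restores them afterwards. The sketch below is an invented illustration of that pattern (the API and component names are hypothetical), not the patented implementation:

    ```python
    # Illustrative sketch of directive-driven power reduction (invented API).
    import time
    from contextlib import contextmanager

    POWER_STATES = {"cpu": "full", "memory": "full", "network": "full"}

    def set_power(component, state):
        POWER_STATES[component] = state
        print(f"{component} -> {state}")

    @contextmanager
    def power_directive(**reduced):
        """Reduce the named components for the duration of an application phase."""
        for component, state in reduced.items():
            set_power(component, state)
        try:
            yield
        finally:
            for component in reduced:
                set_power(component, "full")

    # An I/O-dominated phase: the directive lowers CPU and memory power while it runs.
    with power_directive(cpu="low", memory="low"):
        time.sleep(0.01)          # stand-in for the phase's real work
    ```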

  19. High power communication satellites power systems study

    NASA Astrophysics Data System (ADS)

    Josloff, Allan T.; Peterson, Jerry R.

    1995-01-01

    This paper discusses a planned study to evaluate the commercial attractiveness of high power communication satellites and assesses the attributes of both conventional photovoltaic and reactor power systems. These high power satellites can play a vital role in assuring availability of universally accessible, wide bandwidth communications, for high definition TV, super computer networks and other services. Satellites are ideally suited to provide the wide bandwidths and data rates required and are unique in the ability to provide services directly to the users. As new or relocated markets arise, satellites offer a flexibility that conventional distribution services cannot match, and it is no longer necessary to be near population centers to take advantage of the telecommunication revolution. The geopolitical implications of these substantially enhanced communications capabilities can be significant.

  20. A System Architecture for Efficient Transmission of Massive DNA Sequencing Data.

    PubMed

    Sağiroğlu, Mahmut Şamil; Külekci, M Oğuzhan

    2017-11-01

    The DNA sequencing data analysis pipelines require significant computational resources. In that sense, cloud computing infrastructures appear as a natural choice for this processing. However, the first practical difficulty in reaching the cloud computing services is the transmission of the massive DNA sequencing data from where they are produced to where they will be processed. The daily practice here begins with compressing the data in FASTQ file format, and then sending these data via fast data transmission protocols. In this study, we address the weaknesses in that daily practice and present a new system architecture that incorporates the computational resources available on the client side while dynamically adapting itself to the available bandwidth. Our proposal considers the real-life scenarios, where the bandwidth of the connection between the parties may fluctuate, and also the computing power on the client side may be of any size ranging from moderate personal computers to powerful workstations. The proposed architecture aims at utilizing both the communication bandwidth and the computing resources for satisfying the ultimate goal of reaching the results as early as possible. We present a prototype implementation of the proposed architecture, and analyze several real-life cases, which provide useful insights for the sequencing centers, especially on deciding when to use a cloud service and in what conditions.

  1. A Test of Thick-Target Nonuniform Ionization as an Explanation for Breaks in Solar Flare Hard X-Ray Spectra

    NASA Technical Reports Server (NTRS)

    Holman, Gordon; Dennis, Brian R.; Tolbert, Anne K.; Schwartz, Richard

    2010-01-01

    Solar nonthermal hard X-ray (HXR) flare spectra often cannot be fitted by a single power law, but rather require a downward break in the photon spectrum. A possible explanation for this spectral break is nonuniform ionization in the emission region. We have developed a computer code to calculate the photon spectrum from electrons with a power-law distribution injected into a thick-target in which the ionization decreases linearly from 100% to zero. We use the bremsstrahlung cross-section from Haug (1997), which closely approximates the full relativistic Bethe-Heitler cross-section, and compare photon spectra computed from this model with those obtained by Kontar, Brown and McArthur (2002), who used a step-function ionization model and the Kramers approximation to the cross-section. We find that for HXR spectra from a target with nonuniform ionization, the difference (Delta-gamma) between the power-law indexes above and below the break has an upper limit between approximately 0.2 and 0.7 that depends on the power-law index delta of the injected electron distribution. A broken power-law spectrum with a higher value of Delta-gamma cannot result from nonuniform ionization alone. The model is applied to spectra obtained around the peak times of 20 flares observed by the Ramaty High Energy Solar Spectroscopic Imager (RHESSI) from 2002 to 2004 to determine whether thick-target nonuniform ionization can explain the measured spectral breaks. A Monte Carlo method is used to determine the uncertainties of the best-fit parameters, especially on Delta-gamma. We find that 15 of the 20 flare spectra require a downward spectral break and that at least 6 of these could not be explained by nonuniform ionization alone because they had values of Delta-gamma with less than a 2.5% probability of being consistent with the computed upper limits from the model. The remaining 9 flare spectra, based on this criterion, are consistent with the nonuniform ionization model.

  2. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
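
    For the simplest case the article covers (a two-arm, individually randomized design with a continuous outcome), the simulation approach can be reproduced in a few lines. The sketch below uses Python rather than the R and Stata code provided with the article, and the effect size and sample size are illustrative:

    ```python
    import numpy as np
    from scipy import stats

    def simulated_power(n_per_arm, effect, sd=1.0, alpha=0.05, n_sim=2000, seed=1):
        """Estimate power for a two-arm trial by repeated simulation and t-testing."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            control = rng.normal(0.0, sd, n_per_arm)
            treated = rng.normal(effect, sd, n_per_arm)
            _, p = stats.ttest_ind(treated, control)
            rejections += p < alpha
        return rejections / n_sim

    print(simulated_power(n_per_arm=64, effect=0.5))  # roughly 0.80 for a 0.5 SD effect
    ```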

  3. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447

  4. Biophysics and systems biology.

    PubMed

    Noble, Denis

    2010-03-13

    Biophysics at the systems level, as distinct from molecular biophysics, acquired its most famous paradigm in the work of Hodgkin and Huxley, who integrated their equations for the nerve impulse in 1952. Their approach has since been extended to other organs of the body, notably including the heart. The modern field of computational biology has expanded rapidly during the first decade of the twenty-first century and, through its contribution to what is now called systems biology, it is set to revise many of the fundamental principles of biology, including the relations between genotypes and phenotypes. Evolutionary theory, in particular, will require re-assessment. To succeed in this, computational and systems biology will need to develop the theoretical framework required to deal with multilevel interactions. While computational power is necessary, and is forthcoming, it is not sufficient. We will also require mathematical insight, perhaps of a nature we have not yet identified. This article is therefore also a challenge to mathematicians to develop such insights.

  5. Biophysics and systems biology

    PubMed Central

    Noble, Denis

    2010-01-01

    Biophysics at the systems level, as distinct from molecular biophysics, acquired its most famous paradigm in the work of Hodgkin and Huxley, who integrated their equations for the nerve impulse in 1952. Their approach has since been extended to other organs of the body, notably including the heart. The modern field of computational biology has expanded rapidly during the first decade of the twenty-first century and, through its contribution to what is now called systems biology, it is set to revise many of the fundamental principles of biology, including the relations between genotypes and phenotypes. Evolutionary theory, in particular, will require re-assessment. To succeed in this, computational and systems biology will need to develop the theoretical framework required to deal with multilevel interactions. While computational power is necessary, and is forthcoming, it is not sufficient. We will also require mathematical insight, perhaps of a nature we have not yet identified. This article is therefore also a challenge to mathematicians to develop such insights. PMID:20123750

  6. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  7. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, James H.; Cox, Philip; Harrington, William J

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel containment. PROJECT OVERVIEW The University of North Florida (UNF), with project partner the University of Florida, recently completed the Department of Energy (DOE) project entitled “Advanced Direct Methanol Fuel Cell for Mobile Computing”. The primary objective of the project was to advance portable fuel cell system technology towards the commercial targets as laid out in the DOE R&D roadmap by developing a 20-watt, direct methanol fuel cell (DMFC), portable power supply based on the UNF innovative “passive water recovery” MEA. Extensive component, sub-system, and system development and testing was undertaken to meet the rigorous demands of the consumer electronic application. Numerous brassboard (nonpackaged) systems were developed to optimize the integration process and facilitate control algorithm development. The culmination of the development effort was a fully-integrated, DMFC, power supply (referred to as DP4). The project goals were 40 W/kg for specific power, 55 W/l for power density, and 575 Whr/l for energy density. It should be noted that the specific power and power density were for the power section only, and did not include the hybrid battery. The energy density is based on three, 200 ml, fuel cartridges, and also did not include the hybrid battery. The results show that the DP4 system configured without the methanol concentration sensor exceeded all performance goals, achieving 41.5 W/kg for specific power, 55.3 W/l for power density, and 623 Whr/l for energy density.
During the project, the DOE revised its technical targets, and the definition of many of these targets, for the portable power application. With this revision, specific power, power density, specific energy (Whr/kg), and energy density are based on the total system, including fuel tank, fuel, and hybridization battery. Fuel capacity is not defined, but the same value is required for all calculations. Test data showed that the DP4 exceeded all 2011 Technical Status values; for example, the DP4 energy density was 373 Whr/l versus the DOE 2011 status of 200 Whr/l. For the DOE 2013 Technical Goals, the operation time was increased from 10 hours to 14.3 hours. Under these conditions, the DP4 closely approached or surpassed the technical targets; for example, the DP4 achieved 468 Whr/l versus the goal of 500 Whr/l. Thus, UNF has successfully met the project goals. A fully-operational, 20-watt DMFC power supply was developed based on the UNF passive water recovery MEA. The power supply meets the project performance goals and advances portable power technology towards the commercialization targets set by the DOE.

  8. Structural Analyses of Stirling Power Convertor Heater Head for Long-Term Reliability, Durability, and Performance

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.; Shah, Ashwin; Arya, Vinod K.; Krause, David L.; Bartolotta, Paul A.

    2002-01-01

    Deep-space missions require onboard electric power systems with reliable design lifetimes of up to 10 yr and beyond. A high-efficiency Stirling radioisotope power system is a likely candidate for future deep-space missions and Mars rover applications. To ensure ample durability, the structurally critical heater head of the Stirling power convertor has undergone extensive computational analyses of operating temperatures (up to 650 C), stresses, and creep resistance of the thin-walled Inconel 718 bill of material. Durability predictions are presented in terms of the probability of survival. A benchmark structural testing program has commenced to support the analyses. This report presents the current status of durability assessments.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smed, T.

    Traditional eigenvalue sensitivity for power systems requires the formulation of the system matrix, which lacks sparsity. In this paper, a new sensitivity analysis, derived for a sparse formulation, is presented. Variables that are computed as intermediate results in established eigenvalue programs for power systems, but not used further, are given a new interpretation. The effect of virtually any control action can be assessed based on a single eigenvalue-eigenvector calculation. In particular, the effect of active and reactive power modulation can be found as a multiplication of two or three complex numbers. The method is illustrated in an example for a large power system when applied to the control design for an HVDC-link.
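
    The quantity exploited here is the standard first-order eigenvalue sensitivity. In the notation of the sketch below (generic notation, not necessarily the paper's), lambda_i is an eigenvalue of the system matrix A(p), phi_i and psi_i are its right and left eigenvectors, and p is any control parameter:

    ```latex
    % First-order sensitivity of eigenvalue \lambda_i to a parameter p,
    % given right/left eigenvectors \phi_i and \psi_i of A(p).
    \[
      A(p)\,\phi_i = \lambda_i \phi_i, \qquad
      \psi_i^{H} A(p) = \lambda_i \psi_i^{H}, \qquad
      \frac{\partial \lambda_i}{\partial p}
        = \frac{\psi_i^{H} \,\dfrac{\partial A}{\partial p}\, \phi_i}{\psi_i^{H}\phi_i}.
    \]
    ```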

  10. Inductance effects in the high-power transmitter crowbar system

    NASA Technical Reports Server (NTRS)

    Daeges, J.; Bhanji, A.

    1987-01-01

    The effective protection of a klystron in a high-power transmitter requires the diversion of all stored energy in the protected circuit through an alternate low-impedance path, the crowbar, such that less than 1 joule of energy is dumped into the klystron during an internal arc. A scheme of adding a bypass inductor in the crowbar-protected circuit of the high-power transmitter was tested using computer simulations and actual measurements under a test load. Although this scheme has several benefits, including less power dissipation in the resistor, the tests show that the presence of inductance in the portion of the circuit to be protected severely hampers effective crowbar operation.

  11. Parallel matrix multiplication on the Connection Machine

    NASA Technical Reports Server (NTRS)

    Tichy, Walter F.

    1988-01-01

    Matrix multiplication is a computation and communication intensive problem. Six parallel algorithms for matrix multiplication on the Connection Machine are presented and compared with respect to their performance and processor usage. For n by n matrices, the algorithms have theoretical running times of O(n^2 log n), O(n log n), O(n), and O(log n), and require n, n^2, n^2, and n^3 processors, respectively. With careful attention to communication patterns, the theoretically predicted runtimes can indeed be achieved in practice. The parallel algorithms illustrate the tradeoffs between performance, communication cost, and processor usage.

  12. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
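
    As a concrete companion to the tutorial, Cohen's kappa for two raters can be computed directly from its definition (observed agreement corrected for chance agreement). The sketch below is an independent Python illustration with made-up ratings, not the SPSS/R syntax from the paper:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters assigning categorical codes to the same items."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        categories = set(freq_a) | set(freq_b)
        expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
        return (observed - expected) / (1 - expected)

    a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
    b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
    print(round(cohens_kappa(a, b), 3))   # -> 0.5 for these toy ratings
    ```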

  13. NOSTOS: a paper-based ubiquitous computing healthcare environment to support data capture and collaboration.

    PubMed

    Bång, Magnus; Larsson, Anders; Eriksson, Henrik

    2003-01-01

    In this paper, we present a new approach to clinical workplace computerization that departs from the window-based user interface paradigm. NOSTOS is an experimental computer-augmented work environment designed to support data capture and teamwork in an emergency room. NOSTOS combines multiple technologies, such as digital pens, walk-up displays, headsets, a smart desk, and sensors to enhance an existing paper-based practice with computer power. The physical interfaces allow clinicians to retain mobile paper-based collaborative routines and still benefit from computer technology. The requirements for the system were elicited from situated workplace studies. We discuss the advantages and disadvantages of augmenting a paper-based clinical work environment.

  14. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    PubMed

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
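
    The probabilistic-computing idea that saves hardware can be illustrated outside the FPGA context: in unipolar stochastic computing, a value in [0, 1] is encoded as the probability of a 1 in a bit stream, so multiplication collapses to a bitwise AND of two independent streams. The following Python sketch demonstrates that primitive only; it is not the authors' reservoir design:

    ```python
    import random

    def to_stream(value, length, rng):
        """Encode a value in [0, 1] as a unipolar stochastic bit stream."""
        return [1 if rng.random() < value else 0 for _ in range(length)]

    def stochastic_multiply(x, y, length=100_000, seed=0):
        """Multiply two values in [0, 1] using one AND operation per bit pair."""
        rng = random.Random(seed)
        sx, sy = to_stream(x, length, rng), to_stream(y, length, rng)
        product_stream = [a & b for a, b in zip(sx, sy)]
        return sum(product_stream) / length

    print(stochastic_multiply(0.6, 0.5))   # close to 0.30, with stochastic error
    ```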

  15. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    PubMed Central

    Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876

  16. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers due to the following: Extreme levels of parallelism due to multicore processors; An increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; Complex memory hierarchies and costly data movement in both energy and performance; Heterogeneous system architectures (mixing CPUs, GPUs, etc.); and Conflicting goals of performance, resilience, and power requirements.

  17. 10 CFR 73.55 - Requirements for physical protection of licensed activities in nuclear power reactors against...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... integration of systems, technologies, programs, equipment, supporting processes, and implementing procedures...-in-depth methodologies to minimize the potential for an insider to adversely affect, either directly... protection of digital computer and communication systems and networks. (ii) Site-specific conditions that...

  18. 10 CFR 73.23 - Protection of Safeguards Information-Modified Handling: Specific requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... Information not classified as Restricted Data or National Security Information related to physical protection... stored in a locked file drawer or cabinet. (3) A mobile device (such as a laptop computer) may also be... of intrusion detection devices, alarm assessment equipment, alarm system wiring, emergency power...

  19. 10 CFR 73.23 - Protection of Safeguards Information-Modified Handling: Specific requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... Information not classified as Restricted Data or National Security Information related to physical protection... stored in a locked file drawer or cabinet. (3) A mobile device (such as a laptop computer) may also be... of intrusion detection devices, alarm assessment equipment, alarm system wiring, emergency power...

  20. 10 CFR 73.23 - Protection of Safeguards Information-Modified Handling: Specific requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... Information not classified as Restricted Data or National Security Information related to physical protection... stored in a locked file drawer or cabinet. (3) A mobile device (such as a laptop computer) may also be... of intrusion detection devices, alarm assessment equipment, alarm system wiring, emergency power...

  1. 10 CFR 73.23 - Protection of Safeguards Information-Modified Handling: Specific requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... Information not classified as Restricted Data or National Security Information related to physical protection... stored in a locked file drawer or cabinet. (3) A mobile device (such as a laptop computer) may also be... of intrusion detection devices, alarm assessment equipment, alarm system wiring, emergency power...

  2. 10 CFR 73.23 - Protection of Safeguards Information-Modified Handling: Specific requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... Information not classified as Restricted Data or National Security Information related to physical protection... stored in a locked file drawer or cabinet. (3) A mobile device (such as a laptop computer) may also be... of intrusion detection devices, alarm assessment equipment, alarm system wiring, emergency power...

  3. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  4. Rugged Walking Robot

    NASA Technical Reports Server (NTRS)

    Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.

    1990-01-01

    Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.

  5. Parallel algorithm for computation of second-order sequential best rotations

    NASA Astrophysics Data System (ADS)

    Redif, Soydan; Kasap, Server

    2013-12-01

    Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.

  6. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.

    2016-01-15

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl{sub 2} aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for the BVM and Yang tPFM models, respectively. The BVM estimation accuracy is not dependent on tissue type and proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties of tissue composition and independent of the choice of reference values. This reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require an iterative solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.
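
    The basis vector idea can be pictured with a toy two-material decomposition: measure attenuation at two energies, solve a 2x2 linear system for the basis weights, and form derived quantities (here electron density) as the same linear combination of basis values. The coefficients and material values in the sketch below are invented for illustration and are not the paper's calibration data:

    ```python
    import numpy as np

    # Hypothetical basis-material attenuation coefficients (1/cm) at two beam energies,
    # and their relative electron densities -- illustrative numbers only.
    MU_BASIS = np.array([[0.190, 0.290],     # [polystyrene, CaCl2 solution] at low energy
                         [0.160, 0.220]])    # [polystyrene, CaCl2 solution] at high energy
    RHO_E_BASIS = np.array([1.01, 1.18])     # electron densities relative to water

    def bvm_electron_density(mu_low, mu_high):
        """Solve mu = c1*mu1 + c2*mu2 at both energies for (c1, c2),
        then map the same weights onto electron density."""
        weights = np.linalg.solve(MU_BASIS, np.array([mu_low, mu_high]))
        return weights, float(weights @ RHO_E_BASIS)

    w, rho_e = bvm_electron_density(mu_low=0.210, mu_high=0.172)
    print(w, rho_e)   # -> weights approx. [0.8, 0.2], relative electron density approx. 1.044
    ```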

  7. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  8. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  9. A Big Data Approach to Analyzing Market Volatility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, public funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that the HPC resource and the techniques for data-intensive sciences can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed trading (VPIN). The test data used in this study contains five and a half year's worth of trading data for about 100 most liquid futures contracts, includes about 3 billion trades, and takes 140GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real-time – an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.
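
    VPIN itself is built from fixed-volume buckets of classified trades. A heavily simplified version of that calculation (taking trade direction as given and ignoring the bulk-volume classification used in practice) looks like the following sketch:

    ```python
    from collections import deque

    def simple_vpin(trades, bucket_volume, window=50):
        """Very simplified VPIN: fill fixed-volume buckets, record each bucket's
        |buy - sell| imbalance ratio, and average over a rolling window of buckets.

        trades -- iterable of (volume, side) pairs with side +1 for buys, -1 for sells.
        """
        buckets = deque(maxlen=window)
        buy = sell = filled = 0.0
        for volume, side in trades:
            remaining = volume
            while remaining > 0:
                take = min(remaining, bucket_volume - filled)   # split trades across buckets
                filled += take
                if side > 0:
                    buy += take
                else:
                    sell += take
                remaining -= take
                if filled >= bucket_volume:                     # bucket complete
                    buckets.append(abs(buy - sell) / bucket_volume)
                    buy = sell = filled = 0.0
        return sum(buckets) / len(buckets) if buckets else None

    trades = [(30, +1), (50, -1), (40, +1), (80, -1), (60, +1)] * 20
    print(simple_vpin(trades, bucket_volume=100, window=10))   # average bucket imbalance
    ```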

  10. A Battery-Aware Algorithm for Supporting Collaborative Applications

    NASA Astrophysics Data System (ADS)

    Rollins, Sami; Chang-Yit, Cheryl

    Battery-powered devices such as laptops, cell phones, and MP3 players are becoming ubiquitous. There are several significant ways in which the ubiquity of battery-powered technology impacts the field of collaborative computing. First, applications such as collaborative data gathering, become possible. Also, existing applications that depend on collaborating devices to maintain the system infrastructure must be reconsidered. Fundamentally, the problem lies in the fact that collaborative applications often require end-user computing devices to perform tasks that happen in the background and are not directly advantageous to the user. In this work, we seek to better understand how laptop users use the batteries attached to their devices and analyze a battery-aware alternative to Gnutella’s ultrapeer selection algorithm. Our algorithm provides insight into how system maintenance tasks can be allocated to battery-powered nodes. The most significant result of our study indicates that a large portion of laptop users can participate in system maintenance without sacrificing any of their battery. These results show great promise for existing collaborative applications as well as new applications, such as collaborative data gathering, that rely upon battery-powered devices.

  11. A new model predictive control algorithm by reducing the computing time of cost function minimization for NPC inverter in three-phase power grids.

    PubMed

    Taheri, Asghar; Zhalebaghi, Mohammad Hadi

    2017-11-01

    This paper presents a new control strategy based on finite-control-set model-predictive control (FCS-MPC) for Neutral-point-clamped (NPC) three-level converters. Advantages such as fast dynamic response, easy inclusion of constraints, and a simple control loop make FCS-MPC attractive as a switching strategy for converters; however, the large amount of computation it requires has hindered its widespread adoption. To resolve this problem, the paper presents a modified method that substantially reduces the computational load compared with the conventional FCS-MPC method while leaving control performance unaffected. The proposed method can be used to exchange power between the electrical grid and DC resources by providing active and reactive power compensation. Experiments on a three-level converter in power factor correction (PFC), inductive compensation, and capacitive compensation modes verify good and comparable performance. The results were simulated using MATLAB/SIMULINK software. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
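
    In its simplest single-step form, finite-control-set MPC just evaluates a cost function for every admissible switching state and applies the state with the lowest predicted error. The sketch below illustrates that enumeration for a generic RL load with placeholder parameters and a toy candidate set; it is not the NPC converter model or the reduced-computation method of the paper:

    ```python
    # Single-step finite-control-set MPC sketch for a generic RL load (illustrative only).
    R, L, TS = 0.5, 10e-3, 100e-6          # resistance (ohm), inductance (H), sample time (s)

    # A toy finite set of candidate voltages a converter leg could apply (V).
    CANDIDATE_VOLTAGES = [-400.0, -200.0, 0.0, 200.0, 400.0]

    def predict_current(i_now, v_applied):
        """Forward-Euler prediction of the load current one sample ahead."""
        return i_now + (TS / L) * (v_applied - R * i_now)

    def fcs_mpc_step(i_now, i_ref):
        """Pick the candidate voltage minimizing the squared current-tracking error."""
        best_v, best_cost = None, float("inf")
        for v in CANDIDATE_VOLTAGES:
            cost = (i_ref - predict_current(i_now, v)) ** 2
            if cost < best_cost:
                best_v, best_cost = v, cost
        return best_v

    print(fcs_mpc_step(i_now=5.0, i_ref=12.0))   # -> 400.0, the best-tracking candidate
    ```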

  12. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  13. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    PubMed Central

    Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used toward this end. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy, an appropriate host is identified and the incoming job is allocated to that host. From the calculated time complexity, the required clock frequency of the host is determined. The CPU frequency is then scaled up or down accordingly using the DVFS scheme, enabling up to 55% of total energy consumption to be saved. PMID:27239551
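
    The decision logic described above can be pictured as: estimate the work of the request from its complexity class, divide by the time budget to get a required clock rate, and snap to the nearest supported frequency step. The sketch below is a generic illustration of that idea with invented frequency steps and cycle constants, not the authors' implementation:

    ```python
    import math

    # Hypothetical frequency steps supported by a host CPU, in GHz.
    FREQ_STEPS_GHZ = [1.0, 1.4, 1.8, 2.2, 2.6, 3.0]

    def estimated_cycles(n, complexity, cycles_per_op=50):
        """Rough cycle estimate for a request of size n and a known complexity class."""
        ops = {"O(n)": n,
               "O(n log n)": n * math.log2(max(n, 2)),
               "O(n^2)": n * n}[complexity]
        return ops * cycles_per_op

    def choose_frequency(n, complexity, deadline_s):
        """Lowest supported frequency that still meets the deadline (else the maximum)."""
        required_hz = estimated_cycles(n, complexity) / deadline_s
        for f in FREQ_STEPS_GHZ:
            if f * 1e9 >= required_hz:
                return f
        return FREQ_STEPS_GHZ[-1]

    print(choose_frequency(n=2_000_000, complexity="O(n log n)", deadline_s=2.0))  # -> 1.4
    ```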

  14. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud.

    PubMed

    Florence, A Paulin; Shanthi, V; Simon, C B Sunil

    2016-01-01

    Cloud computing is a new technology that supports resource sharing on a "Pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme has also been used for this purpose. In this paper we devise a methodology that analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to the victimized host. From the estimated time complexity, the required clock frequency of the host is determined, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, enabling energy savings of up to 55% of total power consumption.

  15. A combined computational-experimental analyses of selected metabolic enzymes in Pseudomonas species.

    PubMed

    Perumal, Deepak; Lim, Chu Sing; Chow, Vincent T K; Sakharkar, Kishore R; Sakharkar, Meena K

    2008-09-10

    Comparative genomic analysis has revolutionized our ability to predict the metabolic subsystems that occur in newly sequenced genomes, and to explore the functional roles of the set of genes within each subsystem. These computational predictions can considerably reduce the volume of experimental studies required to assess basic metabolic properties of multiple bacterial species. However, experimental validations are still required to resolve the apparent inconsistencies in the predictions by multiple resources. Here, we present combined computational-experimental analyses on eight completely sequenced Pseudomonas species. Comparative pathway analyses reveal that several pathways within the Pseudomonas species show high plasticity and versatility. Potential bypasses in 11 metabolic pathways were identified. We further confirmed the presence of the enzyme O-acetyl homoserine (thiol) lyase (EC: 2.5.1.49) in P. syringae pv. tomato that revealed inconsistent annotations in KEGG and in the recently published SYSTOMONAS database. These analyses connect and integrate systematic data generation, computational data interpretation, and experimental validation and represent a synergistic and powerful means for conducting biological research.

  16. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  18. Cortical Power-Density Changes of Different Frequency Bands in Visually Guided Associative Learning: A Human EEG-Study

    PubMed Central

    Puszta, András; Katona, Xénia; Bodosi, Balázs; Pertich, Ákos; Nyujtó, Diána; Braunitzer, Gábor; Nagy, Attila

    2018-01-01

    The computer-based Rutgers Acquired Equivalence test (RAET) is a widely used paradigm to test the function of subcortical structures in visual associative learning. The test consists of an acquisition (pair learning) and a test (rule transfer) phase, associated with the function of the basal ganglia and the hippocampi, respectively. Obviously, such a complex task also requires cortical involvement. To investigate the activity of different cortical areas during this test, 64-channel EEG recordings were obtained from 24 healthy volunteers. Fast-Fourier and Morlet wavelet convolution analyses were performed on the recordings. The most robust power changes were observed in the theta (4–7 Hz) and gamma (>30 Hz) frequency bands, in which significant power elevation was observed in the vast majority of the subjects, over the parieto-occipital and temporo-parietal areas during the acquisition phase. The involvement of the frontal areas in the acquisition phase was remarkably weaker. No remarkable cortical power elevations were found in the test phase. In fact, the power of the alpha and beta bands was significantly decreased over the parieto-occipital areas. We conclude that the initial acquisition of the image pairs requires strong cortical involvement, but once the pairs have been learned, neither retrieval nor generalization requires strong cortical contribution. PMID:29867412
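
    As an illustration of the kind of band-power measure reported above, the sketch below computes theta-band power for one synthetic EEG channel with Welch's method. The sampling rate and signal are invented, and the study itself used FFT and Morlet wavelet convolution rather than this exact pipeline.

        # Illustrative band-power computation on synthetic EEG: estimate the
        # power spectral density with Welch's method and integrate it over the
        # theta (4-7 Hz) band.
        import numpy as np
        from scipy.signal import welch

        fs = 500.0                                        # assumed sampling rate, Hz
        t = np.arange(0, 10, 1 / fs)
        eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)   # 6 Hz + noise

        freqs, psd = welch(eeg, fs=fs, nperseg=1024)
        theta = (freqs >= 4) & (freqs <= 7)
        theta_power = np.trapz(psd[theta], freqs[theta])  # integrate PSD over the band
        print(theta_power)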

  19. Cortical Power-Density Changes of Different Frequency Bands in Visually Guided Associative Learning: A Human EEG-Study.

    PubMed

    Puszta, András; Katona, Xénia; Bodosi, Balázs; Pertich, Ákos; Nyujtó, Diána; Braunitzer, Gábor; Nagy, Attila

    2018-01-01

    The computer-based Rutgers Acquired Equivalence test (RAET) is a widely used paradigm to test the function of subcortical structures in visual associative learning. The test consists of an acquisition (pair learning) and a test (rule transfer) phase, associated with the function of the basal ganglia and the hippocampi, respectively. Obviously, such a complex task also requires cortical involvement. To investigate the activity of different cortical areas during this test, 64-channel EEG recordings were obtained from 24 healthy volunteers. Fast-Fourier and Morlet wavelet convolution analyses were performed on the recordings. The most robust power changes were observed in the theta (4-7 Hz) and gamma (>30 Hz) frequency bands, in which significant power elevation was observed in the vast majority of the subjects, over the parieto-occipital and temporo-parietal areas during the acquisition phase. The involvement of the frontal areas in the acquisition phase was remarkably weaker. No remarkable cortical power elevations were found in the test phase. In fact, the power of the alpha and beta bands was significantly decreased over the parieto-occipital areas. We conclude that the initial acquisition of the image pairs requires strong cortical involvement, but once the pairs have been learned, neither retrieval nor generalization requires strong cortical contribution.

  20. Theoretical comparison of maser materials for a 32-GHz maser amplifier

    NASA Technical Reports Server (NTRS)

    Lyons, James R.

    1988-01-01

    The computational results of a comparison of maser materials for a 32 GHz maser amplifier are presented. The search for a better maser material is prompted by the relatively large amount of pump power required to sustain a population inversion in ruby at frequencies on the order of 30 GHz and above. The general requirements of a maser material and the specific problems with ruby are outlined. The spin Hamiltonian is used to calculate energy levels and transition probabilities for ruby and twelve other materials. A table is compiled of several attractive operating points for each of the materials analyzed. All the materials analyzed possess operating points that could be superior to ruby. To complete the evaluation of the materials, measurements of inversion ratio and pump power requirements must be made in the future.

  1. A Wireless Biomedical Signal Interface System-on-Chip for Body Sensor Networks.

    PubMed

    Lei Wang; Guang-Zhong Yang; Jin Huang; Jinyong Zhang; Li Yu; Zedong Nie; Cumming, D R S

    2010-04-01

    Recent years have seen the rapid development of biosensor technology, system-on-chip design, wireless technology, and ubiquitous computing. When assembled into an autonomous body sensor network (BSN), the technologies become powerful tools in well-being monitoring, medical diagnostics, and personal connectivity. In this paper, we describe the first demonstration of a fully customized mixed-signal silicon chip that has most of the attributes required for use in a wearable or implantable BSN. Our intellectual-property blocks include a low-power analog sensor interface for temperature and pH, a data multiplexing and conversion module, a digital platform based around an 8-b microcontroller, data encoding for spread-spectrum wireless transmission, and an RF section requiring very few off-chip components. The chip has been fully evaluated and tested by connection to external sensors, and it satisfied typical system requirements.

  2. Creation of Power Reserves Under the Market Economy Conditions

    NASA Astrophysics Data System (ADS)

    Mahnitko, A.; Gerhards, J.; Lomane, T.; Ribakov, S.

    2008-09-01

    The main task of controlling an electric power system (EPS) is to ensure reliable power supply at the least cost while observing requirements on electric power quality and supply reliability as well as cost limitations on the energy resources. An available power reserve in an EPS is a necessary condition for keeping it in operation while maintaining normal operating variables (frequency, node voltages, power flows via the transmission lines, etc.). The authors examine possibilities for creating power reserves that could be offered for sale by the electric power producer. They consider a procedure of price formation for the power reserves and propose a relevant mathematical model for a united EPS, the initial data being the fuel-cost functions for individual systems, technological limitations on the active power generation, and consumers' load. The maximum profit for the producer is taken as the optimization criterion. The model is exemplified by a concentrated EPS. The computations have been performed using the MATLAB program.

  3. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
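
    The need for stabilized matrix multiplication can be illustrated generically (this is not the authors' canonical-ensemble AFMC algorithm): a long chain of matrix products is kept numerically tame by re-factorizing with QR at every step, so the running factor never overflows in finite precision and the scales are carried separately in the triangular part. The matrix sizes and values below are invented.

        # Generic illustration of a numerically stabilized matrix product chain.
        import numpy as np

        rng = np.random.default_rng(0)
        mats = [np.eye(4) + 0.5 * rng.standard_normal((4, 4)) for _ in range(200)]

        Q, R = np.linalg.qr(mats[0])
        for A in mats[1:]:
            Q, R_step = np.linalg.qr(A @ Q)    # re-orthogonalize the running product
            R = R_step @ R                     # scales accumulate in the triangular factor

        # log|det| of the full product, recovered without ever forming it explicitly
        log_abs_det = np.sum(np.log(np.abs(np.diag(R))))   # |det Q| = 1
        print(log_abs_det)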

  4. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will allow a reasonable parameterization of the detector response to be obtained in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  5. Probabilistic simple sticker systems

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2017-04-01

    A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper entitled DNA computing, sticker systems and universality, published in Acta Informatica, vol. 35, pp. 401-420, 1998. A sticker system uses the Watson-Crick complementarity of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, the probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings for the language are selected according to some probabilistic requirements. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.
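
    The probability bookkeeping described above can be mimicked in a toy sketch (the DNA sticking operation itself is not modeled): axioms carry assumed probabilities, the probability of a derived string is the product over the axioms used in its derivation, and a cut-point decides whether the string enters the language. All names and numbers here are invented for illustration.

        # Toy illustration of probability bookkeeping in a probabilistic
        # sticker-style system: multiply the probabilities of the axioms used
        # in a derivation and compare against a cut-point.
        from math import prod

        axiom_prob = {"ab": 0.5, "ba": 0.3, "aa": 0.2}     # assumed axioms and weights

        def string_probability(derivation):
            """Probability of a derivation given as the list of axioms it used."""
            return prod(axiom_prob[a] for a in derivation)

        def in_language(derivation, cut_point=0.05):
            return string_probability(derivation) >= cut_point

        print(string_probability(["ab", "ba", "ab"]), in_language(["ab", "ba", "ab"]))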

  6. PS3 CELL Development for Scientific Computation and Research

    NASA Astrophysics Data System (ADS)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when all 6 SPUs (synergistic processing units) are used with vectorized algorithms. We have used the Cell's computational power to create a program that takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fits well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow for 4 simultaneous floating point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
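
    The scaling argument above, that runtime drops roughly in proportion to the number of processing units when the work divides into independent chunks, can be illustrated with a portable process pool. This is generic Python, not Cell/SPU SDK code, and the workload is a toy stand-in.

        # Generic illustration of divisible work scaling with worker count
        # (6 SPUs on the PS3), using a portable process pool.
        from multiprocessing import Pool

        def heat_chunk(args):
            """Toy stand-in for one chunk of an explicit heat-equation update."""
            lo, hi = args
            return sum((i * 1e-6) ** 2 for i in range(lo, hi))

        if __name__ == "__main__":
            n, workers = 6_000_000, 6
            bounds = [(k * n // workers, (k + 1) * n // workers) for k in range(workers)]
            with Pool(workers) as pool:
                total = sum(pool.map(heat_chunk, bounds))   # chunks run concurrently
            print(total)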

  7. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
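
    A hedged sketch of the surrogate-assisted optimization pattern described above: a small neural network is fitted to samples of an expensive model, and a simple genetic algorithm then searches over the cheap surrogate. The toy objective stands in for CE-QUAL-W2, the operating and water quality constraints are omitted, and all parameter choices are assumptions.

        # Surrogate-assisted optimization sketch: train an ANN emulator of an
        # "expensive" model, then run a simple GA over the surrogate.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        def expensive_model(x):                      # stand-in for the high-fidelity model
            return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.1 * x.sum(axis=1)

        X = rng.uniform(0, 1, size=(400, 2))         # design points from prior runs
        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                 random_state=0).fit(X, expensive_model(X))

        pop = rng.uniform(0, 1, size=(60, 2))        # GA: maximize the surrogate prediction
        for _ in range(40):
            fitness = surrogate.predict(pop)
            parents = pop[np.argsort(fitness)[-30:]]                 # selection
            children = (parents[rng.integers(0, 30, 30)] +
                        parents[rng.integers(0, 30, 30)]) / 2        # crossover
            children += rng.normal(0, 0.05, children.shape)          # mutation
            pop = np.clip(np.vstack([parents, children]), 0, 1)

        best = pop[np.argmax(surrogate.predict(pop))]
        print(best, expensive_model(best[None, :]))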

  8. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  9. Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation

    PubMed Central

    Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie

    2016-01-01

    Cervical spinal cord injury can disrupt connections between the brain respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration as well as respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, which is a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA. Electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were simulated to test the real-time behavior of the FPGA system using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844

  10. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    X. Zhao, S. Ramakrishnan, J. Lawson, C.Neumeyer, R. Marsala, H. Schneider, Engineering Operations

    NSTX at Princeton Plasma Physics Laboratory (PPPL) requires a sophisticated plasma positioning control system for stable plasma operation. The TF and PF magnetic coils provide electromagnetic fields to position and shape the plasma vertically and horizontally, respectively. NSTX utilizes twenty-six coil power supplies to establish and initiate electromagnetic fields through the coil system for plasma control. A power protection and interlock system is used to detect power system faults and protect the TF and PF coils against excessive electromechanical forces, overheating, and overcurrent. Upon detecting any fault condition the power system is restricted: it is either prevented from initializing or suppressed to de-energize coil power during pulsing. Power fault status is immediately reported to the computer system. This paper describes the design and operation of NSTX's protection and interlocking system and possible future expansion.

  12. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
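
    A schematic rendering, in ordinary Python, of the control flow the claims describe: prioritized applications run at an initial power level and, once cumulative consumption crosses a budget threshold, a conservation action (here, simply a lower power cap) is applied. The numbers, data structures, and helper names are hypothetical, not from the patent.

        # Schematic budget-based power control loop (hypothetical values).
        INITIAL_POWER_W = 200          # assumed per-node power cap at start
        REDUCED_POWER_W = 140          # assumed cap after conservation kicks in
        BUDGET_WH = 5_000              # assumed energy budget for the job mix

        def run_budgeted(applications, nodes, step_hours=0.25):
            consumed_wh, cap = 0.0, INITIAL_POWER_W
            for app in sorted(applications, key=lambda a: a["priority"]):
                while not app["done"]:
                    consumed_wh += cap * len(nodes) * step_hours
                    app["remaining_h"] -= step_hours
                    app["done"] = app["remaining_h"] <= 0
                    if consumed_wh >= BUDGET_WH and cap != REDUCED_POWER_W:
                        cap = REDUCED_POWER_W   # conservation action: lower the cap
            return consumed_wh

        apps = [{"priority": 0, "remaining_h": 3.0, "done": False},
                {"priority": 1, "remaining_h": 2.0, "done": False}]
        print(run_budgeted(apps, nodes=range(8)))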

  13. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  14. Solid-state Isotopic Power Source for Computer Memory Chips

    NASA Technical Reports Server (NTRS)

    Brown, Paul M.

    1993-01-01

    Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10 year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radio isotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.

  15. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  16. Gas-injection-start and shutdown characteristics of a 2-kilowatt to 15-kilowatt Brayton power system

    NASA Technical Reports Server (NTRS)

    Cantoni, D. A.

    1972-01-01

    Two methods of starting the Brayton power system have been considered: (1) using the alternator as a motor to spin the Brayton rotating unit (BRU), and (2) spinning the BRU by forced gas injection. The first method requires the use of an auxiliary electrical power source. An alternating voltage is applied to the terminals of the alternator to drive it as an induction motor. Only gas-injection starts are discussed in this report. The gas-injection starting method requires high-pressure gas storage and valves to route the gas flow to provide correct BRU rotation. An analog computer simulation was used to size hardware and to determine safe start and shutdown procedures. The simulation was also used to define the range of conditions for successful startups. Experimental data were also obtained under various test conditions. These data verify the validity of the start and shutdown procedures.

  17. Flight experiment of thermal energy storage. [for spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Namkoong, David

    1989-01-01

    Thermal energy storage (TES) enables a solar dynamic system to deliver constant electric power through periods of sun and shade. Brayton and Stirling power systems under current considerations for missions in the near future require working fluid temperatures in the 1100 to 1300+ K range. TES materials that meet these requirements fall into the fluoride family of salts. Salts shrink as they solidify, a change reaching 30 percent for some salts. Hot spots can develop in the TES container or the container can become distorted if the melting salt cannot expand elsewhere. Analysis of the transient, two-phase phenomenon is being incorporated into a three-dimensional computer code. The objective of the flight program is to verify the predictions of the code, particularly of the void location and its effect on containment temperature. The four experimental packages comprising the program will be the first tests of melting and freezing conducted under microgravity.

  18. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE PAGES

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  19. Application of digital computer APU modeling techniques to control system design.

    NASA Technical Reports Server (NTRS)

    Bailey, D. A.; Burriss, W. L.

    1973-01-01

    Study of the required controls for a H2-O2 auxiliary power unit (APU) technology program for the Space Shuttle. A steady-state system digital computer program was prepared and used to optimize initial system design. Analytical models of each system component were included. The program was used to solve a nineteen-dimensional problem, and then time-dependent differential equations were added to the computer program to simulate transient APU system and control. Some system parameters were considered quasi-steady-state, and others were treated as differential variables. The dynamic control analysis proceeded from initial ideal control modeling (which considered one control function and assumed the others to be ideal), stepwise through the system (adding control functions), until all of the control functions and their interactions were considered. In this way, the adequacy of the final control design over the required wide range of APU operating conditions was established.

  20. Restricted Authentication and Encryption for Cyber-physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkpatrick, Michael S; Bertino, Elisa; Sheldon, Frederick T

    2009-01-01

    Cyber-physical systems (CPS) are characterized by the close linkage of computational resources and physical devices. These systems can be deployed in a number of critical infrastructure settings. As a result, the security requirements of CPS are different than traditional computing architectures. For example, critical functions must be identified and isolated from interference by other functions. Similarly, lightweight schemes may be required, as CPS can include devices with limited computing power. One approach that offers promise for CPS security is the use of lightweight, hardware-based authentication. Specifically, we consider the use of Physically Unclonable Functions (PUFs) to bind an access request to specific hardware with device-specific keys. PUFs are implemented in hardware, such as SRAM, and can be used to uniquely identify the device. This technology could be used in CPS to ensure location-based access control and encryption, both of which would be desirable for CPS implementations.

  1. A Bitslice Implementation of Anderson's Attack on A5/1

    NASA Astrophysics Data System (ADS)

    Bulavintsev, Vadim; Semenov, Alexander; Zaikin, Oleg; Kochemazov, Stepan

    2018-03-01

    The A5/1 keystream generator is a part of Global System for Mobile Communications (GSM) protocol, employed in cellular networks all over the world. Its cryptographic resistance was extensively analyzed in dozens of papers. However, almost all corresponding methods either employ a specific hardware or require an extensive preprocessing stage and significant amounts of memory. In the present study, a bitslice variant of Anderson's Attack on A5/1 is implemented. It requires very little computer memory and no preprocessing. Moreover, the attack can be made even more efficient by harnessing the computing power of modern Graphics Processing Units (GPUs). As a result, using commonly available GPUs this method can quite efficiently recover the secret key using only 64 bits of keystream. To test the performance of the implementation, a volunteer computing project was launched. 10 instances of A5/1 cryptanalysis have been successfully solved in this project in a single week.
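
    The bitslice idea can be shown in a few lines: 64 independent cipher instances are packed one per bit position, so a single bitwise expression evaluates all of them at once. The sketch below applies this to the A5/1 majority (clock-control) function only; it is not a full implementation of Anderson's attack.

        # Bitslice illustration: evaluate the A5/1 majority function for 64
        # independent instances in parallel, one instance per bit lane.
        MASK64 = (1 << 64) - 1

        def majority_bitsliced(a, b, c):
            """Majority of three bits, computed for 64 instances at once."""
            return ((a & b) | (a & c) | (b & c)) & MASK64

        # Lane i of each word holds the clocking bit of instance i.
        a, b, c = 0b1010, 0b1100, 0b0110
        print(bin(majority_bitsliced(a, b, c)))   # 0b1110: per-lane majority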

  2. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    PubMed Central

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2014-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research. PMID:26925205

  3. Reducing cooling energy consumption in data centres and critical facilities

    NASA Astrophysics Data System (ADS)

    Cross, Gareth

    Given the rise of our everyday reliance on computers in all walks of life, from checking the train times to paying our credit card bills online, the need for computational power is ever increasing. Beyond the ever-increasing performance of home personal computers (PCs), this reliance has given rise to a new phenomenon in the last 10 years: the data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of calculations every second. It is these data centres that allow us to continue with our reliance on the internet and the PC. As more and more data centres become necessary, owing to the increase in computing power required for the everyday activities we all take for granted, the energy consumed by these data centres rises. Not only are more and more data centres being constructed daily, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat outputs and therefore requires more cooling. Cooling data centres requires a sizeable energy input, indeed many megawatts per data centre site. Given the large amounts of money dependent on the successful operation of data centres, in particular those operated by financial institutions, the onus is predominantly on ensuring the data centres operate with no technical glitches rather than in an energy-conscious fashion. This report aims to investigate the ways and means of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation, technical calculations are undertaken where necessary to determine the level of energy saving, if any, from each proposal. To enable comparison between proposals, any design calculations within this report are undertaken against a notional data facility, nominally considered to require 1000 kW. Refer to Section 2.1 'Outline of Notional data Facility for Calculation Purposes' for details of the design conditions and constraints of the energy consumption calculations.
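
    A rough indication of the arithmetic such a comparison involves, using assumed coefficients of performance (COP) rather than figures from the report: the cooling plant's electrical draw is approximately the IT heat load divided by the COP, so raising the effective COP directly cuts the energy bill of the notional 1000 kW facility.

        # Back-of-the-envelope cooling energy comparison; COP values are assumed.
        IT_LOAD_KW = 1000.0
        HOURS_PER_YEAR = 8760

        for label, cop in (("baseline chillers", 3.0), ("with free cooling", 5.5)):
            cooling_kw = IT_LOAD_KW / cop
            print(f"{label}: {cooling_kw:.0f} kW, "
                  f"{cooling_kw * HOURS_PER_YEAR / 1e6:.2f} GWh/yr")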

  4. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    PubMed

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  5. How does the brain solve visual object recognition?

    PubMed Central

    Zoccolan, Davide; Rust, Nicole C.

    2012-01-01

    Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196

  6. Cloud computing for energy management in smart grid - an application survey

    NASA Astrophysics Data System (ADS)

    Naveen, P.; Kiing Ing, Wong; Kobina Danquah, Michael; Sidhu, Amandeep S.; Abu-Siada, Ahmed

    2016-03-01

    The smart grid is an emerging energy system in which information technology, tools, and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To address these requirements, we provide an in-depth survey of cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid.
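
    The economic power dispatch calculation that such a cloud service would host can be sketched as a simple merit-order problem (the cloud architecture itself is not modeled): demand is met at minimum cost by loading generators in order of marginal cost up to their capacities. The generator data below are invented.

        # Merit-order economic dispatch sketch with assumed generator data.
        generators = [                      # (name, marginal cost $/MWh, capacity MW)
            ("hydro", 10.0, 60.0),
            ("coal", 35.0, 120.0),
            ("gas", 55.0, 80.0),
        ]

        def merit_order_dispatch(demand_mw):
            schedule, remaining = {}, demand_mw
            for name, cost, cap in sorted(generators, key=lambda g: g[1]):
                take = min(cap, remaining)          # cheapest units are loaded first
                schedule[name] = take
                remaining -= take
                if remaining <= 0:
                    break
            if remaining > 0:
                raise ValueError("demand exceeds total capacity")
            return schedule

        print(merit_order_dispatch(demand_mw=150.0))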

  7. A new numerical method for calculating extrema of received power for polarimetric SAR

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.

    2009-01-01

    A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.

  8. A parallel-processing approach to computing for the geographic sciences

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.

  9. Computational aerodynamics requirements: The future role of the computer and the needs of the aerospace industry

    NASA Technical Reports Server (NTRS)

    Rubbert, P. E.

    1978-01-01

    The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools such as powerful computers optimized for fluid flow problems is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the results calculated. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.

  10. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of a longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method outperform Sobel's method, but the distribution of the product method is recommended for practice because it requires less computation time than the bootstrapping method. An R package has been developed for sample size determination with the distribution of the product method in longitudinal mediation study design.
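
    A simplified illustration of the simulate-and-count-rejections approach to power, using Sobel's test on a single-level mediation model rather than the paper's multilevel longitudinal setting; the effect sizes, sample size, and no-intercept shortcut are assumptions made for brevity.

        # Monte Carlo power estimate for a mediation effect via Sobel's test.
        import numpy as np

        def sobel_power(n, a=0.3, b=0.3, reps=2000, alpha=0.05, seed=0):
            rng, rejections = np.random.default_rng(seed), 0
            for _ in range(reps):
                x = rng.standard_normal(n)
                m = a * x + rng.standard_normal(n)
                y = b * m + rng.standard_normal(n)
                # no-intercept OLS (all variables are generated with mean zero)
                a_hat = x @ m / (x @ x)
                sa = np.sqrt(((m - a_hat * x) @ (m - a_hat * x) / (n - 1)) / (x @ x))
                b_hat = m @ y / (m @ m)
                sb = np.sqrt(((y - b_hat * m) @ (y - b_hat * m) / (n - 1)) / (m @ m))
                z = a_hat * b_hat / np.sqrt(b_hat**2 * sa**2 + a_hat**2 * sb**2)
                rejections += abs(z) > 1.96        # two-sided test at alpha = 0.05
            return rejections / reps

        print(sobel_power(n=100))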

  11. Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Whorton, M. S.

    2007-01-01

    Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the space vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance. However, they require an external power source that may add significant parasitic mass to the solar sail, whereas solar sails require low mass for optimal performance. Secondly, active control techniques typically require a good system model to ensure stability and performance. However, the accuracy of solar sail models validated on earth for a space environment is questionable. An alternative approach is passive vibration techniques. These do not require an external power supply, and do not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control, while avoiding their pitfalls. In semi-active control, an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, it has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.

  12. How to Use Color Displays Effectively: The Elements of Color Vision and Their Implications for Programmers.

    ERIC Educational Resources Information Center

    Durrett, John; Trezona, Judi

    1982-01-01

    Discusses physiological and psychological aspects of color. Includes guidelines for using color effectively, especially in the development of computer programs. Indicates that if applied with its limitations and requirements in mind, color can be a powerful manipulator of attention, memory, and understanding. (Author/JN)

  13. Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations

    ERIC Educational Resources Information Center

    Sung, Christopher Teh Boon

    2011-01-01

    Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…

  14. ALFIL: A Crowd Simulation Serious Game for Massive Evacuation Training and Awareness

    ERIC Educational Resources Information Center

    García-García, César; Fernández-Robles, José Luis; Larios-Rosillo, Victor; Luga, Hervé

    2012-01-01

    This article presents the current development of a serious game for the simulation of massive evacuations. The purpose of this project is to promote self-protection through awareness of the procedures and different possible scenarios during the evacuation of a massive event. Sophisticated behaviors require massive computational power and it has…

  15. Energy requirement for the production of silicon solar arrays

    NASA Technical Reports Server (NTRS)

    Lindmayer, J.; Wihl, M.; Scheinine, A.; Morrison, A.

    1977-01-01

    An assessment of potential changes and alternative technologies which could impact the photovoltaic manufacturing process is presented. Topics discussed include: a multiple wire saw, ribbon growth techniques, silicon casting, and a computer model for a large-scale solar power plant. Emphasis is placed on reducing the energy demands of the manufacturing process.

  16. 10 CFR 36.41 - Construction monitoring and acceptance testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... system will operate properly if offsite power is lost and shall verify that the computer has security... system to assure that the requirements in § 36.35 are met for protection of the source rack and the... protection. For panoramic irradiators, the licensee shall test the ability of the heat and smoke detectors to...

  17. 10 CFR 36.41 - Construction monitoring and acceptance testing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... system will operate properly if offsite power is lost and shall verify that the computer has security... system to assure that the requirements in § 36.35 are met for protection of the source rack and the... protection. For panoramic irradiators, the licensee shall test the ability of the heat and smoke detectors to...

  18. 10 CFR 36.41 - Construction monitoring and acceptance testing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... system will operate properly if offsite power is lost and shall verify that the computer has security... system to assure that the requirements in § 36.35 are met for protection of the source rack and the... protection. For panoramic irradiators, the licensee shall test the ability of the heat and smoke detectors to...

  19. 10 CFR 36.41 - Construction monitoring and acceptance testing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... system will operate properly if offsite power is lost and shall verify that the computer has security... system to assure that the requirements in § 36.35 are met for protection of the source rack and the... protection. For panoramic irradiators, the licensee shall test the ability of the heat and smoke detectors to...

  20. 10 CFR 36.41 - Construction monitoring and acceptance testing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... system will operate properly if offsite power is lost and shall verify that the computer has security... system to assure that the requirements in § 36.35 are met for protection of the source rack and the... protection. For panoramic irradiators, the licensee shall test the ability of the heat and smoke detectors to...

  1. Journal news

    USGS Publications Warehouse

    Conroy, M.J.; Samuel, M.D.; White, Joanne C.

    1995-01-01

    Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject H0 should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of H0 is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
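
    As a concrete illustration of the kind of calculation the editorial describes, the sketch below approximates the power of a two-sided, two-sample t-test from a standardized effect size (the biologically important difference divided by the standard deviation). It uses a normal approximation rather than the exact noncentral t distribution, and the effect size and sample sizes are assumed for illustration.

        from scipy.stats import norm

        def power_two_sample(d, n_per_group, alpha=0.05):
            """Approximate power of a two-sided two-sample t-test for a
            standardized effect size d (normal approximation)."""
            z_crit = norm.ppf(1 - alpha / 2)
            ncp = d * (n_per_group / 2) ** 0.5      # approximate noncentrality
            return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

        # A difference of half a standard deviation (d = 0.5), alpha = 0.05:
        for n in (20, 40, 64, 100):
            print(f"n = {n:3d} per group -> power ~ {power_two_sample(0.5, n):.2f}")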

  2. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher-fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx+ = 45, Δy+ = 2, and Δz+ = 17 (streamwise, wall-normal, and spanwise spacing in wall units). Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method such as synthetic eddy modeling (SEM) is necessary to correctly represent the inlet conditions.
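
    For readers unfamiliar with wall units, the grid-resolution targets quoted above convert to physical spacings via Δ = Δ+ ν / u_τ, where ν is the kinematic viscosity and u_τ the friction velocity. The short sketch below performs that conversion; the values of ν and u_τ are placeholders, not values taken from the report.

        def wall_spacing(delta_plus, u_tau, nu):
            """Physical grid spacing (m) from a target spacing in wall units."""
            return delta_plus * nu / u_tau

        nu = 1.5e-5     # m^2/s, kinematic viscosity of air (assumed)
        u_tau = 0.8     # m/s, friction velocity (assumed placeholder)
        for name, dplus in (("streamwise Dx+", 45), ("wall-normal Dy+", 2), ("spanwise Dz+", 17)):
            print(f"{name:16s} = {dplus:2d}  ->  {wall_spacing(dplus, u_tau, nu)*1e6:7.1f} micrometers")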

  3. Laser/lidar analysis and testing

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1994-01-01

    Section 1 of this report details development of a model of the output pulse frequency spectrum of a pulsed transversely excited (TE) CO2 laser. In order to limit the computation time required, the model was designed around a generic laser pulse shape model. The use of such a procedure allows many possible laser configurations to be examined. The output pulse shape is combined with the calculated frequency chirp to produce the electric field of the output pulse which is then computationally mixed with a local oscillator field to produce the heterodyne beat signal that would fall on a detector. The power spectral density of this heterodyne signal is then calculated. Section 2 reports on a visit to the LAWS laser contractors to measure the performance of the laser breadboards. The intention was to acquire data using a digital oscilloscope so that it could be analyzed. Section 3 reports on a model developed to assess the power requirements of a 5J LAWS instrument on a Spot MKII platform in a polar orbit. The performance was assessed for three different latitude dependent sampling strategies.
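
    The heterodyne step described above can be illustrated with a few lines of signal processing: generate a pulse with an assumed envelope and frequency chirp, mix it with a local oscillator, and take the power spectral density of the beat. The pulse shape, chirp rate, offset frequency, and sample rate below are placeholders, not the generic pulse model of the report.

        import numpy as np

        fs = 2.0e9                               # sample rate, Hz (assumed)
        t = np.arange(0, 4e-6, 1 / fs)
        env = np.exp(-((t - 1.0e-6) / 0.3e-6) ** 2)          # assumed Gaussian envelope
        f0, chirp = 30e6, 5e12                   # offset frequency (Hz) and chirp rate (Hz/s), assumed
        pulse = env * np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * chirp * t ** 2))

        lo = np.exp(1j * 2 * np.pi * 10e6 * t)   # local oscillator, 10 MHz below the offset
        beat = pulse * np.conj(lo)               # heterodyne beat signal on the detector
        psd = np.abs(np.fft.fftshift(np.fft.fft(beat))) ** 2
        freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
        print(f"peak of the beat PSD near {freqs[np.argmax(psd)]/1e6:.1f} MHz")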

  4. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
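
    The idea of distributing the distance computations of k-means across the cores of a single machine can be sketched in a few lines. The toy below uses Python's multiprocessing pool for the assignment step; it is not the authors' transactional-memory Java implementation, and the data and cluster count are arbitrary.

        import numpy as np
        from multiprocessing import Pool

        def assign_chunk(args):
            """Assign one chunk of points to the nearest of the current centers."""
            X_chunk, centers = args
            d = ((X_chunk[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return d.argmin(axis=1)

        def parallel_kmeans(X, k, n_iter=20, workers=4, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            with Pool(workers) as pool:
                for _ in range(n_iter):
                    chunks = np.array_split(X, workers)
                    labels = np.concatenate(
                        pool.map(assign_chunk, [(c, centers) for c in chunks]))
                    for j in range(k):                    # update step (serial)
                        if np.any(labels == j):
                            centers[j] = X[labels == j].mean(axis=0)
            return centers, labels

        if __name__ == "__main__":
            X = np.random.default_rng(1).normal(size=(10000, 20))
            centers, labels = parallel_kmeans(X, k=5)
            print(centers.shape, np.bincount(labels))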

  5. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.

  6. Computational examination of utility scale wind turbine wake interactions

    DOE PAGES

    Okosun, Tyamo; Zhou, Chenn Q.

    2015-07-14

    We performed numerical simulations of small, utility scale wind turbine groupings to determine how wakes generated by upstream turbines affect the performance of the small turbine group as a whole. Specifically, various wind turbine arrangements were simulated to better understand how turbine location influences small group wake interactions. The minimization of power losses due to wake interactions certainly plays a significant role in the optimization of wind farms. Since wind turbines extract kinetic energy from the wind, the air passing through a wind turbine decreases in velocity, and turbines downstream of the initial turbine experience flows of lower energy, resulting in reduced power output. Our study proposes two arrangements of turbines that could generate more power by exploiting the momentum of the wind to increase velocity at downstream turbines, while maintaining low wake interactions at the same time. Furthermore, simulations using Computational Fluid Dynamics are used to obtain results much more quickly than methods requiring wind tunnel models or a large scale experimental test.

  7. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  8. Dish layouts analysis method for concentrative solar power plant.

    PubMed

    Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin

    2016-01-01

    Designs that maximize the use of solar radiation for a given reflective area, without increasing the investment cost, are important to solar power plant construction. We provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] can shade the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation, and allows a quick search for the optimum layout by considering all aspects of the arrangement: aspect ratio, shifting, and rotation. Computer simulations using dish Stirling system data as well as DNI data released by NREL show that regular spacing is not an optimal layout; shifting and rotating columns by a certain amount can bring more benefits.
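
    The per-instant shading of one tracking dish by a neighbor reduces, after projection onto the plane normal to the sun vector, to the overlap area of two circles. The sketch below evaluates that standard lens-area formula; it is only the geometric kernel of such a calculation, not the paper's full layout search or its day-long integration, and the dish sizes are assumed.

        import numpy as np

        def shade_overlap_area(r1, r2, d):
            """Overlap area of two circles of radii r1, r2 whose centres,
            projected onto the plane normal to the sun vector, are d apart."""
            if d >= r1 + r2:
                return 0.0                       # no shading
            if d <= abs(r1 - r2):
                return np.pi * min(r1, r2) ** 2  # one circle fully inside the other
            a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
            a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
            tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                                * (d - r1 + r2) * (d + r1 + r2))
            return a1 + a2 - tri

        # e.g. two 10 m diameter dishes whose projected centres are 12 m and 6 m apart
        print(shade_overlap_area(5.0, 5.0, 12.0))   # 0.0, no shading
        print(shade_overlap_area(5.0, 5.0, 6.0))    # partial shading, in m^2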

  9. Active Flash: Performance-Energy Tradeoffs for Out-of-Core Processing on Non-Volatile Memory Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    In this abstract, we study the performance and energy tradeoffs involved in migrating data analysis into the flash device, a process we refer to as Active Flash. The Active Flash paradigm is similar to 'active disks', which have received considerable attention. Active Flash allows us to move processing closer to data, thereby minimizing data movement costs and reducing power consumption. It enables true out-of-core computation. The conventional definition of out-of-core solvers refers to an approach to process data that is too large to fit in the main memory and, consequently, requires access to disk. However, in Active Flash, processing outside the host CPU literally frees the core and achieves real 'out-of-core' analysis. Moving analysis to data has long been desirable, not just at this level, but at all levels of the system hierarchy. However, this requires a detailed study on the tradeoffs involved in achieving analysis turnaround under an acceptable energy envelope. To this end, we first need to evaluate if there is enough computing power on the flash device to warrant such an exploration. Flash processors require decent computing power to run the internal logic pertaining to the Flash Translation Layer (FTL), which is responsible for operations such as address translation, garbage collection (GC) and wear-leveling. Modern SSDs are composed of multiple packages and several flash chips within a package. The packages are connected using multiple I/O channels to offer high I/O bandwidth. SSD computing power is also expected to be high enough to exploit such inherent internal parallelism within the drive to increase the bandwidth and to handle fast I/O requests. More recently, SSD devices are being equipped with powerful processing units and are even embedded with multicore CPUs (e.g. the ARM Cortex-A9 embedded processor is advertised to reach 2 GHz frequency and deliver 5000 DMIPS; the OCZ RevoDrive X2 SSD has 4 SandForce controllers, each with a 780 MHz max frequency Tensilica core). Efforts that take advantage of the available computing cycles on the processors on SSDs to run auxiliary tasks other than actual I/O requests are beginning to emerge. Kim et al. investigate database scan operations in the context of processing on the SSDs, and propose dedicated hardware logic to speed up scans. Also, cluster architectures have been explored, which consist of low-power embedded CPUs coupled with small local flash to achieve fast, parallel access to data. Processor utilization on the SSD is highly dependent on workloads and, therefore, the processors can be idle during periods with no I/O accesses. We propose to use the available processing capability on the SSD to run tasks that can be offloaded from the host. This paper makes the following contributions: (1) We have investigated Active Flash and its potential to optimize the total energy cost, including power consumption on the host and the flash device; (2) We have developed analytical models to analyze the performance-energy tradeoffs for Active Flash, treating the SSD as a black box, which is particularly valuable given the proprietary nature of the SSD internal hardware; and (3) We have enhanced a well-known SSD simulator (from MSR) to implement 'on-the-fly' data compression using Active Flash. Our results provide a window into striking a balance between energy consumption and application performance.
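
    A back-of-the-envelope version of the kind of performance-energy tradeoff being modeled is sketched below: the analysis work runs either on the host CPU or on the SSD controller, and total energy is the active power plus the idle power of the other component, multiplied by the elapsed time. All throughput and power numbers are assumptions for illustration, not measurements or models from the paper.

        def time_and_energy(work_minst, rate_dmips, p_active_w, p_other_w):
            """Elapsed time (s) and total energy (J) when one processor does the
            analysis while the other component sits in a low-power state."""
            t = work_minst / rate_dmips          # DMIPS ~ million instructions per second
            return t, (p_active_w + p_other_w) * t

        work = 5.0e6                             # million instructions of analysis (assumed)
        t_host, e_host = time_and_energy(work, 50000.0, 95.0, 2.0)   # host CPU active, SSD idle
        t_ssd, e_ssd = time_and_energy(work, 5000.0, 3.0, 5.0)       # SSD controller active, host suspended
        print(f"host CPU    : {t_host:6.0f} s, {e_host/1e3:5.1f} kJ")
        print(f"Active Flash: {t_ssd:6.0f} s, {e_ssd/1e3:5.1f} kJ")   # slower, but less energy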

  10. Exascale computing and what it means for shock physics

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.

  11. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of VSPT. For power turbines low Reynolds numbers and a wide range of the incidence angles, positive and negative, due to the variation in the shaft speed at relatively fixed corrected flows, characterize this envelope. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Heat transfer computations were performed because it is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade for a range of incidence angles were computed in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  12. Efficient flapping flight of pterosaurs

    NASA Astrophysics Data System (ADS)

    Strang, Karl Axel

    In the late eighteenth century, humans discovered the first pterosaur fossil remains and have been fascinated by their existence ever since. Pterosaurs exploited their membrane wings in a sophisticated manner for flight control and propulsion, and were likely the most efficient and effective flyers ever to inhabit our planet. The flapping gait is a complex combination of motions that sustains and propels an animal in the air. Because pterosaurs were so large with wingspans up to eleven meters, if they could have sustained flapping flight, they would have had to achieve high propulsive efficiencies. Identifying the wing motions that contribute the most to propulsive efficiency is key to understanding pterosaur flight, and therefore to shedding light on flapping flight in general and the design of efficient ornithopters. This study is based on published results for a very well-preserved specimen of Coloborhynchus robustus, for which the joints are well-known and thoroughly described in the literature. Simplifying assumptions are made to estimate the characteristics that can not be inferred directly from the fossil remains. For a given animal, maximizing efficiency is equivalent to minimizing power at a given thrust and speed. We therefore aim at finding the flapping gait, that is the joint motions, that minimize the required flapping power. The power is computed from the aerodynamic forces created during a given wing motion. We develop an unsteady three-dimensional code based on the vortex-lattice method, which correlates well with published results for unsteady motions of rectangular wings. In the aerodynamic model, the rigid pterosaur wing is defined by the position of the bones. In the aeroelastic model, we add the flexibility of the bones and of the wing membrane. The nonlinear structural behavior of the membrane is reduced to a linear modal decomposition, assuming small deflections about the reference wing geometry. The reference wing geometry is computed for the membrane subject to glide loads and pretension from the wing joint positions. The flapping gait is optimized in a two-stage procedure. First the design space is explored using a binary genetic algorithm. The best design points are then used as starting points in a sequential quadratic programming optimization algorithm. This algorithm is used to refine the solutions by precisely satisfying the constraints. The refined solutions are found in generally less than twenty major iterations and constraints are violated generally by less than 0.1%. We find that the optimal motions are in agreement with previous results for simple wing motions. By adding joint motions, the required flapping power is reduced by 7% to 17%. Because of the large uncertainties for some estimates, we investigate the sensitivity of the optimized flapping gait. We find that the optimal motions are sensitive mainly to flight speed, body accelerations, and to the material properties of the wing membrane. The optimal flight speed found correlates well with other studies of pterosaur flapping flight, and is 31% to 37% faster than previous estimates based on glide performance. Accounting for the body accelerations yields an increase of 10% to 16% in required flapping power. When including the aeroelastic effects, the optimal flapping gait is only slightly modified to accommodate for the deflections of stiff membranes. For a flexible membrane, the motion is significantly modified and the power increased by up to 57%. 
Finally, the flapping gait and required power compare well with published results for similar wing motions. Some published estimates of required power assumed a propulsive efficiency of 100%, whereas the propulsive efficiency computed for Coloborhynchus robustus ranges between 54% and 87%.
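
    The two-stage optimization strategy described above (an evolutionary exploration of the design space followed by a sequential quadratic programming refinement that enforces the constraints) can be sketched generically with SciPy. The objective and constraint below are toy stand-ins for the flapping-power and thrust models of the study, and the first stage uses differential evolution rather than the binary genetic algorithm of the thesis.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def power(x):                       # toy stand-in for required flapping power
            return np.sum(x ** 2) + 0.3 * np.sin(5 * x).sum()

        def thrust_margin(x):               # toy thrust constraint, must stay >= 0
            return np.sum(np.cos(x)) - 1.5

        bounds = [(-1.0, 1.0)] * 4          # hypothetical joint-motion amplitudes

        # Stage 1: global exploration with a penalized objective (evolutionary search)
        rough = differential_evolution(
            lambda x: power(x) + 100.0 * max(0.0, -thrust_margin(x)),
            bounds, seed=0, maxiter=60)

        # Stage 2: gradient-based refinement (SLSQP) that satisfies the constraint precisely
        refined = minimize(power, rough.x, method="SLSQP", bounds=bounds,
                           constraints=[{"type": "ineq", "fun": thrust_margin}])
        print(refined.x, refined.fun, thrust_margin(refined.x))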

  13. Comparison of ISS Power System Telemetry with Analytically Derived Data for Shadowed Cases

    NASA Technical Reports Server (NTRS)

    Fincannon, H. James

    2002-01-01

    Accurate International Space Station (ISS) power prediction requires the quantification of solar array shadowing. Prior papers have discussed the NASA Glenn Research Center (GRC) ISS power system tool SPACE (System Power Analysis for Capability Evaluation) and its integrated shadowing algorithms. On-orbit telemetry has become available that permits the correlation of theoretical shadowing predictions with actual data. This paper documents the comparison of a shadowing metric (total solar array current) as derived from SPACE predictions and on-orbit flight telemetry data for representative significant shadowing cases. Images from flight video recordings and the SPACE computer program graphical output are used to illustrate the comparison. The accuracy of the SPACE shadowing capability is demonstrated for the cases examined.

  14. Control of a solar-energy-supplied electrical-power system without intermediate circuitry

    NASA Astrophysics Data System (ADS)

    Leistner, K.

    A computer control system is developed for electric-power systems comprising solar cells and small numbers of users with individual centrally controlled converters (and storage facilities when needed). Typical system structures are reviewed; the advantages of systems without an intermediate network are outlined; the demands on a control system in such a network (optimizing generator working point and power distribution) are defined; and a flexible modular prototype system is described in detail. A charging station for lead batteries used in electric automobiles is analyzed as an example. The power requirements of the control system (30 W for generator control and 50 W for communications and distribution control) are found to limit its use to larger networks.

  15. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
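
    The point that FLOPS alone is an incomplete benchmark can be illustrated with a short script that measures both a compute-bound rate (dense matrix multiplication) and a memory-bandwidth-bound rate (array copy). The problem sizes are arbitrary, and the script is only a crude stand-in for the project-level benchmarking described in the dissertation.

        import time
        import numpy as np

        def matmul_gflops(n=2048, repeats=3):
            """Crude GFLOPS estimate from a BLAS-backed dense matrix multiplication."""
            a, b = np.random.rand(n, n), np.random.rand(n, n)
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                a @ b
                best = min(best, time.perf_counter() - t0)
            return 2 * n**3 / best / 1e9     # ~2*n^3 flops per dense matmul

        def copy_gbps(n=20_000_000):
            """Crude memory-bandwidth estimate from copying an array of doubles."""
            x = np.random.rand(n)
            t0 = time.perf_counter()
            _ = x.copy()
            dt = time.perf_counter() - t0
            return 2 * x.nbytes / dt / 1e9   # read + write traffic

        print(f"{matmul_gflops():.1f} GFLOPS, {copy_gbps():.1f} GB/s")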

  16. Quality user support: Supporting quality users

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolley, T.C.

    1994-12-31

    During the past decade, fundamental changes have occurred in technical computing in the oil industry. Technical computing systems have moved from local, fragmented quantity, to global, integrated, quality. The compute power available to the average geoscientist at his desktop has grown exponentially. Technical computing applications have increased in integration and complexity. At the same time, there has been a significant change in the work force due to the pressures of restructuring, and the increased focus on international opportunities. The profile of the user of technical computing resources has changed. Users are generally more mature, knowledgeable, and team oriented than their predecessors. In the 1990s, computer literacy is a requirement. This paper describes the steps taken by Oryx Energy Company to address the problems and opportunities created by the explosive growth in computing power and needs, coupled with the contraction of the business. A successful user support strategy will be described. Characteristics of the program include: (1) Client driven support; (2) Empowerment of highly skilled professionals to fill the support role; (3) Routine and ongoing modification to the support plan; (4) Utilization of the support assignment to create highly trained advocates on the line; (5) Integration of the support role to the reservoir management team. Results of the plan include a highly trained work force, stakeholder teams that include support personnel, and global support from a centralized support organization.

  17. Harnessing Disordered-Ensemble Quantum Dynamics for Machine Learning

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Nakajima, Kohei

    2017-08-01

    The quantum computer has an amazing potential of fast information processing. However, the realization of a digital quantum computer is still a challenging problem requiring highly accurate controls and key application strategies. Here we propose a platform, quantum reservoir computing, to solve these issues successfully by exploiting the natural quantum dynamics of ensemble systems, which are ubiquitous in laboratories nowadays, for machine learning. This framework enables ensemble quantum systems to universally emulate nonlinear dynamical systems including classical chaos. A number of numerical experiments show that quantum systems consisting of 5-7 qubits possess computational capabilities comparable to conventional recurrent neural networks of 100-500 nodes. This discovery opens up a paradigm for information processing with artificial intelligence powered by quantum physics.

  18. Use of parallel computing for analyzing big data in EEG studies of ambiguous perception

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.

    2018-02-01

    The problem of interaction between human and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task that requires analysis of large amounts of neurophysiological EEG data. In the present paper we consider parallel computing as one of the most powerful tools for processing experimental data in real time, given the multichannel structure of EEG. In this context we demonstrate the application of parallel computing to the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library, we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
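
    A CPU-side analogue of the channel-parallel wavelet analysis is sketched below using Python's multiprocessing, with each worker computing the Morlet wavelet power of one EEG channel. It is not the authors' CUDA C implementation, and the sample rate, channel count, frequency band, and wavelet parameters are assumptions.

        import numpy as np
        from multiprocessing import Pool

        FS = 250.0                                   # EEG sample rate, Hz (assumed)

        def morlet_power(args):
            """Wavelet power of one EEG channel at a set of frequencies."""
            sig, freqs = args
            t = np.arange(-1.0, 1.0, 1.0 / FS)
            out = []
            for f in freqs:
                sigma = 7.0 / (2 * np.pi * f)        # ~7-cycle Morlet wavelet
                wav = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
                out.append(np.abs(np.convolve(sig, wav, mode="same")) ** 2)
            return np.array(out)                     # (n_freqs, n_samples)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            eeg = rng.normal(size=(32, 10 * int(FS)))     # 32 channels, 10 s of data
            freqs = np.arange(4, 31, 2.0)                 # theta through beta band
            with Pool() as pool:
                power = pool.map(morlet_power, [(ch, freqs) for ch in eeg])
            print(np.array(power).shape)                  # (32, n_freqs, n_samples)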

  19. Torque Transmission Device at Zero Leakage

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Mullen, R. L.

    2005-01-01

    In a few critical applications, mechanical transmission of power by rotation at low speed is required without leakage at an interface. Herein we examine a device that enables torque to be transmitted across a sealed environmental barrier. The barrier represents the restraint membrane through which the torque is transmitted. The power is transferred through elastic deformation of a circular tube into an elliptical cross-section. Rotation of the principle axis of the ellipse at one end results in a commensurate rotation of an elliptical cross section at the other end of the tube. This transfer requires no rigid body rotation of the tube allowing a membrane to seal one end from the other. Both computational and experimental models of the device are presented.

  20. FEM numerical model study of heating in magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Pearce, John A.; Cook, Jason R.; Hoopes, P. Jack; Giustini, Andrew

    2011-03-01

    Electromagnetic heating of nanoparticles is complicated by the extremely short thermal relaxation time constants and difficulty of coupling sufficient power into the particles to achieve desired temperatures. Magnetic field heating by the hysteresis loop mechanism at frequencies between about 100 and 300 kHz has proven to be an effective mechanism in magnetic nanoparticles. Experiments at 2.45 GHz show that Fe3O4 magnetite nanoparticle dispersions in the range of 10^12 to 10^13 NP/mL also heat substantially at this frequency. An FEM numerical model study was undertaken to estimate the order of magnitude of volume power density, Qgen (W m^-3), required to achieve significant heating in evenly dispersed and aggregated clusters of nanoparticles. The FEM models were computed using Comsol Multiphysics; consequently the models were confined to continuum formulations and did not include film nano-dimension heat transfer effects at the nanoparticle surface. As an example, the models indicate that for a single 36 nm diameter particle at an equivalent dispersion of 10^13 NP/mL located within one control volume (1.0 x 10^-19 m^3) of a capillary vessel, a power density in the neighborhood of 10^17 W m^-3 is required to achieve a steady state particle temperature of 52°C; the total power coupled to the particle is 2.44 μW. As a uniformly distributed particle cluster moves farther from the capillary the required power density decreases markedly. Finally, the tendency for particles in vivo to cluster together at separation distances much less than those of the uniform distribution further reduces the required power density.
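
    The order of magnitude quoted above can be cross-checked against the textbook conduction-only result for a uniformly heated sphere in an infinite medium, ΔT = Qgen·r²/(3k). The sketch below uses an assumed tissue-like thermal conductivity; it reproduces the ~10^17 W m^-3 scale of the FEM result, though not the detailed capillary geometry or convection of the paper.

        import numpy as np

        k_tissue = 0.5          # W m^-1 K^-1, thermal conductivity (assumed, water-like)
        r = 18e-9               # m, radius of a 36 nm diameter particle
        dT = 52.0 - 37.0        # K, target rise above body temperature

        # Conduction-only continuum estimate for a uniformly heated sphere in an
        # infinite medium: dT = Qgen * r^2 / (3 k)
        Qgen = 3 * k_tissue * dT / r**2            # W m^-3
        P = Qgen * (4.0 / 3.0) * np.pi * r**3      # W coupled into one particle
        print(f"Qgen ~ {Qgen:.2e} W/m^3, P ~ {P*1e6:.2f} microwatts")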

  1. 2007 Wholesale Power Rate Case Initial Proposal : Wholesale Power Rate Development Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    United States. Bonneville Power Administration.

    The Wholesale Power Rate Development Study (WPRDS) calculates BPA's proposed rates based on information either developed in the WPRDS or supplied by the other studies that comprise the BPA rate proposal. All of these studies, and accompanying documentation, provide the details of computations and assumptions. In general, information about loads and resources is provided by the Load Resource Study (LRS), WP-07-E-BPA-01, and the LRS Documentation, WP-07-E-BPA-01A. Revenue requirements information, as well as the Planned Net Revenues for Risk (PNNR), is provided in the Revenue Requirement Study, WP-07-E-BPA-02, and its accompanying Revenue Requirement Study Documentation, WP-07-E-BPA-02A and WP-07-E-BPA-02B. The Market Price Forecast Study (MPFS), WP-07-E-BPA-03, and the MPFS Documentation, WP-07-E-BPA-03A, provide the WPRDS with information regarding seasonal and diurnal differentiation of energy rates, as well as information regarding monthly market prices for Demand Rates. In addition, this study provides information for the pricing of unbundled power products. The Risk Analysis Study, WP-07-E-BPA-04, and the Risk Analysis Study Documentation, WP-07-E-BPA-04A, provide short-term balancing purchases as well as secondary energy sales and revenue. The Section 7(b)(2) Rate Test Study, WP-07-E-BPA-06, and the Section 7(b)(2) Rate Test Study Documentation, WP-07-E-BPA-06A, implement Section 7(b)(2) of the Northwest Power Act to ensure that BPA preference customers' firm power rates applied to their general requirements are no higher than rates calculated using specific assumptions in the Northwest Power Act.

  2. Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed

    NASA Technical Reports Server (NTRS)

    Mackin, Michael A.

    1995-01-01

    This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.

  3. The Space Technology 5 Avionics System

    NASA Technical Reports Server (NTRS)

    Speer, Dave; Jackson, George; Stewart, Karen; Hernandez-Pellerano, Amri

    2004-01-01

    The Space Technology 5 (ST5) mission is a NASA New Millennium Program project that will validate new technologies for future space science missions and demonstrate the feasibility of building, launching, and operating multiple, miniature spacecraft that can collect research-quality in-situ science measurements. The three satellites in the ST5 constellation will be launched into a sun-synchronous Earth orbit in early 2006. ST5 fits into the 25-kilogram and 24-watt class of very small but fully capable spacecraft. The new technologies and design concepts for a compact power and command and data handling (C&DH) avionics system are presented. The 2-card ST5 avionics design incorporates new technology components while being tightly constrained in mass, power and volume. In order to hold down the mass and volume, and qualify new technologies for future use in space, high-efficiency triple-junction solar cells and a lithium-ion battery were baselined into the power system design. The flight computer is co-located with the power system electronics in an integral spacecraft structural enclosure called the card cage assembly. The flight computer has a full set of uplink, downlink and solid-state recording capabilities, and it implements a new CMOS Ultra-Low Power Radiation Tolerant logic technology. There were a number of challenges imposed by the ST5 mission. Specifically, designing a micro-sat class spacecraft demanded that minimizing mass, volume and power dissipation would drive the overall design. The result is a very streamlined approach, while striving to maintain a high level of capability. The mission's radiation requirements, along with the low-voltage DC power distribution, limited the selection of analog parts that can operate within these constraints. The challenge of qualifying new technology components for the space environment within a short development schedule was another hurdle. The mission requirements also demanded magnetic cleanliness in order to reduce the effect of stray (spacecraft-generated) magnetic fields on the science-grade magnetometer.

  4. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as one of typical applications supported by IoT, denoted as a re-engineering and a modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.
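
    The network-calculus bounds mentioned above take a compact deterministic form when demand is modeled with a leaky-bucket arrival curve and supply with a rate-latency service curve. The sketch below evaluates the textbook worst-case delay and backlog bounds; the energy-flavored numbers are assumptions for illustration and are not taken from the paper's framework.

        def delay_and_backlog_bounds(r, b, R, T):
            """Deterministic network-calculus bounds for a leaky-bucket arrival
            curve alpha(t) = b + r*t served by a rate-latency service curve
            beta(t) = R*max(t - T, 0); finite bounds require R >= r."""
            assert R >= r, "service rate must cover the mean demand rate"
            delay = T + b / R          # maximum horizontal deviation
            backlog = b + r * T        # maximum vertical deviation
            return delay, backlog

        # Assumed numbers: demand burst b = 3 kWh, mean demand rate r = 1 kW,
        # supply (service) rate R = 2 kW with a T = 0.5 h dispatch latency.
        d, q = delay_and_backlog_bounds(r=1.0, b=3.0, R=2.0, T=0.5)
        print(f"worst-case service delay {d:.2f} h, worst-case backlog {q:.2f} kWh")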

  5. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as one of typical applications supported by IoT, denoted as a re-engineering and a modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid. PMID:29354654

  6. Novel systems and methods for quantum communication, quantum computation, and quantum simulation

    NASA Astrophysics Data System (ADS)

    Gorshkov, Alexey Vyacheslavovich

    Precise control over quantum systems can enable the realization of fascinating applications such as powerful computers, secure communication devices, and simulators that can elucidate the physics of complex condensed matter systems. However, the fragility of quantum effects makes it very difficult to harness the power of quantum mechanics. In this thesis, we present novel systems and tools for gaining fundamental insights into the complex quantum world and for bringing practical applications of quantum mechanics closer to reality. We first optimize and show equivalence between a wide range of techniques for storage of photons in atomic ensembles. We describe experiments demonstrating the potential of our optimization algorithms for quantum communication and computation applications. Next, we combine the technique of photon storage with strong atom-atom interactions to propose a robust protocol for implementing the two-qubit photonic phase gate, which is an important ingredient in many quantum computation and communication tasks. In contrast to photon storage, many quantum computation and simulation applications require individual addressing of closely-spaced atoms, ions, quantum dots, or solid state defects. To meet this requirement, we propose a method for coherent optical far-field manipulation of quantum systems with a resolution that is not limited by the wavelength of radiation. While alkali atoms are currently the system of choice for photon storage and many other applications, we develop new methods for quantum information processing and quantum simulation with ultracold alkaline-earth atoms in optical lattices. We show how multiple qubits can be encoded in individual alkaline-earth atoms and harnessed for quantum computing and precision measurements applications. We also demonstrate that alkaline-earth atoms can be used to simulate highly symmetric systems exhibiting spin-orbital interactions and capable of providing valuable insights into strongly correlated physics of transition metal oxides, heavy fermion materials, and spin liquid phases. While ultracold atoms typically exhibit only short-range interactions, numerous exotic phenomena and practical applications require long-range interactions, which can be achieved with ultracold polar molecules. We demonstrate the possibility to engineer a repulsive interaction between polar molecules, which allows for the suppression of inelastic collisions, efficient evaporative cooling, and the creation of novel phases of polar molecules.

  7. Computer Assisted Instruction in Basic.

    DTIC Science & Technology

    1983-09-28

    to’ I976 PRINT’ the power of .. 5 squared is 25.’ :980 PRINT’ review part 1, PRINT, and part 2. FUNCTIONS’ 1q9o GTO 2020 2000 PRINT*CORRECT’ 201 Q(4...data.’ 11:0 PRINT’As we continue through our lessons, we will discover some very’ 1140 PRINT powerful uses for ARRAYs.’ ’!50 PRINT ’:60 INPUT’press... kowledge about comoutErs. BASIC. which stands for Beainner’s All-ourpose Symbolic irstrcton role, s a lanquage that requires only a .sdrate nderstandira of

  8. Using Computing and Data Grids for Large-Scale Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2001-01-01

    We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.

  9. Sustainable Cooperative Robotic Technologies for Human and Robotic Outpost Infrastructure Construction and Maintenance

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric

    2004-01-01

    Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes use of the resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motions and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.

  10. Programming distributed medical applications with XWCH2.

    PubMed

    Ben Belgacem, Mohamed; Niinimaki, Marko; Abdennadher, Nabil

    2010-01-01

    Many medical applications utilise distributed/parallel computing in order to cope with large data or computing power requirements. In this paper, we present a new version of the XtremWeb-CH (XWCH) platform and demonstrate two medical applications that run on XWCH. The platform is versatile in that it supports direct communication between tasks. When tasks cannot communicate directly, warehouses are used as intermediary nodes between "producer" and "consumer" tasks. New features have been developed to provide improved support for writing powerful distributed applications using an easy-to-use API.

  11. Simple geometric algorithms to aid in clearance management for robotic mechanisms

    NASA Technical Reports Server (NTRS)

    Copeland, E. L.; Ray, L. D.; Peticolas, J. D.

    1981-01-01

    Global geometric shapes (lines, planes, circles, spheres, and cylinders) and the associated computational algorithms were selected to provide relatively inexpensive estimates of minimum spatial clearance for safe operations. The Space Shuttle, its remote manipulator system, and the Power Extension Package are used as examples. Robotic mechanisms operate in close quarters limited by external structures, so the problem of clearance is often of considerable interest. Safe clearance management is simple and suited to real-time calculation, whereas contact prediction requires more precision, sophistication, and computational overhead.
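
    The flavor of such inexpensive clearance estimates is illustrated below with one of the simplest cases: the clearance between a spherical envelope and a cylindrical link represented as a line segment with a radius. This is a generic sketch of the class of algorithms described, not the specific routines of the paper, and the geometry values are placeholders.

        import numpy as np

        def point_segment_distance(p, a, b):
            """Minimum distance from point p to the segment a-b (all 3-vectors)."""
            ab, ap = b - a, p - a
            t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
            return np.linalg.norm(p - (a + t * ab))

        def sphere_cylinder_clearance(center, r_sphere, a, b, r_cyl):
            """Clearance between a sphere and a finite cylinder (a link) modelled
            as a segment with a radius; negative values indicate interference."""
            return point_segment_distance(center, a, b) - r_sphere - r_cyl

        # Example: a payload envelope (sphere) versus one manipulator boom segment
        a, b = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 15.0])
        print(sphere_cylinder_clearance(np.array([2.0, 0.5, 7.0]), 1.0, a, b, 0.2))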

  12. Large-Scale Calculations for Material Sciences Using Accelerators to Improve Time- and Energy-to-Solution

    DOE PAGES

    Eisenbach, Markus

    2017-01-01

    A major impediment to deploying next-generation high-performance computational systems is the required electrical power, often measured in units of megawatts. The solution to this problem is driving the introduction of novel machine architectures, such as those employing many-core processors and specialized accelerators. In this article, we describe the use of a hybrid accelerated architecture to achieve both reduced time to solution and the associated reduction in the electrical cost for a state-of-the-art materials science computation.

  13. Using artificial intelligence to control fluid flow computations

    NASA Technical Reports Server (NTRS)

    Gelsey, Andrew

    1992-01-01

    Computational simulation is an essential tool for the prediction of fluid flow. Many powerful simulation programs exist today. However, using these programs to reliably analyze fluid flow and other physical situations requires considerable human effort and expertise to set up a simulation, determine whether the output makes sense, and repeatedly run the simulation with different inputs until a satisfactory result is achieved. Automating this process is not only of considerable practical importance but will also significantly advance basic artificial intelligence (AI) research in reasoning about the physical world.

  14. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology that provides infrastructure, platform, and software services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so resources must be made available on demand to satisfy user requirements. Virtualization is the technique by which resources, namely storage, processing power, memory, and network or I/O, are abstracted. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, which interacts with the VM hypervisor directly. In paravirtualization, the OS must be modified to run in parallel with other OSes, and for the guest OS to access the hardware, the host OS must provide a virtual machine interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance, and improved security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.

  15. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle Transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
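
    The core idea, updating a bank of N Fourier coefficients with O(N) work as each unevenly spaced sample arrives, can be sketched as follows. This is a simplified illustration of the concept rather than the published RFT algorithm, which also handles windowing and normalization for PSD estimation; the frequency grid and test signal are assumptions.

        import numpy as np

        class RecursiveFourier:
            """Incrementally accumulate Fourier coefficients of a nonuniformly
            sampled signal; each new sample (t_k, x_k) costs O(N) for N frequencies."""
            def __init__(self, freqs_hz):
                self.f = np.asarray(freqs_hz)
                self.coef = np.zeros(self.f.size, dtype=complex)
                self.n = 0

            def update(self, t, x):
                self.coef += x * np.exp(-2j * np.pi * self.f * t)
                self.n += 1

            def psd(self):
                return np.abs(self.coef / max(self.n, 1)) ** 2

        # Irregular R-R-like sample times with a 0.1 Hz oscillation riding on them
        rng = np.random.default_rng(0)
        t = np.cumsum(0.8 + 0.2 * rng.random(600))
        x = np.sin(2 * np.pi * 0.1 * t)
        rft = RecursiveFourier(np.linspace(0.01, 0.5, 100))
        for tk, xk in zip(t, x):
            rft.update(tk, xk)
        print(rft.f[np.argmax(rft.psd())])          # peak near 0.1 Hz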

  16. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    Transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, which requires intensity measurements only at axially closely spaced planes without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop a requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light field moment imaging, simultaneously. It makes the various observations for biomedical samples easy. Then we give the experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens-based TIE system, combined with the appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
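
    A minimal FFT-based TIE phase retrieval, assuming a nearly uniform in-focus intensity I0 so that k·dI/dz = -I0·∇²φ, is sketched below. It illustrates the principle only, not the tunable-lens system or post-processing pipeline of the paper; the wavelength, pixel size, defocus distance, and synthetic test images are placeholders.

        import numpy as np

        def tie_phase(I_minus, I_plus, dz, wavelength, pixel, eps=1e-9):
            """Single-step TIE phase retrieval under a uniform-intensity assumption,
            using an FFT-based, Tikhonov-regularized inverse Laplacian."""
            k = 2 * np.pi / wavelength
            dIdz = (I_plus - I_minus) / (2 * dz)
            I0 = 0.5 * (I_plus + I_minus).mean()
            ny, nx = dIdz.shape
            fx = np.fft.fftfreq(nx, d=pixel)
            fy = np.fft.fftfreq(ny, d=pixel)
            lap = -4 * np.pi**2 * (fx[None, :]**2 + fy[:, None]**2)  # Fourier symbol of the Laplacian
            phi_hat = np.fft.fft2(-(k / I0) * dIdz) / (lap - eps)
            return np.real(np.fft.ifft2(phi_hat))

        # Synthetic over/under-focus intensities with a weak sinusoidal contrast
        yy, xx = np.mgrid[0:256, 0:256]
        bump = 0.02 * np.cos(2 * np.pi * xx / 64)
        phase = tie_phase(1.0 - bump, 1.0 + bump, dz=1e-6,
                          wavelength=532e-9, pixel=0.2e-6)
        print(phase.min(), phase.max())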

  17. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences, experimental designs have naturally nested structures and multilevel models are needed to compute the…
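
    For a balanced two-level cluster-randomized design (treatment assigned at the cluster level), the computation such tables encapsulate can be sketched as below with textbook formulas; the effect size delta, number of clusters J, cluster size n, and intraclass correlation rho are illustrative inputs, not values from the article.

        import numpy as np
        from scipy import stats

        def power_two_level(delta, J, n, rho, alpha=0.05):
            # noncentrality for a balanced cluster-randomized comparison of two arms
            lam = delta * np.sqrt(J * n / (4.0 * (1.0 + (n - 1) * rho)))
            df = J - 2
            t_crit = stats.t.ppf(1 - alpha / 2, df)
            # two-sided power under the noncentral t distribution
            return 1 - stats.nct.cdf(t_crit, df, lam) + stats.nct.cdf(-t_crit, df, lam)

        print(round(power_two_level(delta=0.3, J=40, n=20, rho=0.10), 3))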

  18. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
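
    A minimal sketch of the common ingredient of these models, a network of second-order phase oscillators with forcing and damping (swing-equation form), is given below; the coupling matrix, injections, and parameters are toy values chosen here for illustration.

        import numpy as np

        def simulate_swing(K, P, D=0.5, H=1.0, dt=1e-3, steps=20000):
            # H * theta_i'' = P_i - D * theta_i' + sum_j K_ij * sin(theta_j - theta_i)
            n = len(P)
            theta = np.zeros(n)
            omega = np.zeros(n)
            for _ in range(steps):
                coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
                omega += dt * (P - D * omega + coupling) / H
                theta += dt * omega
            return theta, omega

        # toy 3-bus case: one generator (P > 0), two loads (P < 0), all-to-all coupling
        K = 2.0 * (np.ones((3, 3)) - np.eye(3))
        P = np.array([1.0, -0.5, -0.5])          # injections sum to zero
        theta, omega = simulate_swing(K, P)
        print(np.round(omega, 4))                # near-zero frequency deviations => synchronized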

  19. Application of Nearly Linear Solvers to Electric Power System Computation

    NASA Astrophysics Data System (ADS)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods that run in nearly linear time. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
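
    The comparison described can be illustrated in miniature as below: a symmetric, diagonally dominant test system is solved both by sparse LU factorization and by preconditioned conjugate gradients. A simple Jacobi preconditioner stands in for the paper's low-stretch spanning-tree chain, so this is a sketch of the experimental setup, not of the chain method itself.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # symmetric, diagonally dominant stand-in for a power-system matrix
        n = 1000
        A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.random.default_rng(1).standard_normal(n)

        # direct solve: LU factorization, the standard choice in power-flow codes
        x_lu = spla.splu(A).solve(b)

        # iterative solve: conjugate gradients with a Jacobi (diagonal) preconditioner
        M = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())
        x_cg, info = spla.cg(A, b, M=M)

        print(info, np.linalg.norm(x_lu - x_cg) / np.linalg.norm(x_lu))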

  20. Dark matter statistics for large galaxy catalogs: power spectra and covariance matrices

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco

    2018-06-01

    Large-scale surveys of galaxies require accurate theoretical predictions of the dark matter clustering for thousands of mock galaxy catalogs. We demonstrate that this goal can be achieved with the new Parallel Particle-Mesh (PM) N-body code GLAM at a very low computational cost. We run ~22,000 simulations with ~2 billion particles that provide ~1% accuracy of the dark matter power spectra P(k) for wave-numbers up to k ~ 1 h Mpc^-1. Using this large data set we study the power spectrum covariance matrix. In contrast to many previous analytical and numerical results, we find that the covariance matrix normalised to the power spectrum C(k, k')/P(k)P(k') has a complex structure of non-diagonal components: an upturn at small k, followed by a minimum at k ≈ 0.1 - 0.2 h Mpc^-1, and a maximum at k ≈ 0.5 - 0.6 h Mpc^-1. The normalised covariance matrix strongly evolves with redshift: C(k, k') ∝ δ^α(t) P(k)P(k'), where δ is the linear growth factor and α ≈ 1 - 1.25, which indicates that the covariance matrix depends on cosmological parameters. We also show that waves longer than 1 h^-1 Gpc have very little impact on the power spectrum and covariance matrix. This significantly reduces the computational costs and complexity of theoretical predictions: relatively small volume ~(1 h^-1 Gpc)^3 simulations capture the necessary properties of dark matter clustering statistics. As our results also indicate, achieving ~1% errors in the covariance matrix for k < 0.50 h Mpc^-1 requires a resolution better than ε ~ 0.5 h^-1 Mpc.
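
    The covariance estimate itself is straightforward once many realisations of P(k) are in hand; a hedged sketch of that bookkeeping (with synthetic mock spectra standing in for the GLAM outputs) is shown below.

        import numpy as np

        def pk_covariance(pk_samples):
            # pk_samples: shape (n_realisations, n_kbins)
            pbar = pk_samples.mean(axis=0)
            cov = np.cov(pk_samples, rowvar=False)          # C(k, k')
            norm_cov = cov / np.outer(pbar, pbar)           # C(k, k') / (P(k) P(k'))
            return pbar, cov, norm_cov

        # synthetic stand-in: 1000 mock spectra with 5% scatter about a smooth P(k)
        rng = np.random.default_rng(2)
        true_pk = 1.0 / np.linspace(0.05, 1.0, 20)
        samples = true_pk * (1 + 0.05 * rng.standard_normal((1000, 20)))
        pbar, cov, norm_cov = pk_covariance(samples)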

  1. Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM

    NASA Astrophysics Data System (ADS)

    Liang, Zijun; Lin, Shunjiang; Liu, Mingbo

    2017-05-01

    Distributed optimal power flow (OPF) is of great importance and poses a significant challenge for AC/DC interconnected power grids with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of AC/DC interconnected power grids, called synchronous ADMM, is proposed; it requires no central controller. The algorithm is based on the alternating direction method of multipliers (ADMM): the average of the boundary variables of adjacent regions obtained in the current iteration is used as the reference value for both regions in the next iteration, which enables parallel computation among different regions. The algorithm is tested on the IEEE 11-bus AC/DC interconnected power grid, and a comparison with a centralized algorithm shows nearly no difference in the results, validating its correctness and effectiveness.
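
    The boundary-averaging idea can be sketched with a toy consensus ADMM on two regions whose local costs are simple quadratics standing in for each region's OPF subproblem; the variable names and costs below are illustrative assumptions, not the paper's formulation.

        import numpy as np

        # local costs f_i(x) = a_i * (x - c_i)^2 over a shared boundary quantity x
        a = np.array([1.0, 3.0])
        c = np.array([0.8, 0.2])
        rho = 1.0

        x = np.zeros(2)      # each region's local copy of the boundary variable
        u = np.zeros(2)      # scaled dual variables
        z = 0.0              # agreed (averaged) boundary value

        for _ in range(50):
            # each region solves its local subproblem in parallel (closed form here)
            x = (2 * a * c + rho * (z - u)) / (2 * a + rho)
            # regions exchange boundary values; the average is next iteration's reference
            z = np.mean(x + u)
            # dual update drives the local copies into agreement
            u = u + x - z

        print(round(z, 4))   # converges to the joint optimum (a1*c1 + a2*c2)/(a1 + a2) = 0.35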

  2. A Suboptimal Power-Saving Transmission Scheme in Multiple Component Carrier Networks

    NASA Astrophysics Data System (ADS)

    Chung, Yao-Liang; Tsai, Zsehong

    Power consumption due to transmissions in base stations (BSs) has been a major contributor to communication-related CO2 emissions. A power optimization model is developed in this study with respect to radio resource allocation and activation in a multiple Component Carrier (CC) environment. We formulate and solve the power-minimization problem of the BS transceivers for multiple-CC networks with carrier aggregation, while maintaining the overall system and respective users' utilities above minimum levels. The optimized power consumption based on this model can be viewed as a lower bound on that of other algorithms employed in practice. A suboptimal scheme with low computational complexity is proposed. Numerical results show that the power consumption of our scheme is much lower than that of the conventional scheme in which all CCs are always active, when both schemes maintain the same required utilities.

  3. Possible Improvements to MCNP6 and its CEM/LAQGSM Event-Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich

    2015-08-04

    This report is intended for the developers and sponsors of MCNP6. It presents a set of suggested possible future improvements to MCNP6 and to its CEM03.03 and LAQGSM03.03 event-generators. A few suggested modifications of MCNP6 are quite simple, aimed at avoiding possible problems with running MCNP6 on various computers; these changes are not expected to change or improve any results, but should make the use of MCNP6 easier, and they are expected to require limited man-power resources. On the other hand, several other suggested improvements require serious further development of nuclear reaction models and are expected to significantly improve the predictive power of MCNP6 for a number of nuclear reactions; however, such developments require several years of work by real experts on nuclear reactions.

  4. Quantum computing on encrypted data

    NASA Astrophysics Data System (ADS)

    Fisher, K. A. G.; Broadbent, A.; Shalm, L. K.; Yan, Z.; Lavoie, J.; Prevedel, R.; Jennewein, T.; Resch, K. J.

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.
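
    A small numerical illustration of the encryption primitive underlying such protocols, the quantum one-time pad, is given below: conjugating a qubit by X^a Z^b with uniformly random key bits leaves an outside party with the maximally mixed state, while the key holder inverts it exactly. This is a generic sketch, not the photonic implementation reported in the paper.

        import numpy as np

        I2 = np.eye(2)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def pad(a, b):
            # quantum one-time-pad operator X^a Z^b for key bits a, b
            return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)

        psi = np.array([0.6, 0.8j])                       # arbitrary single-qubit state
        rho = np.outer(psi, psi.conj())

        # averaged over all keys, the encrypted state is I/2: the server learns nothing
        avg = sum(pad(a, b) @ rho @ pad(a, b).conj().T for a in (0, 1) for b in (0, 1)) / 4
        print(np.allclose(avg, I2 / 2))                   # True

        # the client, who knows (a, b), decrypts exactly
        a, b = 1, 1
        enc = pad(a, b) @ rho @ pad(a, b).conj().T
        print(np.allclose(pad(a, b).conj().T @ enc @ pad(a, b), rho))   # True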

  5. Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.

    PubMed

    de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf

    2014-01-11

    With computational resources becoming more efficient and more powerful and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be routinely done but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able to either reproduce experimental findings, e.g., spectroscopic parameters and rate constants, accurately or make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.

  6. Quantum computing on encrypted data.

    PubMed

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.

  7. Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives

    NASA Astrophysics Data System (ADS)

    Sengupta, Abhronil; Roy, Kaushik

    2018-03-01

    “Spintronics” refers to the understanding of the physics of electron spin-related phenomena. While most of the significant advancements in this field have been driven primarily by memory applications, recent research has demonstrated that various facets of the underlying physics of spin transport and manipulation can directly mimic the functionalities of the computational primitives in neuromorphic computation, i.e., the neurons and synapses. Given the potential of these spintronic devices to implement bio-mimetic computations at very low terminal voltages, several spin-device structures have been proposed as the core building blocks of neuromorphic circuits and systems to implement brain-inspired computing. Such an approach is expected to play a key role in circumventing the problems of ever-increasing power dissipation and hardware requirements for implementing neuro-inspired algorithms in conventional digital CMOS technology. Perspectives on spin-enabled neuromorphic computing, its status, challenges, and future prospects are outlined in this review article.

  8. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  9. Rankine engine solar power generation. I - Performance and economic analysis

    NASA Technical Reports Server (NTRS)

    Gossler, A. A.; Orrock, J. E.

    1981-01-01

    Results of a computer simulation of the performance of a solar flat plate collector powered electrical generation system are presented. The simulation was configured to include locations in New Mexico, North Dakota, Tennessee, and Massachusetts, and considered a water-based heat-transfer fluid collector system with storage. The collectors also powered a Rankine-cycle boiler filled with a low temperature working fluid. The generator was considered to be run only when excess solar heat and full storage would otherwise require heat purging through the collectors. All power was directed into the utility grid. The solar powered generator unit addition was found to be dependent on site location and collector area, and reduced the effective solar cost with collector areas greater than 400-670 sq m. The sites were economically ranked, best to worst: New Mexico, North Dakota, Massachusetts, and Tennessee.

  10. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Imamura, M. S.; Moser, R. L.; Veatch, M.

    1983-01-01

    Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined, and automation functions are partitioned among the power subsystem, the central spacecraft computer, and ground flight-support personnel. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, the need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and the potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential in meeting the 1987 technology readiness date for the space station.

  11. Cloud-based crowd sensing: a framework for location-based crowd analyzer and advisor

    NASA Astrophysics Data System (ADS)

    Aishwarya, K. C.; Nambi, A.; Hudson, S.; Nadesh, R. K.

    2017-11-01

    Cloud computing is an emerging field of computer science that integrates large, powerful computing and storage systems for personal as well as enterprise requirements. Mobile cloud computing extends this concept to mobile handheld devices. Crowdsensing, or more precisely mobile crowdsensing, is the process by which a group of mobile handheld devices shares resources such as data, memory, and bandwidth to perform a single task for a collective purpose. In this paper, we propose a framework that uses crowdsensing to analyze the crowd at a location and advise the user on whether to go there. This is ongoing research in a direction toward which cloud computing has been shifting, and it is open to further expansion in the near future.

  12. Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Takaki, Ryoji; Akita, Takeshi; Shima, Eiji

    A thermal mathematical model plays an important role in on-orbit operations as well as in spacecraft thermal design. The thermal mathematical model has uncertain thermal characteristic parameters, such as thermal contact resistances between components and effective emittances of multilayer insulation (MLI) blankets, which limit the efficiency and accuracy of the model. A particle filter, one of the sequential data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method requires a large number of ensemble computations, which demand substantial computational power. Recently, General-Purpose computing on Graphics Processing Units (GPGPU) has attracted attention in high performance computing. GPGPU is therefore applied here to increase the computational speed of the thermal analysis used in the particle filter. This paper presents the resulting speed-up as well as the method of applying GPGPU.

  13. On the Compliance of Simbol-X Mirror Roughness with its Effective Area Requirements

    NASA Astrophysics Data System (ADS)

    Spiga, D.; Basso, S.; Cotroneo, V.; Pareschi, G.; Tagliaferri, G.

    2009-05-01

    Surface microroughness of the X-ray mirrors is a key issue for the angular resolution of Simbol-X to comply with the requirement (<20 arcsec at 30 keV). The maximum tolerable microroughness for the Simbol-X mirrors, in order to satisfy the required imaging capability, has already been derived in terms of its PSD (Power Spectral Density). However, the effective area of the telescope is also affected by mirror roughness. In this work we show how the expected effective area of the Simbol-X mirror module can be computed from the roughness PSD tolerance, checking its compliance with the requirements.
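
    As a back-of-the-envelope companion to the PSD-based treatment, the scalar Debye-Waller factor below shows why roughness erodes specular throughput, and hence effective area, at high energy; the roughness, grazing angle, and energy are illustrative numbers only.

        import numpy as np

        def debye_waller(sigma_nm, graze_deg, energy_keV):
            # specular reflectivity reduction exp[-(4*pi*sigma*sin(theta)/lambda)^2]
            lam_nm = 1.23984 / energy_keV                      # photon wavelength in nm
            q = 4 * np.pi * sigma_nm * np.sin(np.radians(graze_deg)) / lam_nm
            return np.exp(-q ** 2)

        # e.g. 0.4 nm rms roughness, 0.2 deg grazing incidence, 30 keV
        print(round(debye_waller(0.4, 0.2, 30.0), 3))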

  14. ExM:System Support for Extreme-Scale, Many-Task Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S

    The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem solving methods and application classes. In this document, we report on combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at the University of Chicago, over the first 8 months (through April 30, 2011).

  15. Open solutions to distributed control in ground tracking stations

    NASA Technical Reports Server (NTRS)

    Heuser, William Randy

    1994-01-01

    The advent of high speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large scale computing systems. However, the communications provided by the local area network are only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network. Selecting services and protocols that do not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. However, there are two solutions based on 'open' standards. The question that must be answered is 'which one is the best one for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the model and services provided by the five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements for the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.

  16. Power limits for microbial life.

    PubMed

    LaRowe, Douglas E; Amend, Jan P

    2015-01-01

    To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm^-3) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ~10^-12 and 10^-16 W cm^-3. The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm^3 can be well captured using a maintenance power of 190 zW cell^-1, two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell^-1, with most values under ~300 zW cell^-1. Furthermore, our analysis indicates that the absolute minimum power requirement for a single cell to remain viable is on the order of 1 zW cell^-1.
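
    The bookkeeping behind such estimates reduces to multiplying a volumetric degradation rate by the Gibbs energy of the reaction and dividing by the cell density; the sketch below uses placeholder numbers of roughly the right order, not the Site U1370 data.

        # volumetric power supply = POC degradation rate x Gibbs energy per mole of C
        rate_mol_per_cm3_s = 1.0e-21      # mol C cm^-3 s^-1 (assumed)
        delta_G_J_per_mol = 5.0e5         # J mol^-1 released by aerobic oxidation (assumed)
        cells_per_cm3 = 1.0e3             # cell density (assumed)

        power_W_per_cm3 = rate_mol_per_cm3_s * delta_G_J_per_mol
        power_zW_per_cell = power_W_per_cm3 / cells_per_cm3 / 1.0e-21   # 1 zW = 1e-21 W

        print(power_W_per_cm3, power_zW_per_cell)   # 5e-16 W cm^-3 and 500 zW per cell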

  17. Using CAS to Solve a Mathematics Task: A Deconstruction

    ERIC Educational Resources Information Center

    Berger, Margot

    2010-01-01

    I investigate how and whether a heterogeneous group of first-year university mathematics students in South Africa harness the potential power of a computer algebra system (CAS) when doing a specific mathematics task. In order to do this, I develop a framework for deconstructing a mathematics task requiring the use of CAS, into its primary…

  18. RoMPS concept review automatic control of space robot, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.

  19. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code on a local prototype platform, and then transitioning this code, with its associated environment requirements, into an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
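
    The band-arithmetic core of products such as NDVI and NDMI is small and embarrassingly parallel, which is what makes it a natural candidate for bulk cloud processing; a minimal sketch with synthetic reflectance tiles is shown below.

        import numpy as np

        def ndvi(nir, red):
            # Normalized Difference Vegetation Index
            return (nir - red) / np.clip(nir + red, 1e-6, None)

        def ndmi(nir, swir):
            # Normalized Difference Moisture Index
            return (nir - swir) / np.clip(nir + swir, 1e-6, None)

        # synthetic reflectance tiles standing in for the stacked multi-band imagery
        rng = np.random.default_rng(3)
        nir, red, swir = (rng.uniform(0.05, 0.6, (512, 512)) for _ in range(3))
        print(float(ndvi(nir, red).mean()), float(ndmi(nir, swir).mean()))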

  20. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit depths and richer chroma subsampling patterns such as YUV 4:2:2 or 4:4:4. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques, such as processing at the independent-macroblock, sub-block, and pixel-row levels, are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times to cater to different performance needs; the DFM serves the data required by the configured number of DFUs and also manages all the neighboring data required for their subsequent processing. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.

  1. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System.

    PubMed

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-06-27

    Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to better accuracy on the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
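
    One common way to refine a frequency estimate from just three DFT samples is Jacobsen's interpolator around the peak bin; the sketch below is a generic illustration of that idea, not necessarily the exact estimator used in the paper.

        import numpy as np

        def refined_frequency(x, fs):
            # refine a tone's frequency beyond the DFT bin spacing using the peak bin
            # and its two neighbours (Jacobsen's three-sample estimator, no window)
            X = np.fft.rfft(x)
            k = int(np.argmax(np.abs(X[1:-1]))) + 1
            delta = np.real((X[k - 1] - X[k + 1]) / (2 * X[k] - X[k - 1] - X[k + 1]))
            return (k + delta) * fs / len(x)

        fs, n = 5000.0, 4096
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 151.3 * t) + 0.01 * np.random.default_rng(4).standard_normal(n)
        print(round(refined_frequency(x, fs), 2))     # close to 151.3 Hz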

  2. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System

    PubMed Central

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-01-01

    Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to better accuracy on the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946

  3. Evolutionary growth for Space Station Freedom electrical power system

    NASA Technical Reports Server (NTRS)

    Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike

    1989-01-01

    Over an operational lifetime of at least 30 yr, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is being designed with the flexibility to accommodate these emerging technologies and expert systems, and with the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; however, for growth to 300 kW, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints on the initial Space Station systems, such as guidance, navigation, and control; external thermal; truss structural stiffness; and computational capability and storage, which must be planned in from the start to facilitate the addition of the solar dynamic modules.

  4. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102

  5. On the impact of approximate computation in an analog DeSTIN architecture.

    PubMed

    Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar

    2014-05-01

    Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.

  6. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of classical state estimation. In such a way, state estimation can continue to support the necessary control actions to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
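
    The event-triggering idea can be sketched with a scalar toy estimator that runs a full measurement update only when the normalized innovation is large; the threshold, noise levels, and single-state model below are illustrative assumptions, not the cited estimators.

        import numpy as np

        rng = np.random.default_rng(5)
        n_steps, q, r, threshold = 500, 1e-4, 1e-2, 3.0
        x_true, x_hat, p = 1.0, 1.0, 0.01         # true state, estimate, estimate variance
        n_triggered = 0

        for _ in range(n_steps):
            x_true += np.sqrt(q) * rng.standard_normal()        # state drifts slowly
            z = x_true + np.sqrt(r) * rng.standard_normal()     # noisy measurement
            p += q                                              # time update of the variance
            innovation = z - x_hat
            if innovation ** 2 / (p + r) > threshold:           # event trigger
                gain = p / (p + r)
                x_hat += gain * innovation                      # measurement update
                p *= (1 - gain)
                n_triggered += 1

        print(n_triggered, "of", n_steps, "steps triggered a full update")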

  7. Resonant UPS topologies for the emerging hybrid fiber-coaxial networks

    NASA Astrophysics Data System (ADS)

    Pinheiro, Humberto

    Uninterruptible power supply (UPS) systems have been extensively applied to feed critical loads in many areas. Typical examples of critical loads include life-support equipment, computers and telecommunication systems. Although all UPS systems have the common purpose of providing continuous power to critical loads, the emerging hybrid fiber-coaxial networks have created the need for specific types of UPS topologies. For example, galvanic isolation for the load and the battery, small size, high input power factor, and trapezoidal output voltage waveforms are among the required features of UPS topologies for hybrid fiber-coaxial networks. None of the conventional UPS topologies meet all these requirements. Consequently, this thesis is directed towards the design and analysis of UPS topologies for this new application. Novel UPS topologies are proposed and control techniques are developed to allow operation at high switching frequencies without penalizing the converter efficiency. By the use of resonant converters in the proposed UPS topologies, a high input power factor is achieved without requiring a dedicated power factor correction stage. In addition, a self-sustained oscillation control method is proposed to ensure soft switching under all operating conditions. A detailed analytical treatment of the resonant converters in the proposed UPS topologies is presented and design procedures illustrated. Simulation and experimental results are presented to validate the analyses and to demonstrate the feasibility of the proposed schemes.

  8. Communication Simulations for Power System Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, Jason C.; Ciraci, Selim; Daily, Jeffrey A.

    2013-05-29

    New smart grid technologies and concepts, such as dynamic pricing, demand response, dynamic state estimation, and wide area monitoring, protection, and control, are expected to require considerable communication resources. As the cost of retrofit can be high, future power grids will require the integration of high-speed, secure connections with legacy communication systems, while still providing adequate system control and security. While considerable work has been performed to create co-simulators for the power domain with load models and market operations, limited work has been performed in integrating communications directly into a power domain solver. The simulation of communication and power systems will become more important as the two systems become more inter-related. This paper will discuss ongoing work at Pacific Northwest National Laboratory to create a flexible, high-speed power and communication system co-simulator for smart grid applications. The framework for the software will be described, including architecture considerations for modular, high performance computing and large-scale scalability (serialization, load balancing, partitioning, cross-platform support, etc.). The current simulator supports the ns-3 (telecommunications) and GridLAB-D (distribution systems) simulators. Ongoing and future work will be described, including planned future expansions for a traditional transmission solver. A test case using the co-simulator, utilizing a transactive demand response system created for the Olympic Peninsula and AEP gridSMART demonstrations, requiring two-way communication between distributed and centralized market devices, will be used to demonstrate the value and intended purpose of the co-simulation environment.

  9. Potential efficiencies of open- and closed-cycle CO, supersonic, electric-discharge lasers

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1976-01-01

    Computed open- and closed-cycle system efficiencies (laser power output divided by electrical power input) are presented for a CW carbon monoxide, supersonic, electric-discharge laser. Closed-system results include the compressor power required to overcome stagnation pressure losses due to supersonic heat addition and a supersonic diffuser. The paper shows the effect on the system efficiencies of varying several important parameters. These parameters include: gas mixture, gas temperature, gas total temperature, gas density, total discharge energy loading, discharge efficiency, saturated gain coefficient, optical cavity size and location with respect to the discharge, and supersonic diffuser efficiency. Maximum open-cycle efficiency of 80-90% is predicted; the best closed-cycle result is 60-70%.

  10. Optical information processing at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Bualat, Maria G.; Cho, Young C.; Downie, John D.; Gary, Charles K.; Ma, Paul W.; Ozcan, Meric; Pryor, Anna H.; Spirkovska, Lilly

    1993-01-01

    The combination of analog optical processors with digital electronic systems offers the potential of tera-OPS computational performance, while often requiring less power and weight relative to all-digital systems. NASA is working to develop and demonstrate optical processing techniques for on-board, real time science and mission applications. Current research areas and applications under investigation include optical matrix processing for space structure vibration control and the analysis of Space Shuttle Main Engine plume spectra, optical correlation-based autonomous vision for robotic vehicles, analog computation for robotic path planning, free-space optical interconnections for information transfer within digital electronic computers, and multiplexed arrays of fiber optic interferometric sensors for acoustic and vibration measurements.

  11. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.

  12. I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison

    NASA Technical Reports Server (NTRS)

    Somawardhana, Ruwan

    2011-01-01

    CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex, though this comes at the cost of increased computing requirements. Parallel processing coupled with appropriate hardware can minimize computation time. Users of the Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences arise when changing software packages; we are looking for consistency in results.

  13. Initial values for the integration scheme to compute the eigenvalues for propagation in ducts

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1977-01-01

    A scheme for the calculation of eigenvalues in the problem of acoustic propagation in a two-dimensional duct is described. The computation method involves changing the coupled transcendental nonlinear algebraic equations into an initial value problem involving a nonlinear ordinary differential equation. The simplest approach is to use as initial values the hardwall eigenvalues and to integrate away from these values as the admittance varies from zero to its actual value with a linear variation. The approach leads to a powerful root finding routine capable of computing the transverse and axial wave numbers for two-dimensional ducts for any frequency, lining, admittance and Mach number without requiring initial guesses or starting points.
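
    A sketch of the continuation idea, under an eigenvalue relation assumed here for illustration, is given below. For a 2-D duct with one lined wall the transverse wavenumber kappa is taken to satisfy F(kappa, eta) = kappa*tan(kappa) - i*k*eta = 0, whose hard-wall (eta = 0) roots are kappa = n*pi; differentiating F = 0 along a linear path in the admittance eta turns root finding into an initial value problem with no initial guess beyond the hard-wall value.

        import numpy as np

        def kappa_for_admittance(eta_target, k=5.0, mode=1, steps=200):
            # d(kappa)/d(eta) = -(dF/deta)/(dF/dkappa) = i*k / (tan(kappa) + kappa/cos(kappa)^2)
            def rhs(kappa):
                return 1j * k / (np.tan(kappa) + kappa / np.cos(kappa) ** 2)

            kappa = complex(mode * np.pi)      # hard-wall eigenvalue as the initial value
            h = eta_target / steps             # linear variation of the admittance
            for _ in range(steps):             # classical RK4 along the admittance path
                k1 = rhs(kappa)
                k2 = rhs(kappa + 0.5 * h * k1)
                k3 = rhs(kappa + 0.5 * h * k2)
                k4 = rhs(kappa + h * k3)
                kappa += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            return kappa

        kap = kappa_for_admittance(0.3 + 0.2j)
        residual = kap * np.tan(kap) - 1j * 5.0 * (0.3 + 0.2j)
        print(kap, abs(residual))              # residual should be near zero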

  14. NOSTOS: A Paper–Based Ubiquitous Computing Healthcare Environment to Support Data Capture and Collaboration

    PubMed Central

    Bång, Magnus; Larsson, Anders; Eriksson, Henrik

    2003-01-01

    In this paper, we present a new approach to clinical workplace computerization that departs from the window–based user interface paradigm. NOSTOS is an experimental computer–augmented work environment designed to support data capture and teamwork in an emergency room. NOSTOS combines multiple technologies, such as digital pens, walk–up displays, headsets, a smart desk, and sensors to enhance an existing paper–based practice with computer power. The physical interfaces allow clinicians to retain mobile paper–based collaborative routines and still benefit from computer technology. The requirements for the system were elicited from situated workplace studies. We discuss the advantages and disadvantages of augmenting a paper–based clinical work environment. PMID:14728131

  15. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused due to lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between overhead incurred in achieving reliability and energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of an overhead compared to conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay-product by about 3× for multiplication intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
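
    A toy illustration of the residue-number-system property the approach relies on is given below: arithmetic proceeds independently in each residue channel, and with two redundant moduli a single corrupted residue can be located because the reconstruction that drops it lands inside the legal dynamic range. This is a didactic sketch, not the paper's microarchitecture.

        from math import prod
        from itertools import combinations

        MODULI = (7, 11, 13, 15, 16)           # pairwise coprime; 15 and 16 are redundant
        LEGAL_RANGE = 7 * 11 * 13              # dynamic range of the information moduli

        def encode(x):
            return tuple(x % m for m in MODULI)

        def crt(residues, moduli):
            # Chinese-remainder reconstruction from a subset of residues
            M = prod(moduli)
            x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
            return x % M

        def correct(residues):
            # drop one residue at a time; keep a reconstruction that is in legal range
            for idx in combinations(range(len(MODULI)), len(MODULI) - 1):
                value = crt([residues[i] for i in idx], [MODULI[i] for i in idx])
                if value < LEGAL_RANGE:
                    return value
            raise ValueError("uncorrectable error pattern")

        a, b = 123, 456
        sum_res = tuple((ra + rb) % m for ra, rb, m in zip(encode(a), encode(b), MODULI))
        corrupted = list(sum_res)
        corrupted[2] = (corrupted[2] + 5) % MODULI[2]      # inject a single-channel error
        print(correct(corrupted) == a + b)                 # True: the error is corrected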

  16. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused due to lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between overhead incurred in achieving reliability and energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of an overhead compared to conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay-product by about 3× for multiplication intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.

  17. Harnessing the power of emerging petascale platforms

    NASA Astrophysics Data System (ADS)

    Mellor-Crummey, John

    2007-07-01

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratory. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50^3 domain.

  18. CFD research and systems in Kawasaki Heavy Industries and its future prospects

    NASA Astrophysics Data System (ADS)

    Hiraoka, Koichi

    1990-09-01

    The KHI Computational Fluid Dynamics (CFD) system is composed of a VP100 computer and 2-D and 3-D Euler and/or Navier-Stokes (NS) analysis software. For KHI, this system has become a very powerful aerodynamic tool together with the Kawasaki 1 m Transonic Wind Tunnel. The 2-D Euler/NS software, developed in-house, is fully automated, requires no special skill, and was successfully applied to the design of the YXX high-lift devices and the SST supersonic inlet, among others. The 3-D Euler/NS software, developed under joint research with NAL, has an interactively operated multi-block grid generator and can effectively generate grids around complex airplane shapes. Due to main memory size limitations, 3-D analyses of relatively simple shapes, such as the SST wing-body, were computed in-house on the VP100, while more detailed 3-D analyses, such as those of ASUKA and HOPE, were computed under KHI-NAL joint research on the NAL VP400, which is 10 times more powerful than the VP100. These analysis results correlate very well with experimental results. However, the present CFD system is less productive than the wind tunnel and has limits to its applicability.

  19. Transient Heat Conduction Simulation around Microprocessor Die

    NASA Astrophysics Data System (ADS)

    Nishi, Koji

    This paper explains the fundamental formula for calculating the power consumption of CMOS (Complementary Metal-Oxide-Semiconductor) devices and its voltage and temperature dependence, then introduces an equation for estimating the power consumption of a microprocessor for a notebook PC (Personal Computer). The equation is applied to a heat conduction simulation with a simplified thermal model and evaluated with sub-millisecond time-step calculations. In addition, the microprocessor has two major heat conduction paths: one from the top of the silicon die via the thermal solution, and the other from the package substrate and pins via the PGA (Pin Grid Array) socket. Even though the former path is the dominant factor in heat conduction, the latter path, from the package substrate and pins, plays an important role in transient heat conduction behavior. Therefore, this paper focuses on the path from the package substrate and pins and investigates a more accurate method of estimating the heat conduction paths of the microprocessor. Also, the expression of heatsink-fan cooling performance is one of the key points for obtaining results with practical accuracy, while a finer expression requires more computational resources and thus longer computation times. This paper therefore discusses an expression that minimizes the computational workload while keeping the result practically accurate.
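
    In its textbook form, the CMOS power model referred to above is dynamic switching power, proportional to the switched capacitance, the square of the supply voltage, and the clock frequency, plus leakage power, which grows with supply voltage and roughly exponentially with die temperature. The C sketch below encodes that generic form; all coefficients are illustrative assumptions, not the paper's calibrated notebook-CPU equation.

        /* Generic CMOS power model with illustrative coefficients only -- not
         * the paper's calibrated equation.  Dynamic power scales as
         * C_eff * Vdd^2 * f; leakage grows with Vdd and (roughly
         * exponentially) with die temperature. */
        #include <math.h>
        #include <stdio.h>

        typedef struct {
            double c_eff;      /* switched capacitance per cycle [F], assumed */
            double i_leak_ref; /* leakage current at vdd_ref, t_ref [A], assumed */
            double vdd_ref;    /* reference supply voltage [V] */
            double t_ref;      /* reference die temperature [degC] */
            double k_v;        /* leakage sensitivity to Vdd [1/V], assumed */
            double k_t;        /* leakage sensitivity to temperature [1/K], assumed */
        } cpu_power_model;

        /* total power [W] at supply vdd [V], clock f [Hz], die temperature t [degC] */
        static double cpu_power(const cpu_power_model *m, double vdd, double f, double t) {
            double p_dyn  = m->c_eff * vdd * vdd * f;
            double i_leak = m->i_leak_ref * exp(m->k_v * (vdd - m->vdd_ref))
                                          * exp(m->k_t * (t   - m->t_ref));
            return p_dyn + vdd * i_leak;
        }

        int main(void) {
            /* every number below is a placeholder for illustration */
            cpu_power_model m = {20e-9, 0.5, 1.0, 50.0, 3.0, 0.03};
            printf("P at 1.0 V, 2 GHz, 50 C: %.1f W\n", cpu_power(&m, 1.0, 2.0e9, 50.0));
            printf("P at 1.1 V, 2 GHz, 90 C: %.1f W\n", cpu_power(&m, 1.1, 2.0e9, 90.0));
            return 0;
        }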

  20. Rapid DNA Amplification Using a Battery-Powered Thin-Film Resistive Thermocycler

    PubMed Central

    Herold, Keith E.; Sergeev, Nikolay; Matviyenko, Andriy; Rasooly, Avraham

    2010-01-01

    A prototype handheld, compact, rapid thermocycler was developed for multiplex analysis of nucleic acids in an inexpensive, portable configuration. Instead of the commonly used Peltier heating/cooling element, an electric thin-film resistive heater and a miniature fan enable rapid heating and cooling of glass capillaries, leading to a simple, low-cost Thin-Film Resistive Thermocycler (TFRT). Computer-based pulse-width-modulation control yields heating rates of 6–7 K/s and cooling rates of 5 K/s. The four capillaries are closely coupled to the heater, resulting in low power consumption. The energy required by a nominal PCR cycle (20 s at each temperature) was found to be 57 ± 2 J, yielding an average power of approximately 1.0 W (not including the computer and the control system). Thus the device can be powered by a standard 9 V alkaline battery (or other 9 V power supply). The prototype TFRT was demonstrated (in a benchtop configuration) for detection of three important food pathogens (E. coli ETEC, Shigella dysenteriae, and Salmonella enterica). PCR amplicons were analyzed by gel electrophoresis. The 35-cycle PCR protocol using a single channel was completed in less than 18 min. Simple and efficient heating/cooling, low cost, rapid amplification, and low power consumption make the device suitable for portable DNA amplification applications, including clinical point-of-care diagnostics and field use. PMID:19159110
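
    The reported 57 J per three-temperature cycle (roughly 60 s of hold time) is consistent with the quoted ~1.0 W average power. The C sketch below shows, in generic form, how a pulse-width-modulated resistive heater can be driven by a proportional controller against a crude first-order thermal model; it is not the authors' control software, and every constant in it is an illustrative assumption.

        /* Generic sketch of PWM temperature control for a resistive heater --
         * not the authors' firmware.  A proportional controller sets the heater
         * duty cycle each control period; the "plant" is a crude lumped thermal
         * model used only to exercise the loop.  All constants are assumed. */
        #include <stdio.h>

        #define DT        0.01   /* control period [s] */
        #define P_HEATER  6.0    /* heater power at full duty [W], assumed */
        #define C_TH      0.8    /* lumped thermal capacitance [J/K], assumed */
        #define G_LOSS    0.05   /* loss conductance to ambient [W/K], assumed */
        #define KP        0.5    /* proportional gain [1/K], assumed */

        int main(void) {
            double t_amb = 25.0, temp = 25.0;
            double setpoints[] = {95.0, 55.0, 72.0};      /* denature, anneal, extend */
            for (int s = 0; s < 3; s++) {
                for (double t = 0.0; t < 20.0; t += DT) { /* 20 s at each temperature */
                    double duty = KP * (setpoints[s] - temp);     /* proportional term */
                    if (duty < 0.0) duty = 0.0;                   /* fan handles cooling */
                    if (duty > 1.0) duty = 1.0;
                    double p_in  = duty * P_HEATER;
                    double p_out = G_LOSS * (temp - t_amb);
                    temp += (p_in - p_out) * DT / C_TH;           /* first-order plant */
                }
                printf("end of step %d: setpoint %.0f C, capillary %.1f C\n",
                       s, setpoints[s], temp);
            }
            return 0;
        }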
