DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Chakraborty, Sudipta; Wang, Dexin
This paper presents a cyber-physical testbed developed to investigate the complex interactions between emerging microgrid technologies such as grid-interactive power sources, control systems, and a wide variety of communication platforms and bandwidths. The cyber-physical testbed consists of three major components for testing and validation: real-time models of a distribution feeder with microgrid assets, integrated into the National Renewable Energy Laboratory's (NREL) power hardware-in-the-loop (PHIL) platform; real-time-capable network-simulator-in-the-loop (NSIL) models; and physical hardware including inverters and a simple system controller. Several load profiles and microgrid configurations were tested to examine the effect on system performance of increasing channel delays and router processing delays in the network simulator. Testing demonstrated that the controller's ability to maintain a target grid import power band was severely diminished with increasing network delays, and laid the foundation for future testing of more complex cyber-physical systems.
Spares Management: Optimizing Hardware Usage for the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Gulbrandsen, K. A.
1999-01-01
The complexity of the Space Shuttle Main Engine (SSME), combined with mounting requirements to reduce operations costs, has increased demands for accurate tracking, maintenance, and projections of SSME assets. The SSME Logistics Team is developing an integrated asset management process. This PC-based tool provides a user-friendly asset database for daily decision making, plus a variable-input hardware usage simulation whose complex logic yields output that addresses essential asset management issues. Cycle times on critical tasks are significantly reduced, and associated costs have decreased as asset data quality and decision-making capability have increased.
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
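The abstract above describes a C++ framework; as a rough illustration of the underlying idea (simulating a pipelined arithmetic algorithm bit-accurately in software before committing it to an HDL), the following sketch models each pipeline stage as a register updated once per simulated clock and masks results to a hardware word width. It is written in Python for brevity, and the 16-bit width, three-stage depth, and operation (multiply, then add one) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (illustrative, not the paper's framework) of simulating a
# 3-stage pipelined arithmetic unit bit-accurately in software.
WIDTH = 16
MASK = (1 << WIDTH) - 1

def clock(regs, a, b):
    """Advance the pipeline one clock; return the value leaving the last stage."""
    out = regs[2]                   # stage 3: output register
    regs[2] = regs[1]               # stage 2 -> stage 3
    regs[1] = (regs[0] + 1) & MASK  # stage 1 -> stage 2: add, wrap at WIDTH bits
    regs[0] = (a * b) & MASK        # input -> stage 1: multiply
    return out

regs = [0, 0, 0]
for a, b in [(3, 4), (5, 6), (7, 8), (0, 0), (0, 0), (0, 0)]:
    print(clock(regs, a, b))        # results emerge after 3 clocks of pipeline latency
```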
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications
Stoppe, Jannis; Drechsler, Rolf
2015-01-01
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC. PMID:25946632
Liu, Weisong; Huang, Zhitao; Wang, Xiang; Sun, Weichao
2017-01-01
In a cognitive radio sensor network (CRSN), wideband spectrum sensing devices that aim to exploit temporarily vacant spectrum intervals as quickly as possible are of great importance. However, the challenge of increasingly high signal frequency and wide bandwidth requires an extremely high sampling rate, which may exceed the front-end bandwidth of today's best analog-to-digital converters (ADCs). The recently proposed architecture called the modulated wideband converter (MWC) is an attractive analog compressed sensing technique that can greatly reduce the sampling rate. However, the MWC has high hardware complexity owing to its parallel channel structure, especially when the number of signals increases. In this paper, we propose a single channel modulated wideband converter (SCMWC) scheme for spectrum sensing of band-limited wide-sense stationary (WSS) signals. With one antenna or sensor, this scheme saves not only sampling rate but also hardware complexity. We then present a new SCMWC-based single-node CR prototype system, on which the spectrum sensing algorithm was tested. Experiments on our hardware prototype show that the proposed architecture leads to successful spectrum sensing, with a total sampling rate and hardware size equal to those of a single MWC channel. PMID:28471410
NASA Astrophysics Data System (ADS)
Vikhlyantsev, O. P.; Generalov, L. N.; Kuryakin, A. V.; Karpov, I. A.; Gurin, N. E.; Tumkin, A. D.; Fil'chagin, S. V.
2017-12-01
A hardware-software complex for measuring the energy and angular distributions of charged particles formed in nuclear reactions is presented. The hardware and software structure of the complex, the basic set of modular nuclear-physics apparatus of a multichannel detecting system based on ΔE-E telescopes of silicon detectors, and the hardware for experimental data collection, storage, and processing are described.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.
Creating an open environment software infrastructure
NASA Technical Reports Server (NTRS)
Jipping, Michael J.
1992-01-01
As the development of complex computer hardware accelerates at an increasing rate, the ability of software to keep pace is essential. The development of software design tools, however, is falling behind the development of hardware for several reasons, the most prominent of which is the lack of a software infrastructure to provide an integrated environment for all parts of a software system. This research was undertaken to provide a basis for addressing this problem by investigating the requirements of open environments.
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
This article considers the process of verification (calibration) of the secondary equipment of oil metering units. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex switches the measuring channels of the controller under verification and the reference channels of the calibrator in accordance with the specified algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors, and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in automatic verification mode (with an open communication protocol) or in semi-automatic verification mode (without it). A distinctive feature of the approach is a universal signal switch operating under software control, which can be configured for various verification (calibration) methods, allowing it to cover the entire range of controllers of metering unit secondary equipment. Automated verification with the help of the hardware and software system shortens the verification time by a factor of 5-10 and increases the reliability of measurements by excluding the influence of the human factor.
NASA Astrophysics Data System (ADS)
Bazdrov, I. I.; Bortkevich, V. S.; Khokhlov, V. N.
2004-10-01
This paper describes a software-hardware complex for inputting into a personal computer the telemetric information obtained by the TRAL KR28, RTS-8, and TRAL K2N telemetry stations. Structural and functional diagrams of the input device and the hardware complex are given, along with results that characterize the features of the input process and selected data from optical measurements of atmospheric radiation.
Software technology insertion: A study of success factors
NASA Technical Reports Server (NTRS)
Lydon, Tom
1990-01-01
Managing software development in large organizations has become increasingly difficult due to growing technical complexity, stricter government standards, a shortage of experienced software engineers, competitive pressure for improved productivity and quality, the need to co-develop hardware and software together, and rapid changes in both hardware and software technology. The 'software factory' approach to software development minimizes risks while maximizing productivity and quality through standardization, automation, and training. In practice, however, this approach is relatively inflexible when adopting new software technologies. The methods that a large multi-project software engineering organization can use to increase the likelihood of successful software technology insertion (STI), especially in a standardized engineering environment, are described.
Tri-state delta modulation system for Space Shuttle digital TV downlink
NASA Technical Reports Server (NTRS)
Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.
1981-01-01
Future requirements for Shuttle Orbiter downlink communication may include transmission of digital video which, in addition to black and white, may also be in either field-sequential or NTSC color format. The use of digitized video could provide for picture privacy at the expense of additional onboard hardware, together with an increased bandwidth due to the digitization process. A general objective for the Space Shuttle application is to develop a digitization technique that is compatible with data rates in the 20-30 Mbps range but still provides good quality pictures. This paper describes a tri-state delta modulation/demodulation (TSDM) technique which is a good compromise between implementation complexity and performance. The unique feature of TSDM is that it provides for efficient run-length encoding of constant-intensity segments of a TV picture. Axiomatix has developed a hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV and field-sequential color. The hardware complexity of this TSDM implementation is summarized in the paper.
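As a simplified illustration of the tri-state idea (not Axiomatix's hardware design; the step size and dead-band threshold below are assumptions), each sample is compared against the decoder's running estimate and encoded as +1, -1, or 0 ("no change"), so constant-intensity picture segments become long runs of the zero symbol that are cheap to run-length encode:

```python
# Simplified tri-state delta modulation sketch (illustrative only; the step
# size and dead-band threshold are assumptions, not the paper's values).
def tsdm_encode(samples, step=4, deadband=2):
    estimate, symbols = 0, []
    for x in samples:
        err = x - estimate
        if err > deadband:
            symbols.append(+1); estimate += step
        elif err < -deadband:
            symbols.append(-1); estimate -= step
        else:
            symbols.append(0)          # constant-intensity run -> zero symbol
    return symbols

def run_length(symbols):
    """Run-length encode the zero runs that dominate flat picture regions."""
    out, run = [], 0
    for s in symbols:
        if s == 0:
            run += 1
        else:
            if run: out.append(('Z', run)); run = 0
            out.append(s)
    if run: out.append(('Z', run))
    return out

line = [0, 0, 10, 10, 10, 10, 50, 50, 50, 0, 0, 0]
print(run_length(tsdm_encode(line)))   # zero runs collapse to ('Z', n) tokens
```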
Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.
2016-01-01
Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware-in-the-loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.
Advanced Booster Liquid Engine Combustion Stability
NASA Technical Reports Server (NTRS)
Tucker, Kevin; Gentz, Steve; Nettles, Mindy
2015-01-01
Combustion instability is a phenomenon in liquid rocket engines caused by complex coupling between the time-varying combustion processes and the fluid dynamics in the combustor. The large pressure oscillations associated with combustion instability often cause significant hardware damage and can be catastrophic. The current combustion stability assessment tools are limited by the level of empiricism in many inputs and embedded models. This limited predictive capability creates significant uncertainty in stability assessments. This large uncertainty then increases hardware development costs due to heavy reliance on expensive and time-consuming testing.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew
2009-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
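The 'Fast Lossless' algorithm itself is specified in JPL's publications; the sketch below only illustrates the general pattern the abstract refers to, adaptive predictive lossless coding: a predictor whose weights adapt sample by sample produces integer residuals that an entropy coder would then compress, and the decoder reverses the same recursion exactly. The filter length, the sign-sign LMS update, the Q8 fixed-point weights, and the use of the previous spectral band as predictor input are assumptions for the example, not the algorithm's actual parameters.

```python
import numpy as np

def adaptive_residuals(band, prev_band, n_taps=3, mu=2):
    """Adaptive prediction sketch (assumed sign-sign LMS, not JPL's exact scheme):
    predict each sample of `band` from neighbouring samples of the previous
    spectral band and emit integer residuals that an entropy coder (e.g.
    Golomb/Rice) would then compress.  Decoding reruns the same recursion."""
    w = np.zeros(n_taps, dtype=np.int64)
    residuals = []
    for i in range(len(band)):
        ctx = np.array([prev_band[max(i - k, 0)] for k in range(n_taps)],
                       dtype=np.int64)
        pred = int(w @ ctx) >> 8                  # Q8 fixed-point weights (assumption)
        err = int(band[i]) - pred                 # losslessly invertible residual
        residuals.append(err)
        w += mu * np.sign(err) * np.sign(ctx)     # sign-sign LMS weight update
    return residuals
```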
Integrated Tools for Future Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Thomas, Randy; Saus, Joseph
2013-01-01
Turbine engines are highly complex mechanical systems that are becoming increasingly dependent on control technologies to achieve system performance and safety metrics. However, the contribution of controls to these measurable system objectives is difficult to quantify due to a lack of tools capable of informing the decision makers. This shortcoming hinders technology insertion in the engine design process. NASA Glenn Research Center is developing a Hardware-in-the-Loop (HIL) platform and analysis tool set that will serve as a focal point for new control technologies, especially those related to the hardware development and integration of distributed engine control. The HIL platform is intended to enable rapid and detailed evaluation of new engine control applications, from conceptual design through hardware development, in order to quantify their impact on engine systems. This paper discusses the complex interactions of the control system, within the context of the larger engine system, and how new control technologies are changing that paradigm. The conceptual design of the new HIL platform is then described as a primary tool to address those interactions and how it will help feed the insertion of new technologies into future engine systems.
The dynamical analysis of modified two-compartment neuron model and FPGA implementation
NASA Astrophysics Data System (ADS)
Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao
2017-10-01
The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels, and more detailed morphologies, and the implementation of biological neural networks is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement a reduced two-compartment model which retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations that can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
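A minimal sketch of the piecewise linearization step the abstract describes (the breakpoints and the sigmoid target function are assumptions, not the paper's neuron equations): a nonlinear gating function is replaced by a small table of line segments so that hardware evaluation needs only comparators, adders, and multipliers.

```python
import numpy as np

# Piecewise-linear approximation sketch: replace a nonlinear gating function
# with a few line segments at fixed breakpoints (both are illustrative choices).
breakpoints = np.array([-6.0, -3.0, -1.0, 1.0, 3.0, 6.0])
f = lambda v: 1.0 / (1.0 + np.exp(-v))           # function to approximate
values = f(breakpoints)
slopes = np.diff(values) / np.diff(breakpoints)  # one slope per segment

def pwl(v):
    """Evaluate the piecewise-linear approximation at v (clamped at the ends)."""
    v = np.clip(v, breakpoints[0], breakpoints[-1])
    i = np.searchsorted(breakpoints, v, side='right') - 1
    i = min(i, len(slopes) - 1)
    return values[i] + slopes[i] * (v - breakpoints[i])

for v in (-2.0, 0.0, 2.5):
    print(v, pwl(v), f(v))   # compare the approximation with the exact sigmoid
```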
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor. PMID:25237900
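A sketch of the kind of joint-coding decision described above (the correlation threshold and the simple difference coding are illustrative assumptions, not the paper's exact rule): when the residuals of two channels are strongly correlated, coding a reference channel plus the inter-channel difference is usually cheaper than coding both channels independently.

```python
import numpy as np

def choose_joint_coding(res_a, res_b, threshold=0.6):
    """Decide between independent and joint (difference) coding of two channels'
    residual signals based on their normalized cross-correlation.  The threshold
    and the use of a plain difference channel are assumptions for illustration."""
    a, b = np.asarray(res_a, float), np.asarray(res_b, float)
    rho = np.corrcoef(a, b)[0, 1]
    if abs(rho) > threshold:
        reference, difference = a, b - a   # entropy-code reference + difference
        return "joint", reference, difference
    return "independent", a, b
```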
NASA Astrophysics Data System (ADS)
Karmazikov, Y. V.; Fainberg, E. M.
2005-06-01
Work with DICOM-compatible equipment integrated into hardware and software systems for medical purposes is considered. The structure of the data acquisition and transformation process is illustrated using the example of the digital radiography and angiography systems included in the DIMOL-IK hardware-software complex. Algorithms for data acquisition and analysis are proposed, and issues of further processing and storage of the acquired data are considered.
COALA-System for Visual Representation of Cryptography Algorithms
ERIC Educational Resources Information Center
Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran
2014-01-01
Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and the software application, and builds on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the design and implementation work, is presented, together with a novel programming framework for the environment, named Graphic-Functional-Components. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other; pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
Reconfigurable Hardware Adapts to Changing Mission Demands
NASA Technical Reports Server (NTRS)
2003-01-01
A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.
Bibliographic Utilities and the Use of Microcomputers in Libraries: Current and Projected Practices.
ERIC Educational Resources Information Center
McAninch, Glen
1986-01-01
Bibliographic utilities are marketing specially designed hardware and software that permit libraries to patch together automated cataloging, acquisitions, serials control, reference, online public catalogs, circulation, interlibrary loan, and administrative functions. The increasing complexity of the technology is making it more difficult to…
Proceedings of the Symposium on Long-Life Hardware for Space
NASA Technical Reports Server (NTRS)
1970-01-01
A two-volume edition of the papers of the symposium is described. It is divided into six sections: parts, materials, management, system testing, component design, and system test. The material presented focuses attention on problems created by the increased complexity of technology and long-term mission requirements.
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective
NASA Technical Reports Server (NTRS)
Reinholtz, Kirk
2008-01-01
This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy; however, more capabilities are now required, primarily in the software, while the same or a higher degree of reliability must be provided. The complexity of designing such an autonomous system is great, particularly while attempting to address the need for increased capabilities and high reliability without increased time or money. The efforts to develop programming models for the new hardware and the integration of the software architecture are highlighted.
Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)
2002-01-01
Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.
Planning and Implementing Technical Services Workstations.
ERIC Educational Resources Information Center
Kaplan, Michael, Ed.
The job of the library cataloger has grown increasingly complex. Catalogers must draw from a vast pool of dynamic information as they handle traditional and new forms of media. Technical Services Workstations (TSWs) provide catalogers the network data, application programs, and standard hardware required to catalog all types of media quickly and…
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
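As a sketch of the general idea of eliminating the explicit matrix inverse in LS channel estimation (the specific factorization below is QR; the paper's factorization and pilot structure are not reproduced here): with a pilot matrix S and received block Y = S H + N, the LS estimate (S^H S)^{-1} S^H Y can instead be obtained by factoring S and solving a triangular system.

```python
import numpy as np

def ls_channel_estimate_qr(S, Y):
    """Least-squares MIMO channel estimate without forming an explicit inverse:
    factor the pilot matrix S = Q R and solve R H = Q^H Y column by column.
    S: (n_pilots x n_tx) pilots, Y: (n_pilots x n_rx) received samples."""
    Q, R = np.linalg.qr(S)                     # reduced QR factorization
    return np.linalg.solve(R, Q.conj().T @ Y)  # back-substitution step

# Illustrative 2x2 MIMO example with random pilots (assumed setup, not the paper's)
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
S = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))
Y = S @ H + 0.01 * rng.standard_normal((8, 2))
print(np.allclose(ls_channel_estimate_qr(S, Y), H, atol=0.1))   # recovers H
```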
Deep Space Gateway - Enabling Missions to Mars
NASA Technical Reports Server (NTRS)
Rucker, Michelle; Connolly, John
2017-01-01
There are many opportunities for commonality between Lunar-vicinity and Mars mission hardware and operations. The best approach is to identify Mars mission risks that can be bought down with testing in the Lunar vicinity, then explore hardware and operational concepts that work for both missions with minimal compromise. The Deep Space Transport will validate the systems and capabilities required to send humans to Mars orbit and return to Earth. The Deep Space Gateway provides a convenient assembly, checkout, and refurbishment location to enable Mars missions. The current deep space transport concept is to fly missions of increasing complexity: a shakedown cruise, a Mars orbital mission, and a Mars surface mission; the Mars surface mission would require additional elements.
Event-driven processing for hardware-efficient neural spike sorting
NASA Astrophysics Data System (ADS)
Liu, Yan; Pereira, João L.; Constandinou, Timothy G.
2018-02-01
Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting accuracies can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
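A minimal sketch of the level-crossing idea applied to an already-sampled trace (the level spacing and event format are assumptions; the paper works with continuous-time encoding): an event is emitted only when the signal crosses the next quantization level up or down, so quiet inter-spike segments generate no data.

```python
def level_crossing_events(samples, delta=8):
    """Convert a uniformly sampled trace into level-crossing events.
    Each event is (sample_index, direction) with direction +1/-1; delta is
    the level spacing in ADC counts (an illustrative assumption)."""
    events = []
    level = round(samples[0] / delta)       # current quantization level
    for n, x in enumerate(samples):
        while x >= (level + 1) * delta:     # upward crossings
            level += 1
            events.append((n, +1))
        while x <= (level - 1) * delta:     # downward crossings
            level -= 1
            events.append((n, -1))
    return events

# The quiet baseline produces no events; the spike produces a short burst of them.
trace = [0, 1, 0, 2, 40, 90, 60, 10, 0, -1, 0]
print(level_crossing_events(trace))
```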
CAMS 2009 PC Co-chairs' Message
NASA Astrophysics Data System (ADS)
Hinze, Annika; Buchanan, George
In the four years since the first CAMS workshop, context awareness has become an increasingly commonplace tool for mobile developers. The limited screen displays of many mobile devices mean that content must be carefully selected to match the user's needs and expectations, and context provides one powerful means of performing such tailoring. Furthermore, the increasing availability of additional hardware sensors has bolstered the use of context. GPS, Near-Field Communication, Bluetooth and WiFi have all been used to sense the general environment and to determine the devices' location. Light and tilt sensors have also been used for everything from tuning simple features, such as the strength of the display lighting, through to complex uses in game control. Context-aware mobile systems are becoming ubiquitous. With this hardware comes the opportunity for "on-board" applications to use location data to provide new services - until recently such systems could only be created with complex and expensive components. Furthermore, the current "mode" of the phone (e.g., silent, meeting, outdoors), contents of the built-in calendar, etc., can all be used to provide a rich context for the user's immediate environment.
Self-Reacting Friction Stir Welding for Aluminum Complex Curvature Applications
NASA Technical Reports Server (NTRS)
Brown, Randy J.; Martin, W.; Schneider, J.; Hartley, P. J.; Russell, Carolyn; Lawless, Kirby; Jones, Chip
2003-01-01
This viewgraph presentation provides an overview of successful research conducted by Lockheed Martin and NASA to develop an advanced self-reacting friction stir technology for complex-curvature aluminum alloys. The research included weld process development for 0.320 inch Al 2219, successful transfer from the 'lab' scale to the production scale tool, and weld quality exceeding strength goals. This process will enable development and implementation of large-scale, complex-geometry hardware fabrication. Topics covered include: weld process development, weld process transfer, and intermediate hardware fabrication.
Robust Fixed-Structure Controller Synthesis
NASA Technical Reports Server (NTRS)
Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)
2000-01-01
The ability to develop an integrated control system design methodology for robust, high-performance controllers satisfying multiple design criteria and real-world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal-complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal-complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.
Hardware problems encountered in solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Cash, M.
1978-01-01
Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. Described are hardware problems, which range from simple to obscure and complex, and their resolution.
Numerical propulsion system simulation
NASA Technical Reports Server (NTRS)
Lytle, John K.; Remaklus, David A.; Nichols, Lester D.
1990-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.
Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware
Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose
2015-01-01
Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839
Movable Ground Based Recovery System for Reuseable Space Flight Hardware
NASA Technical Reports Server (NTRS)
Sarver, George L. (Inventor)
2013-01-01
A reusable space flight launch system is configured to eliminate complex descent and landing systems from the space flight hardware and move them to maneuverable ground based systems. Precision landing of the reusable space flight hardware is enabled using a simple, lightweight aerodynamic device on board the flight hardware, such as a parachute, and one or more translating ground based vehicles, such as a hovercraft, that include active speed, orientation and directional control. The ground based vehicle maneuvers itself into position beneath the descending flight hardware, matching its speed and direction, and captures the flight hardware. The ground based vehicle contains propulsion, command and GN&C functionality as well as cushioning and retaining hardware for the landing of the space flight hardware. The ground based vehicle propulsion system enables longitudinal and transverse maneuverability independent of its physical heading.
Wen, Gen; Wang, Chun-Yang; Chai, Yi-Min; Cheng, Liang; Chen, Ming; Yi-Min, L V
2013-11-01
A complex wound with exposed hardware and infection is one of the common complications after internal fixation of a tibia fracture. The salvage of hardware and reconstruction of the soft tissue defect remain challenging. In this report, we present our experience with the use of the distally based saphenous neurocutaneous perforator flap combined with vacuum-assisted closure (VAC) therapy for coverage of soft tissue defects and exposed hardware in the lower extremity with fracture. Between January 2008 and July 2010, seven patients underwent VAC therapy followed by transfer of a reversed saphenous neurocutaneous perforator flap for reconstruction of a wound with exposed hardware around the distal tibia. The sizes of the flaps ranged from 6 × 3 cm to 15 × 6 cm. Six flaps survived completely. Partial necrosis occurred in one patient. There were no other complications at the repair and donor sites. Bone healing was achieved in all patients. In conclusion, the reversed saphenous neurocutaneous perforator flap combined with VAC therapy may be one of the options to cover complex wounds with exposed hardware in the lower extremities.
Development of simulation computer complex specification
NASA Technical Reports Server (NTRS)
1973-01-01
The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included, (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.
Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods
Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.
2016-09-01
Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high throughput applications. Though GPUs consume large amounts of power, their use for high throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.
Use of Field Programmable Gate Array Technology in Future Space Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Tate, Robert
2005-01-01
Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provide the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised via component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.
2012-09-01
Maintenance activities, as this will allow new methods and Operational changes to be made if necessary (i.e., more downtime than originally planned or...increased complexity of military hardware, both new systems and their integration with legacy systems, requires a correspondingly increased expertise in...available Little of that added weight involves weapons or armor that actually is becoming lighter as new technologies and composites are utilized (Task
High-resolution imaging using a wideband MIMO radar system with two distributed arrays.
Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi
2010-05-01
Imaging a fast maneuvering target has been an active research area in past decades. Usually, an array antenna with multiple elements is implemented to avoid the motion compensation involved in inverse synthetic aperture radar (ISAR) imaging. Nevertheless, this comes at a price: the hardware complexity is much higher than that of an ISAR imaging system with only one antenna, which instead relies on complex algorithms. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. Furthermore, the system model, the equivalent array production method, and the imaging procedure are presented. Compared with a classical real aperture radar (RAR) imaging system, an important contribution of our method is that the hardware complexity of the imaging system can be lowered, since many additional virtual array elements can be obtained. Numerical simulations are provided to test our system and imaging method.
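A short sketch of why distributed arrays reduce hardware: under the usual far-field approximation, M transmit and N receive elements behave like a virtual receive array with M*N elements located at the pairwise sums of the physical element positions, so the aperture grows without fielding M*N physical channels. The geometry below is illustrative, not the paper's.

```python
import numpy as np

# Virtual array sketch: 2 transmit + 4 receive elements yield 8 virtual
# elements at the pairwise position sums (far-field approximation).
tx_positions = np.array([0.0, 0.5])               # metres, assumed geometry
rx_positions = np.array([0.0, 1.0, 2.0, 3.0])
virtual = (tx_positions[:, None] + rx_positions[None, :]).ravel()
print(sorted(virtual))   # 8 virtual elements from only 2 + 4 physical ones
```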
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see, including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as showing that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis is compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing in addition to improving RFI detection performance.
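As a rough illustration of the detection principle, the following sketch computes the complex-signal kurtosis of blocks of baseband samples and flags blocks whose kurtosis deviates from the value of 2 expected for circular complex Gaussian noise. The block size, threshold, and synthetic test signal are illustrative assumptions, not parameters of the SMAP or ROACH implementations.

    import numpy as np

    def complex_kurtosis(x):
        # Sample kurtosis of a complex block: E[|x|^4] / E[|x|^2]^2 (about 2 for complex Gaussian noise).
        p = np.abs(x) ** 2
        return np.mean(p ** 2) / np.mean(p) ** 2

    def detect_rfi(x, block=2048, threshold=0.15):
        # Flag blocks whose kurtosis deviates from the Gaussian value of 2.
        n_blocks = len(x) // block
        return np.array([abs(complex_kurtosis(x[i * block:(i + 1) * block]) - 2.0) > threshold
                         for i in range(n_blocks)])

    # Example: complex Gaussian noise with a CW interferer added in the second half.
    rng = np.random.default_rng(0)
    n = 8192
    x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x[n // 2:] += np.exp(2j * np.pi * 0.05 * np.arange(n // 2))
    print(detect_rfi(x))   # the later blocks, which contain the CW tone, should be flagged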
Safe to Fly: Certifying COTS Hardware for Spaceflight
NASA Technical Reports Server (NTRS)
Fichuk, Jessica L.
2011-01-01
Providing hardware for the astronauts to use on board the Space Shuttle or International Space Station (ISS) involves a certification process that entails evaluating hardware safety, weighing risks, providing mitigation, and verifying requirements. Upon completion of this certification process, the hardware is deemed safe to fly. This process from start to finish can be completed as quickly as 1 week or can take several years, depending on the complexity of the hardware and whether the item is a unique custom design. One area of cost and schedule savings that NASA implements is buying Commercial Off the Shelf (COTS) hardware and certifying it for human spaceflight as safe to fly. By utilizing commercial hardware, NASA saves time not having to develop, design and build the hardware from scratch, as well as a timesaving in the certification process. By utilizing COTS hardware, the current detailed certification process can be simplified which results in schedule savings. Cost savings is another important benefit of flying COTS hardware. Procuring COTS hardware for space use can be more economical than custom building the hardware. This paper will investigate the cost savings associated with certifying COTS hardware to NASA's standards rather than performing a custom build.
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. This study is directed towards monochrome (black and white) picture communication with special emphasis on feasibility of hardware implementation. The primary factors for such a communication system in space flight application are: picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirement summarized. Simulations of the study were performed on the improved LIM video controller which is computer-controlled by the META-4 CPU.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
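For readers unfamiliar with the three-phase method, the accelerated kernel is essentially a chain of matrix products, illuminance = V T D s, applied over many sky vectors (timesteps). The sketch below shows only the structure of that computation in plain numpy with made-up matrix dimensions; it does not reproduce the paper's OpenCL implementation.

    import numpy as np

    # Hypothetical dimensions: sensor points, BSDF patches, daylight patches, sky patches, timesteps.
    n_sens, n_bsdf, n_day, n_sky, n_hours = 100, 145, 145, 146, 8760

    rng = np.random.default_rng(1)
    V = rng.random((n_sens, n_bsdf))    # view matrix
    T = rng.random((n_bsdf, n_day))     # BSDF transmission matrix
    D = rng.random((n_day, n_sky))      # daylight matrix
    S = rng.random((n_sky, n_hours))    # one sky vector per timestep

    # Three-phase result: illuminance at every sensor point for every timestep.
    VTD = V @ T @ D                     # constant matrices multiplied once
    E = VTD @ S                         # the per-timestep bulk of the work
    print(E.shape)                      # (n_sens, n_hours)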
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
Parallel approach in RDF query processing
NASA Astrophysics Data System (ADS)
Vajgl, Marek; Parenica, Jan
2017-07-01
The parallel approach is nowadays a very cheap way to increase computational power, owing to the availability of multithreaded computational units. Such hardware has become a typical part of today's personal computers and notebooks and is widespread. This contribution presents experiments on how the evaluation of the computationally complex algorithm of inference over RDF data can be parallelized on graphics cards to decrease computation time.
Sensitivity study and parameter optimization of OCD tool for 14nm finFET process
NASA Astrophysics Data System (ADS)
Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping
2016-03-01
Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes in the technology node of 90 nm and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore, sensitivity analysis and parameter optimization are very critical in OCD applications. This paper presents a method for seeking the optimum sensitive measurement configuration to enhance the metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra with a series of hardware configurations of incidence angles and azimuth angles was investigated. The optimum hardware measurement configuration and spectrum parameter can be identified. FinFET structures in the technology node of 14 nm were constructed to validate the algorithm. This method provides guidance for estimating the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.
Automating quantum experiment control
NASA Astrophysics Data System (ADS)
Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.
2017-03-01
The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
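The sketch below conveys the flavor of a counts-based adaptive ensemble strategy on a toy one-dimensional double-well system: short trajectories are launched in rounds, visited configurations are binned (a crude stand-in for dimensionality reduction), and the next round restarts from the least-visited states. It is only a cartoon under those stated assumptions, not the authors' actual method.

    import numpy as np

    rng = np.random.default_rng(2)

    def short_trajectory(x0, n_steps=200, beta=2.0, dt=0.01):
        # Toy overdamped Langevin dynamics on the double-well potential U(x) = (x^2 - 1)^2.
        x = np.empty(n_steps)
        x[0] = x0
        for i in range(1, n_steps):
            force = -4.0 * x[i - 1] * (x[i - 1] ** 2 - 1.0)
            x[i] = x[i - 1] + force * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        return x

    edges = np.linspace(-2.0, 2.0, 41)            # 40 bins play the role of discovered states
    frames = [np.array([-1.0])]                   # all walkers initially start in the left basin

    for _ in range(20):
        visited = np.concatenate(frames)
        idx = np.clip(np.digitize(visited, edges) - 1, 0, 39)
        counts = np.bincount(idx, minlength=40)
        occupied = np.where(counts > 0)[0]
        least = occupied[np.argsort(counts[occupied])[:4]]         # 4 least-sampled states
        starts = [visited[idx == b][0] for b in least]             # one restart point per state
        frames += [short_trajectory(x0) for x0 in starts]          # next round of short walkers

    visited = np.concatenate(frames)
    counts = np.bincount(np.clip(np.digitize(visited, edges) - 1, 0, 39), minlength=40)
    print("states visited:", int((counts > 0).sum()), "of 40")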
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10^-2 for logic) that cause computational bit error rates as high as 50%.
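To make the fault model concrete, the sketch below injects independent SRAM-style bit flips into fixed-point feature words and measures how a trivial nearest-centroid classifier degrades; the data, classifier, and 16-bit quantization format are invented stand-ins, and the paper's data-driven error compensation itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)

    def to_fixed(x, frac=8):
        return np.clip(np.round(x * (1 << frac)), 0, 0xFFFF).astype(np.uint16)

    def inject_bit_errors(q, p, bits=16):
        # Flip each stored bit independently with probability p (SRAM bit-cell faults).
        flips = rng.random((q.size, bits)) < p
        masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(np.uint16)
        return q ^ masks.reshape(q.shape)

    # Synthetic two-class "feature vectors" and a nearest-centroid classifier.
    X = np.vstack([rng.normal(1.0, 0.5, (500, 8)), rng.normal(2.0, 0.5, (500, 8))])
    y = np.r_[np.zeros(500), np.ones(500)]
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

    for p in [0.0, 1e-3, 1e-1]:
        Xe = inject_bit_errors(to_fixed(X), p).astype(float) / (1 << 8)
        pred = (np.linalg.norm(Xe - c1, axis=1) < np.linalg.norm(Xe - c0, axis=1)).astype(float)
        print(f"bit-cell fault rate {p}: accuracy {np.mean(pred == y):.3f}")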
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
2012-01-01
Background Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive, and requires complex numerical operations and large memory resources. Substantial hardware resources are therefore needed for hardware implementations of PCA. General Hebbian algorithm (GHA) has been proposed for calculating PCs of neuronal spikes in our previous work, which eliminates the need for computationally expensive covariance analysis and eigenvalue decomposition in conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. The large memory consumes substantial hardware resources and contributes significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction of large hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resources and power consumption of hardware implementations, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed for evaluating the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memories achieved by the streaming computing. Results and discussion Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% logic resources, 95.1% power consumption and 86.8% computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% memory resources and 67% power consumption can be saved when compared with the direct implementation of GHA. Conclusion The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
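For intuition about the streaming idea, the sketch below applies the generalized Hebbian update one spike at a time, so no buffer of aligned spikes is ever stored, and then compares the learned components against batch PCA on synthetic spike-shaped waveforms. The waveforms, learning rate, and iteration counts are illustrative assumptions, not the paper's clinical data or its hardware design.

    import numpy as np

    rng = np.random.default_rng(4)

    def gha_update(W, x, lr=0.005):
        # One Generalized Hebbian Algorithm step: dW = lr * (y x^T - tril(y y^T) W), with y = W x.
        y = W @ x
        return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    # Two spike-shaped basis waveforms, 32 samples each (stand-ins for real templates).
    t = np.arange(32)
    shapes = np.vstack([np.exp(-(t - 10) ** 2 / 8.0), -np.exp(-(t - 18) ** 2 / 20.0)])

    def next_spike():
        # Random mixture of the two shapes plus noise, delivered one vector at a time.
        return np.array([1.0, 0.5]) * rng.standard_normal(2) @ shapes + 0.05 * rng.standard_normal(32)

    W = rng.normal(scale=0.1, size=(2, 32))       # two principal components, trained streaming
    for _ in range(20000):
        W = gha_update(W, next_spike())

    # Reference: batch PCA (SVD) over a large set of the same spikes.
    batch = np.array([next_spike() for _ in range(5000)])
    _, _, Vt = np.linalg.svd(batch, full_matrices=False)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    print(np.round(np.abs(Wn @ Vt[:2].T), 2))     # approximately the identity: same principal subspace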
Realtime multi-plot graphics system
NASA Technical Reports Server (NTRS)
Shipkowski, Michael S.
1990-01-01
The increased complexity of test operations and customer requirements at Langley Research Center's National Transonic Facility (NTF) surpassed the capabilities of the initial realtime graphics system. The analysis of existing hardware and software and the enhancements made to develop a new realtime graphics system are described. The result of this effort is a cost effective system, based on hardware already in place, that supports high-speed, high-resolution generation and display of multiple realtime plots. The enhanced graphics system (EGS) meets the current and foreseeable future realtime graphics requirements of the NTF. While this system was developed to support wind tunnel operations, the overall design and capability of the system are applicable to other realtime data acquisition systems that have realtime plot requirements.
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition, real-time image processing, etc. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for achieving flexible logic control of the input channel and a RS-485 transceiver for local network communication. Because the design employs the high performance-price-ratio DSP TMS320C6712 and a large FLASH, this system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially in the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. In the paper, some key technologies are also introduced which make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a suitable platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The result reveals that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
Next-generation digital camera integration and software development issues
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Peters, Ken; Hecht, Richard
1998-04-01
This paper investigates the complexities associated with the development of next generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single chip camera solutions in the market including the Motorola MPC 823 and LSI DCAM-101. Real time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating systems. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.
Investigating an API for resilient exascale computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stearley, Jon R.; Tomkins, James; VanDyke, John P.
2013-05-01
Increased HPC capability comes with increased complexity, part counts, and fault occurrences. Increasing the resilience of systems and applications to faults is a critical requirement facing the viability of exascale systems, as the overhead of traditional checkpoint/restart is projected to outweigh its benefits due to fault rates outpacing I/O bandwidths. As faults occur and propagate throughout hardware and software layers, pervasive notification and handling mechanisms are necessary. This report describes an initial investigation of fault types and programming interfaces to mitigate them. Proof-of-concept APIs are presented for the frequent and important cases of memory errors and node failures, and a strategy proposed for filesystem failures. These involve changes to the operating system, runtime, I/O library, and application layers. While a single API for fault handling among hardware and OS and application system-wide remains elusive, the effort increased our understanding of both the mountainous challenges and the promising trailheads.
Development of structural model of adaptive training complex in ergatic systems for professional use
NASA Astrophysics Data System (ADS)
Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.
2018-03-01
The article considers the structural model of the adaptive training complex (ATC), which reflects the interrelations between the hardware, software and mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction and their functioning within the common system is given. The article also gives a brief description of the mathematical models of personnel activity, the technical system and influences, the interactions of which formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them will make it possible to realize a practical implementation of ATC in ergatic systems for professional use.
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.
2015-05-01
This work aimed to study a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the use of complex techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing the computational complexity. The proposed method of saliency map detection is based on both image-space and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
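The abstract does not spell out the algorithm, but a well-known computationally simple frequency-domain saliency technique is the spectral-residual approach; the sketch below shows that style of computation purely for illustration, on a toy image, and makes no claim that it matches the authors' method.

    import numpy as np

    def spectral_residual_saliency(img, blur=3):
        # Saliency from the "spectral residual": log-amplitude spectrum minus its local average.
        f = np.fft.fft2(img)
        log_amp, phase = np.log(np.abs(f) + 1e-9), np.angle(f)
        pad = blur // 2
        padded = np.pad(log_amp, pad, mode="edge")
        avg = np.zeros_like(log_amp)
        for dy in range(blur):
            for dx in range(blur):
                avg += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        residual = log_amp - avg / (blur * blur)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return sal / sal.max()            # output smoothing omitted for brevity

    # Toy example: flat background with one small bright square as the "salient" object.
    img = np.zeros((64, 64))
    img[20:26, 40:46] = 1.0
    sal = spectral_residual_saliency(img)
    print("peak saliency at", np.unravel_index(np.argmax(sal), sal.shape))   # expected near the square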
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.
1990-01-01
The Environmental Control and Life Support System (ECLSS) is a Freedom Station distributed system with inherent applicability to advanced automation primarily due to the comparatively large reaction times of its subsystem processes. This allows longer contemplation times in which to form a more intelligent control strategy and to detect or prevent faults. The objective of the ECLSS Advanced Automation Project is to reduce the flight and ground manpower needed to support the initial and evolutionary ECLS system. The approach is to search out and make apparent those processes in the baseline system which are in need of more automatic control and fault detection strategies, to influence the ECLSS design by suggesting software hooks and hardware scars which will allow easy adaptation to advanced algorithms, and to develop complex software prototypes which fit into the ECLSS software architecture and will be shown in an ECLSS hardware testbed to increase the autonomy of the system. Covered here are the preliminary investigation and evaluation process, aimed at searching the ECLSS for candidate functions for automation and providing a software hooks and hardware scars analysis. This analysis shows changes needed in the baselined system for easy accommodation of knowledge-based or other complex implementations which, when integrated in flight or ground sustaining engineering architectures, will produce a more autonomous and fault tolerant Environmental Control and Life Support System.
Open-Source Wax RepRap 3-D Printer for Rapid Prototyping Paper-Based Microfluidics.
Pearce, J M; Anzalone, N C; Heldt, C L
2016-08-01
The open-source release of self-replicating rapid prototypers (RepRaps) has created a rich opportunity for low-cost distributed digital fabrication of complex 3-D objects such as scientific equipment. For example, 3-D printable reactionware devices offer the opportunity to combine open hardware microfluidic handling with lab-on-a-chip reactionware to radically reduce costs and increase the number and complexity of microfluidic applications. To further drive down the cost while improving the performance of lab-on-a-chip paper-based microfluidic prototyping, this study reports on the development of a RepRap upgrade capable of converting a Prusa Mendel RepRap into a wax 3-D printer for paper-based microfluidic applications. An open-source hardware approach is used to demonstrate a 3-D printable upgrade for the 3-D printer, which combines a heated syringe pump with the RepRap/Arduino 3-D control. The bill of materials, designs, basic assembly, and use instructions are provided, along with a completely free and open-source software tool chain. The open-source hardware device described here accelerates the potential of the nascent field of electrochemical detection combined with paper-based microfluidics by dropping the marginal cost of prototyping to nearly zero while accelerating the turnover between paper-based microfluidic designs. © 2016 Society for Laboratory Automation and Screening.
The regulatory software of cellular metabolism.
Segrè, Daniel
2004-06-01
Understanding the regulation of metabolic pathways in the cell is like unraveling the 'software' that is running on the 'hardware' of the metabolic network. Transcriptional regulation of enzymes is an important component of this software. A recent systematic analysis of metabolic gene-expression data in Saccharomyces cerevisiae reveals a complex modular organization of co-expressed genes, which could increase our ability to understand and engineer cellular metabolic functions.
Training Scalable Restricted Boltzmann Machines Using a Quantum Annealer
NASA Astrophysics Data System (ADS)
Kumar, V.; Bass, G.; Dulny, J., III
2016-12-01
Machine learning and the optimization involved therein is of critical importance for commercial and military applications. Due to the computational complexity of many-variable optimization, the conventional approach is to employ meta-heuristic techniques to find suboptimal solutions. Quantum Annealing (QA) hardware offers a completely novel approach with the potential to obtain significantly better solutions with large speed-ups compared to traditional computing. In this presentation, we describe our development of new machine learning algorithms tailored for QA hardware. We are training restricted Boltzmann machines (RBMs) using QA hardware on large, high-dimensional commercial datasets. Traditional optimization heuristics such as contrastive divergence and other closely related techniques are slow to converge, especially on large datasets. Recent studies have indicated that QA hardware when used as a sampler provides better training performance compared to conventional approaches. Most of these studies have been limited to moderately-sized datasets due to the hardware restrictions imposed by existing QA devices, which make it difficult to solve real-world problems at scale. In this work we develop novel strategies to circumvent this issue. We discuss scale-up techniques such as enhanced embedding and partitioned RBMs which allow large commercial datasets to be learned using QA hardware. We present our initial results obtained by training an RBM as an autoencoder on an image dataset. The results obtained so far indicate that the convergence rates can be improved significantly by increasing RBM network connectivity. These ideas can be readily applied to generalized Boltzmann machines and we are currently investigating this in an ongoing project.
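To make the role of the sampler explicit, the sketch below trains a tiny RBM with a contrastive-divergence-style update in which the negative-phase sampler is a pluggable function; a Gibbs sampler is used here, but samples drawn from QA hardware could in principle be substituted. The dataset, layer sizes, and hyperparameters are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(5)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def gibbs_sampler(v0, W, b, c, steps=1):
        # Negative-phase sampler; a quantum annealer could play this role instead.
        v = v0
        for _ in range(steps):
            h = (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)
            v = (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)
        return v, sigmoid(v @ W + c)

    def train_rbm(data, n_hidden=16, epochs=50, lr=0.05, sampler=gibbs_sampler):
        W = rng.normal(scale=0.01, size=(data.shape[1], n_hidden))
        b, c = np.zeros(data.shape[1]), np.zeros(n_hidden)
        for _ in range(epochs):
            for v in data:
                h_prob = sigmoid(v @ W + c)             # positive phase (data statistics)
                v_neg, h_neg = sampler(v, W, b, c)      # negative phase (model samples)
                W += lr * (np.outer(v, h_prob) - np.outer(v_neg, h_neg))
                b += lr * (v - v_neg)
                c += lr * (h_prob - h_neg)
        return W, b, c

    # Tiny synthetic binary dataset: noisy copies of two 8-bit patterns.
    patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
    data = np.abs(patterns[rng.integers(2, size=200)] - (rng.random((200, 8)) < 0.05))
    W, b, c = train_rbm(data)
    v, _ = gibbs_sampler(data[0], W, b, c, steps=100)
    print("model sample:", v.astype(int))   # typically close to one of the two patterns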
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared in performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
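A minimal sketch of the same general idea, under invented numbers: fit a log-linear cost model on a few driver variables (weight, inheritance, technical complexity) and report a prediction interval for a new subsystem. The coefficients, data, and approximate t-quantile are hypothetical, not NAFCOM values.

    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical historical subsystems: weight (kg), inheritance (0-1), technical complexity (1-5).
    n = 40
    weight = rng.uniform(10, 500, n)
    inherit = rng.uniform(0, 1, n)
    cplx = rng.integers(1, 6, n).astype(float)
    cost = 2.0 * weight ** 0.7 * np.exp(-0.8 * inherit + 0.3 * cplx) * np.exp(rng.normal(0, 0.2, n))

    # Log-linear regression: log(cost) = b0 + b1*log(weight) + b2*inherit + b3*complexity.
    X = np.column_stack([np.ones(n), np.log(weight), inherit, cplx])
    y = np.log(cost)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])

    # 95% prediction interval for a new subsystem (t-quantile for 36 dof approximated as 2.03).
    x_new = np.array([1.0, np.log(120.0), 0.25, 4.0])
    se_pred = np.sqrt(s2 + x_new @ (s2 * np.linalg.inv(X.T @ X)) @ x_new)
    lo, hi = np.exp(x_new @ beta - 2.03 * se_pred), np.exp(x_new @ beta + 2.03 * se_pred)
    print(f"predicted cost {np.exp(x_new @ beta):.1f}, 95% PI [{lo:.1f}, {hi:.1f}] (arbitrary units)")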
Formal hardware verification of digital circuits
NASA Technical Reports Server (NTRS)
Joyce, J.; Seger, C.-J.
1991-01-01
The use of formal methods to verify the correctness of digital circuits is less constrained by the growing complexity of digital circuits than conventional methods based on exhaustive simulation. This paper briefly outlines three main approaches to formal hardware verification: symbolic simulation, state machine analysis, and theorem-proving.
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
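As a sketch of the idea (not the authors' exact table layout or quantization), the check-node core operation on two LLRs, a min/sign main term plus a correction factor, can be precomputed over all quantized input pairs so that at run time a single 2D table lookup replaces the adders:

    import numpy as np

    Q_STEP, Q_MAX = 0.25, 7.75            # illustrative uniform LLR quantization

    def boxplus_exact(a, b):
        # Sum-product check-node core operation: main term plus correction factor.
        main = np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
        corr = np.log1p(np.exp(-np.abs(a + b))) - np.log1p(np.exp(-np.abs(a - b)))
        return main + corr

    levels = np.arange(-Q_MAX, Q_MAX + Q_STEP, Q_STEP)
    lut = boxplus_exact(levels[:, None], levels[None, :])     # the whole operation as one 2D table

    def quantize(x):
        return np.clip(np.round((x + Q_MAX) / Q_STEP).astype(int), 0, len(levels) - 1)

    a, b = 1.3, -2.6
    print(f"exact {boxplus_exact(a, b):+.3f}   2D LUT {lut[quantize(a), quantize(b)]:+.3f}   table {lut.shape}")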
Using Modern Design Tools for Digital Avionics Development
NASA Technical Reports Server (NTRS)
Hyde, David W.; Lakin, David R., II; Asquith, Thomas E.
2000-01-01
Shrinking development time and the increased complexity of new avionics force the designer to use modern tools and methods during hardware development. Engineers at the Marshall Space Flight Center have successfully upgraded their design flow and used it to develop a Mongoose V based radiation tolerant processor board for the International Space Station's Water Recovery System. The design flow, based on hardware description languages, simulation, synthesis, hardware models, and full functional software model libraries, allowed designers to fully simulate the processor board from reset through initialization before any boards were built. The fidelity of a digital simulation is limited to the accuracy of the models used and how realistically the designer drives the circuit's inputs during simulation. By using the actual silicon during simulation, device modeling errors are reduced. Numerous design flaws were discovered early in the design phase when they could be easily fixed. The use of hardware models and actual MIPS software loaded into full functional memory models also provided checkout of the software development environment. This paper will describe the design flow used to develop the processor board and give examples of errors that were found using the tools. An overview of the processor board firmware will also be covered.
Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.
Anzt, H; Quintana-Ortí, E S
2014-06-28
While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draft and energy spent for this purpose is increasingly becoming a limiting factor to this trend. In this paper, we provide an overview of the current status in energy-efficient scientific computing by reviewing different technologies used to monitor power draft as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
A methodology of SiP testing based on boundary scan
NASA Astrophysics Data System (ADS)
Qin, He; Quan, Haiyang; Han, Yifei; Zhu, Tianrui; Zheng, Tuo
2017-10-01
System in Package (SiP) plays an important role in portable, aerospace and military electronics, offering microminiaturization, light weight, high density, and high reliability. At present, SiP system testing has encountered problems of system complexity and malfunction location as the system scale increases exponentially. For SiP systems, this paper proposes a testing methodology and testing process based on boundary scan technology. Combining the characteristics of SiP systems with the boundary scan theory of PCB circuits and embedded core test, a specific testing methodology and process has been proposed. The hardware requirements of the SiP system under test are provided, and the hardware platform for the testing has been constructed. The testing methodology has the advantages of high test efficiency and accurate malfunction location.
New technologies for supporting real-time on-board software development
NASA Astrophysics Data System (ADS)
Kerridge, D.
1995-03-01
The next generation of on-board data management systems will be significantly more complex than current designs, and will be required to perform more complex and demanding tasks in software. Improved hardware technology, in the form of the MA31750 radiation hard processor, is one key component in addressing the needs of future embedded systems. However, to complement these hardware advances, improved support for the design and implementation of real-time data management software is now needed. This will help to control the cost and risk associated with developing data management software as it becomes an increasingly significant element within embedded systems. One particular problem with developing embedded software is managing the non-functional requirements in a systematic way. This paper identifies how Logica has exploited recent developments in hard real-time theory to address this problem through the use of new hard real-time analysis and design methods which can be supported by specialized tools. The first stage in transferring this technology from the research domain to industrial application has already been completed. The MA31750 Hard Real-Time Embedded Software Support Environment (HESSE) is a loosely integrated set of hardware and software tools which directly support the process of hard real-time analysis for software targeting the MA31750 processor. With further development, this HESSE promises to provide embedded system developers with software tools which can reduce the risks associated with developing complex hard real-time software. Supported in this way by more sophisticated software methods and tools, it is foreseen that MA31750 based embedded systems can meet the processing needs for the next generation of on-board data management systems.
Atlas : A library for numerical weather prediction and climate modelling
NASA Astrophysics Data System (ADS)
Deconinck, Willem; Bauer, Peter; Diamantakis, Michail; Hamrud, Mats; Kühnlein, Christian; Maciel, Pedro; Mengaldo, Gianmarco; Quintino, Tiago; Raoult, Baudouin; Smolarkiewicz, Piotr K.; Wedi, Nils P.
2017-11-01
The algorithms underlying numerical weather prediction (NWP) and climate models that have been developed in the past few decades face an increasing challenge caused by the paradigm shift imposed by hardware vendors towards more energy-efficient devices. In order to provide a sustainable path to exascale High Performance Computing (HPC), applications become increasingly restricted by energy consumption. As a result, the emerging diverse and complex hardware solutions have a large impact on the programming models traditionally used in NWP software, triggering a rethink of design choices for future massively parallel software frameworks. In this paper, we present Atlas, a new software library that is currently being developed at the European Centre for Medium-Range Weather Forecasts (ECMWF), with the scope of handling data structures required for NWP applications in a flexible and massively parallel way. Atlas provides a versatile framework for the future development of efficient NWP and climate applications on emerging HPC architectures. The applications range from full Earth system models, to specific tools required for post-processing weather forecast products. The Atlas library thus constitutes a step towards affordable exascale high-performance simulations by providing the necessary abstractions that facilitate the application in heterogeneous HPC environments by promoting the co-design of NWP algorithms with the underlying hardware.
Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool
NASA Astrophysics Data System (ADS)
Gazis, P. R.; Levit, C.; Way, M. J.
2010-12-01
Scientific data sets continue to increase in both size and complexity. In the past, dedicated graphics systems at supercomputing centers were required to visualize large data sets, but as the price of commodity graphics hardware has dropped and its capability has increased, it is now possible, in principle, to view large complex data sets on a single workstation. To do this in practice, an investigator will need software that is written to take advantage of the relevant graphics hardware. The Viewpoints visualization package described herein is an example of such software. Viewpoints is an interactive tool for exploratory visual analysis of large high-dimensional (multivariate) data. It leverages the capabilities of modern graphics boards (GPUs) to run on a single workstation or laptop. Viewpoints is minimalist: it attempts to do a small set of useful things very well (or at least very quickly) in comparison with similar packages today. Its basic feature set includes linked scatter plots with brushing, dynamic histograms, normalization, and outlier detection/removal. Viewpoints was originally designed for astrophysicists, but it has since been used in a variety of fields that range from astronomy, quantum chemistry, fluid dynamics, machine learning, bioinformatics, and finance to information technology server log mining. In this article, we describe the Viewpoints package and show examples of its usage.
Software and hardware complex for research and management of the separation process
NASA Astrophysics Data System (ADS)
Borisov, A. P.
2018-01-01
The article is devoted to the development of a program for studying the operation of an asynchronous electric drive using vector-algorithmic switching of windings, as well as the development of a hardware-software complex for controlling parameters and the rotation speed of an asynchronous electric drive in order to investigate the operation of a cyclone. To study the operation of the asynchronous electric drive, a method was used in which the average value of flux linkage is found, and a method for the vector-algorithmic calculation of the power and electromagnetic moment of an asynchronous electric drive fed from a single-phase network with vector-algorithmic commutation was developed, together with software for calculating the parameters. The software part of the complex makes it possible to regulate the rotation speed of the motor by vector-algorithmic switching of the transistors or, using pulse-width modulation (PWM), to set any motor speed. Sensors are also connected to the hardware-software complex at the inlet and outlet of the cyclone. The developed cyclone with the integrated complex achieves high product-separation efficiency at various inlet speeds. The cyclone's maximum efficiency is achieved at an inlet air speed of 18 m/s; for this, the asynchronous electric drive must run at a frequency of 45 Hz.
NASA Astrophysics Data System (ADS)
Ismail, K.; Muharam, A.; Amin; Widodo Budi, S.
2015-12-01
Inverters are widely used for industrial, office, and residential purposes. Inverters support the development of alternative energy sources such as solar cells, wind turbines and fuel cells by converting dc voltage to ac voltage. Inverters have been built with a variety of hardware and software combinations, such as pure analog circuits and various types of microcontrollers as the controller. When a pure analog circuit is used, modification is difficult because it requires changing the entire hardware. In a microcontroller-based inverter design (with software), the calculations that generate the AC modulation are done in the microcontroller. This increases programming complexity and the amount of code downloaded to the microcontroller chip (the flash memory capacity of the microcontroller is limited). This paper discusses the design of a single phase inverter using unipolar modulation of a sine wave and a triangular wave, which is done outside the microcontroller using a data processing application (Microsoft Excel). Results show that programming complexity was reduced and that the sampling resolution strongly influences THD; the sampling must be taken at half-degree steps to obtain the best THD (15.8%).
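The switching table itself can be generated offline by comparing the sine reference and its inverse against a triangular carrier, which is the unipolar scheme described above (the paper does this in Excel; numpy is used here instead). The 50 Hz reference, 2 kHz carrier, sampling rate, and modulation index below are illustrative assumptions.

    import numpy as np

    f_ref, f_carrier, f_sample = 50.0, 2000.0, 200_000.0    # Hz (illustrative values)
    m = 0.8                                                  # modulation index

    t = np.arange(0, 1.0 / f_ref, 1.0 / f_sample)            # one fundamental period
    ref = m * np.sin(2 * np.pi * f_ref * t)
    carrier = 2.0 * np.abs(2.0 * ((t * f_carrier) % 1.0) - 1.0) - 1.0   # triangle in [-1, 1]

    leg_a = (ref >= carrier).astype(int)      # leg A compares the reference with the carrier
    leg_b = (-ref >= carrier).astype(int)     # leg B compares the inverted reference (unipolar scheme)
    v_out = leg_a - leg_b                     # output levels -1, 0, +1

    # This per-sample table is what would be exported and simply played back by the microcontroller.
    np.savetxt("unipolar_pwm_table.csv", np.column_stack([t, v_out]),
               delimiter=",", header="time_s,switch_state", comments="")
    print("samples per period:", len(v_out), "duty of +1 state:", round((v_out == 1).mean(), 3))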
The Impact of Flight Hardware Scavenging on Space Logistics
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.
2011-01-01
For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS hardware and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on a FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
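For reference, a straightforward (non-optimized) max-log soft demapper for Gray-mapped 16-QAM is shown below; it computes each bit LLR from exhaustive symbol distances, which is the computation that low-complexity hardware demapping algorithms approximate. The sign convention and constellation labeling are assumptions for illustration, not taken from the paper.

    import numpy as np

    # Gray-mapped 16-QAM: bits (b0 b1) select the I level, (b2 b3) the Q level.
    GRAY_2BIT = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
    CONST = {bi + bq: (i_lvl + 1j * q_lvl) / np.sqrt(10)      # unit average symbol energy
             for bi, i_lvl in GRAY_2BIT.items() for bq, q_lvl in GRAY_2BIT.items()}

    def maxlog_llrs(y, noise_var):
        # Max-log LLR per bit: (min distance over symbols with bit=1 - min over bit=0) / noise_var,
        # so a positive LLR favors bit 0 and a negative LLR favors bit 1.
        llrs = []
        for k in range(4):
            d0 = min(abs(y - s) ** 2 for bits, s in CONST.items() if bits[k] == 0)
            d1 = min(abs(y - s) ** 2 for bits, s in CONST.items() if bits[k] == 1)
            llrs.append((d1 - d0) / noise_var)
        return llrs

    tx_bits = (1, 0, 1, 1)
    rng = np.random.default_rng(7)
    y = CONST[tx_bits] + 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
    print([f"{l:+.1f}" for l in maxlog_llrs(y, noise_var=0.005)])   # signs reflect the transmitted bits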
Modeling and Simulation of the Economics of Mining in the Bitcoin Market.
Cocco, Luisanna; Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain hashing on his computer's central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU's generation. They are the GPU, FPGA and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of the Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
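A back-of-the-envelope version of the per-miner economics such a model tracks (expected block-reward revenue from a miner's share of the network hash rate, minus electricity and hardware amortization) is sketched below; every number is an invented example, not a value from the paper or from real market data.

    def daily_mining_profit(hash_rate_ths, network_hash_ehs, block_reward_btc, btc_price_usd,
                            power_w, electricity_usd_kwh, hardware_cost_usd, amortization_days):
        blocks_per_day = 144                                   # roughly one block every 10 minutes
        share = hash_rate_ths / (network_hash_ehs * 1e6)       # miner's fraction of the network (1 EH/s = 1e6 TH/s)
        revenue = share * blocks_per_day * block_reward_btc * btc_price_usd
        electricity = power_w / 1000.0 * 24.0 * electricity_usd_kwh
        amortization = hardware_cost_usd / amortization_days
        return revenue - electricity - amortization

    print(f"daily profit: {daily_mining_profit(100.0, 0.5, 25.0, 600.0, 1500.0, 0.10, 2000.0, 365):.2f} USD")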
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
In the field of microwave radiometry, Radio Frequency Interference (RFI) consistently degrades the value of scientific results. Through the use of digital receivers and signal processing, the effects of RFI on scientific measurements can be reduced under certain circumstances. As technology allows us to implement wider band digital receivers for radiometry, the problem of RFI mitigation changes. Our work focuses on finding a detector that outperforms the real kurtosis in wide band scenarios. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The performance of both the complex and real signal kurtosis is evaluated for continuous wave, pulsed continuous wave, and wide band quadrature phase shift keying (QPSK) modulations. The use of complex signal kurtosis increased the detectability of interference.
Crossing the chasm: how to develop weather and climate models for next generation computers?
NASA Astrophysics Data System (ADS)
Lawrence, Bryan N.; Rezny, Michael; Budich, Reinhard; Bauer, Peter; Behrens, Jörg; Carter, Mick; Deconinck, Willem; Ford, Rupert; Maynard, Christopher; Mullerworth, Steven; Osuna, Carlos; Porter, Andrew; Serradell, Kim; Valcke, Sophie; Wedi, Nils; Wilson, Simon
2018-05-01
Weather and climate models are complex pieces of software which include many individual components, each of which is evolving under pressure to exploit advances in computing to enhance some combination of a range of possible improvements (higher spatio-temporal resolution, increased fidelity in terms of resolved processes, more quantification of uncertainty, etc.). However, after many years of a relatively stable computing environment with little choice in processing architecture or programming paradigm (basically X86 processors using MPI for parallelism), the existing menu of processor choices includes significant diversity, and more is on the horizon. This computational diversity, coupled with ever increasing software complexity, leads to the very real possibility that weather and climate modelling will arrive at a chasm which will separate scientific aspiration from our ability to develop and/or rapidly adapt codes to the available hardware. In this paper we review the hardware and software trends which are leading us towards this chasm, before describing current progress in addressing some of the tools which we may be able to use to bridge the chasm. This brief introduction to current tools and plans is followed by a discussion outlining the scientific requirements for quality model codes which have satisfactory performance and portability, while simultaneously supporting productive scientific evolution. We assert that the existing method of incremental model improvements employing small steps which adjust to the changing hardware environment is likely to be inadequate for crossing the chasm between aspiration and hardware at a satisfactory pace, in part because institutions cannot have all the relevant expertise in house. Instead, we outline a methodology based on large community efforts in engineering and standardisation, which will depend on identifying a taxonomy of key activities - perhaps based on existing efforts to develop domain-specific languages, identify common patterns in weather and climate codes, and develop community approaches to commonly needed tools and libraries - and then collaboratively building up those key components. Such a collaborative approach will depend on institutions, projects, and individuals adopting new interdependencies and ways of working.
FPGA Based Reconfigurable ATM Switch Test Bed
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Jones, Robert E.
1998-01-01
Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) Network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: Advances in FPGAs make hardware emulation feasible for performance evaluation; hardware emulation can provide several orders of magnitude speed-up over software simulation; and, due to the complexity of the hardware synthesis process, development for emulation is much more difficult than for simulation and requires knowledge of both networks and digital design.
Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix
2017-01-01
Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989
Area-delay trade-offs of texture decompressors for a graphics processing unit
NASA Astrophysics Data System (ADS)
Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa
2011-05-01
Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on implementation details of the hardware architecture that is behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study on the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of decompressors and thus determine suitability of algorithms for systems with limited hardware resources.
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing a real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
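To make the signal chain concrete, here is a minimal sketch of intra-field DPCM with a previous-pixel (non-adaptive) predictor and a small non-uniform quantizer. The quantizer table, predictor seed, and one-dimensional scanline processing are illustrative assumptions, and the multilevel Huffman coding stage that follows in the NASA design is omitted.

```python
import numpy as np

# Hypothetical non-uniform quantizer: fine levels near zero, coarse for large errors.
LEVELS = np.array([-60, -28, -12, -4, 0, 4, 12, 28, 60], dtype=np.int32)

def quantize(err):
    """Map a prediction error to the index of the nearest reconstruction level."""
    return int(np.argmin(np.abs(LEVELS - err)))

def dpcm_encode(line):
    """DPCM-encode one scanline with a previous-pixel (non-adaptive) predictor."""
    indices = []
    prev = 128                                            # assumed predictor seed
    for pixel in line.astype(np.int32):
        idx = quantize(pixel - prev)
        indices.append(idx)
        prev = int(np.clip(prev + LEVELS[idx], 0, 255))   # track decoder state to avoid drift
    return indices

def dpcm_decode(indices):
    prev, out = 128, []
    for idx in indices:
        prev = int(np.clip(prev + LEVELS[idx], 0, 255))
        out.append(prev)
    return np.array(out, dtype=np.uint8)

line = (128 + 40 * np.sin(np.linspace(0, 6, 256))).astype(np.uint8)
recon = dpcm_decode(dpcm_encode(line))
print(np.max(np.abs(recon.astype(int) - line.astype(int))))  # small residual error
```

In a complete CODEC the quantizer indices would then be entropy coded (e.g., by the multilevel Huffman coder mentioned above) to reach the reported average bit rate.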
NASA Astrophysics Data System (ADS)
Nicolaeva, B. K.; Borisov, A. P.; Zlochevskiy, V. L.
2017-08-01
The article is devoted to the development of a hardware-software complex for monitoring and controlling the process of air purification by means of a cyclone-separator. The hardware of this complex is based on the Arduino platform, to which pressure, air velocity, and dust sensors are connected, allowing the main parameters of the cyclone-separator to be monitored. A frequency converter was also developed to regulate the rotation speed of the asynchronous motor that corrects the flow rate; its control signals are supplied by the Arduino. The software part of the complex is written as a web application in JavaScript, with CSS and HTML for the user interface. This application receives data from the sensors, plots the measured dependencies in real time, and controls the rotation speed of the asynchronous electric drive. The conducted experiment shows that the cleaning efficiency is 95-99.9%, while the airflow at the cyclone inlet is 16-18 m/s and at the exit 50-70 m/s.
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential of low-power hardware implementation. We are proposing a feature extraction method, not requiring any calibration, based on first and second derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
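A minimal sketch of the derivative-based feature extraction idea is given below; computing the two difference sequences costs (N-1)+(N-2)=2N-3 subtractions, matching the quoted complexity, while the particular summary statistics chosen as features are an assumption for illustration.

```python
import numpy as np

def derivative_features(spike):
    """First/second-difference features of a spike waveform (no calibration needed).

    The two difference sequences cost (N-1) + (N-2) = 2N-3 subtractions. The
    summary statistics (min/max of each sequence) are an illustrative choice.
    """
    d1 = np.diff(spike)        # first derivative, N-1 samples
    d2 = np.diff(d1)           # second derivative, N-2 samples
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

rng = np.random.default_rng(1)
t = np.arange(64)
spike = np.exp(-(t - 20) ** 2 / 18.0) - 0.5 * np.exp(-(t - 30) ** 2 / 60.0)
print(derivative_features(spike + 0.05 * rng.standard_normal(t.size)))
```

The resulting low-dimensional feature vectors would then be passed to a clustering stage, which is where the trade-off between accuracy and hardware cost quoted above is evaluated.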
Assessment Environment for Complex Systems Software Guide
NASA Technical Reports Server (NTRS)
2013-01-01
This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods
Reliable High Performance Peta- and Exa-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G
2012-04-02
As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally, Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
Exascale Hardware Architectures Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmert, S; Ang, J; Chiang, P
2011-03-15
The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is a clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is relatively immediate, as there is only a small window of opportunity to influence hardware design for 2018 machines. Given the short timeline, a firm co-design methodology with vendors is of prime importance.
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
Brown, Bryan D; Steinert, Justin N; Stelzer, John W; Yoon, Richard S; Langford, Joshua R; Koval, Kenneth J
2017-12-01
Indications for removing orthopedic hardware on an elective basis vary widely. Although viewed as a relatively benign procedure, there is a lack of data regarding overall complication rates after fracture fixation. The purpose of this study is to determine the overall short-term complication rate for elective removal of orthopedic hardware after fracture fixation and to identify associated risk factors. Adult patients indicated for elective hardware removal after fracture fixation between July 2012 and July 2016 were screened for inclusion. Inclusion criteria included patients with hardware-related pain and/or impaired cosmesis with complete medical and radiographic records and at least 3-month follow-up. Patients indicated for hardware removal for a diagnosis of malunion, non-union, and/or infection were excluded. Data collected included patient age, gender, anatomic location of hardware removed, body mass index, ASA score, and comorbidities. Overall complications, as well as complications requiring revision surgery, were recorded. Statistical analysis was performed with SPSS 20.0, and included univariate and multivariate regression analysis. 391 patients (418 procedures) were included for analysis. Overall complication rates were 8.4%, with a 3.6% revision surgery rate. Univariate regression analysis revealed that patients who had liver disease were at significant risk for complication (p=0.001) and revision surgery (p=0.036). Multivariate regression analysis showed that: 1) patients who had liver disease were at significant risk of overall complication (p=0.001) and revision surgery (p=0.039); 2) removal of hardware following fixation for a pilon fracture had significantly increased risk for complication (p=0.012), but not revision surgery (p=0.43); and 3) removal of hardware for pelvic fixation had a significantly increased risk for revision surgery (p=0.017). Removal of hardware following fracture fixation is not a risk-free procedure. Patients with liver disease are at increased risk for complications, including increased risk for needing revision surgery following hardware removal. Patients having hardware removed following fixation for pilon fractures are also at increased risk for complication, although they may not require a return trip to the operating room. Finally, removal of pelvic hardware is associated with a higher return to the operating room. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems
NASA Astrophysics Data System (ADS)
He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.
2018-02-01
Space division multiplexing using few-mode fibers has been extensively explored to sustain the continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from the intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which causes spatial modes de-multiplex even harder. Many research articles have demonstrated that frequency domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which will decrease the overall system efficiency and even degrade the equalizer performance in fast-changing optical channels. Least mean square (LMS) algorithm is always used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed to use signal power spectral density (PSD) dependent method and noise PSD directed method to improve the convergence speed of adaptive frequency domain LMS algorithm. We also proposed frequency domain recursive least square (RLS) algorithm to further increase the convergence speed of MIMO equalizer at cost of greater hardware complexity. In this paper, we will compare the hardware complexity and convergence speed of signal PSD dependent and noise power directed algorithms against the conventional frequency domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods could improve the convergence speed by 48.3% and 36.1% respectively, at cost of 17.2% and 10.7% higher hardware complexity. We will also compare the frequency domain RLS algorithm against conventional frequency domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm could improve the convergence speed by 53.7% over conventional frequency domain LMS algorithm.
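For readers unfamiliar with frequency-domain MIMO equalization, the sketch below shows one training-block update of a per-bin LMS equalizer for an assumed three-mode system; the channel matrix, step size, and block sizes are illustrative, and the PSD-dependent and noise-PSD-directed convergence accelerations described above are not included.

```python
import numpy as np

def fd_lms_update(W, X, D, mu=0.05):
    """One block update of a per-bin frequency-domain LMS MIMO equalizer.

    W : (K, M, M) equalizer matrices, one M x M matrix per frequency bin
    X : (K, M)    received frequency-domain samples for this block
    D : (K, M)    training (desired) symbols in the frequency domain
    mu: step size (assumed value; tuning is system dependent)
    """
    Y = np.einsum('kij,kj->ki', W, X)          # equalizer output per bin
    E = D - Y                                  # error per bin
    W += mu * np.einsum('ki,kj->kij', E, X.conj())   # rank-1 update: W_k += mu * e_k x_k^H
    return W, E

K, M = 64, 3                                   # frequency bins, spatial modes (assumed sizes)
rng = np.random.default_rng(0)
W = np.tile(np.eye(M, dtype=complex), (K, 1, 1))
# Assumed frequency-flat coupling matrix standing in for the mode-mixing channel.
H = np.tile(np.array([[1.0, 0.3, 0.1],
                      [0.2, 1.0, 0.3],
                      [0.1, 0.2, 1.0]], dtype=complex), (K, 1, 1))
for _ in range(300):                           # training blocks
    D = (rng.choice([1, -1], (K, M)) + 1j * rng.choice([1, -1], (K, M))) / np.sqrt(2)
    noise = 0.05 * (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
    X = np.einsum('kij,kj->ki', H, D) + noise
    W, E = fd_lms_update(W, X, D)
print(np.mean(np.abs(E) ** 2))                 # mean-square error on the last block (shrinks as W converges)
```

An RLS variant would replace the scalar step size with a per-bin inverse-correlation update, which is the source of the faster convergence and higher hardware cost compared in the abstract.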
Multi-Mission System Architecture Platform: Design and Verification of the Remote Engineering Unit
NASA Technical Reports Server (NTRS)
Sartori, John
2005-01-01
The Multi-Mission System Architecture Platform (MSAP) represents an effort to bolster efficiency in the spacecraft design process. By incorporating essential spacecraft functionality into a modular, expandable system, the MSAP provides a foundation on which future spacecraft missions can be developed. Once completed, the MSAP will provide support for missions with varying objectives, while maintaining a level of standardization that will minimize redesign of general system components. One subsystem of the MSAP, the Remote Engineering Unit (REU), functions by gathering engineering telemetry from strategic points on the spacecraft and providing these measurements to the spacecraft's Command and Data Handling (C&DH) subsystem. Before the MSAP Project reaches completion, all hardware, including the REU, must be verified. However, the speed and complexity of the REU circuitry rule out the possibility of physical prototyping. Instead, the MSAP hardware is designed and verified using the Verilog Hardware Description Language (HDL). An increasingly popular means of digital design, HDL programming provides a level of abstraction that allows the designer to focus on functionality while logic synthesis tools take care of gate-level design and optimization. As verification of the REU proceeds, errors are quickly remedied, preventing costly changes during hardware validation. After undergoing the careful, iterative processes of verification and validation, the REU and MSAP will prove their readiness for use in a multitude of spacecraft missions.
ESTL tracking and data relay satellite /TDRSS/ simulation system
NASA Technical Reports Server (NTRS)
Kapell, M. H.
1980-01-01
The Tracking Data Relay Satellite System (TDRSS) provides single access forward and return communication links with the Shuttle/Orbiter via S-band and Ku-band frequency bands. The ESTL (Electronic Systems Test Laboratory) at Lyndon B. Johnson Space Center (JSC) utilizes a TDRS satellite simulator and critical TDRS ground hardware for test operations. To accomplish Orbiter/TDRSS relay communications performance testing in the ESTL, a satellite simulator was developed which met the specification requirements of the TDRSS channels utilized by the Orbiter. Actual TDRSS ground hardware unique to the Orbiter communication interfaces was procured from individual vendors, integrated in the ESTL, and interfaced via a data bus for control and status monitoring. This paper discusses the satellite simulation hardware in terms of early development and subsequent modifications. The TDRS ground hardware configuration and the complex computer interface requirements are reviewed. Also, special test hardware such as a radio frequency interference test generator is discussed.
On the writing of programming systems for spacecraft computers.
NASA Technical Reports Server (NTRS)
Mathur, F. P.; Rohr, J. A.
1972-01-01
Consideration of the systems designed to generate programs for the increasingly complex digital computers being used on board unmanned deep-space probes. Such programming systems must accommodate the special-purpose features incorporated in the hardware. The use of higher-level language facilities in the programming system can significantly simplify the task. Computers for Mariner and for the Outer Planets Grand Tour are briefly described, as well as their programming systems. Aspects of the higher level languages are considered.
NASA Astrophysics Data System (ADS)
Sun, Wenhao; Cai, Xudong; Meng, Qiao
2016-04-01
Complex automatic protection functions are being added to the onboard software of the Alpha Magnetic Spectrometer. A hardware-in-the-loop simulation method has been introduced to overcome the difficulties of ground testing that are brought by hardware and environmental limitations. We invented a time-saving approach by reusing the flight data as the data source of the simulation system instead of mathematical models. This is easy to implement and it works efficiently. This paper presents the system framework, implementation details and some application examples.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
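As a toy illustration of the firing rule that underlies data flow simulation (not the paper's modeling framework), the sketch below lets a node fire only when every input edge holds a token, so "hardware" and "software" operations are described uniformly as graph nodes.

```python
from collections import deque

# Minimal homogeneous dataflow sketch: a node fires when every input edge
# holds a token, consuming one token per input and producing one result.
class Node:
    def __init__(self, name, func, inputs):
        self.name, self.func, self.inputs = name, func, inputs
        self.queues = {src: deque() for src in inputs}   # one token queue per input edge

    def ready(self):
        return all(self.queues[src] for src in self.inputs)

    def fire(self):
        args = [self.queues[src].popleft() for src in self.inputs]
        return self.func(*args)

# Graph: two adder nodes (could model hardware units) feeding a comparator node.
add1 = Node('add1', lambda a, b: a + b, ['in0', 'in1'])
add2 = Node('add2', lambda a, b: a + b, ['in2', 'in3'])
cmp_node = Node('cmp', lambda x, y: x >= y, ['add1', 'add2'])

add1.queues['in0'].append(3); add1.queues['in1'].append(4)
add2.queues['in2'].append(5); add2.queues['in3'].append(1)

for node in (add1, add2):
    if node.ready():
        cmp_node.queues[node.name].append(node.fire())
if cmp_node.ready():
    print(cmp_node.fire())   # True: 3+4 >= 5+1
```

The same graph description can later be mapped to either a software schedule or a hardware pipeline, which is the uniformity the abstract emphasizes.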
Using SysML to model complex systems for security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, Lester Arturo
2010-08-01
As security systems integrate more Information Technology, the design of these systems has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are capturing requirements, defining hardware interfaces, defining software interfaces, and integrating technologies such as radio systems, Voice over IP systems, and situational awareness systems.
Modeling and Simulation of the Economics of Mining in the Bitcoin Market
Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the “Bitcoin Blockchain”, creating the first block of the chain by hashing on his computer’s central processing unit (CPU). Since then, the hash calculations required to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some “stylized facts” found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network. PMID:27768691
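A back-of-the-envelope calculation, separate from the paper's agent-based model, shows why hardware generations matter economically: a miner's expected revenue scales with its share of the network hash rate while its cost scales with power draw. All figures below (hash rates, power, price, network hash rate) are assumed round numbers for illustration only.

```python
# Illustrative mining economics, not the paper's model: expected daily revenue
# and electricity cost for one hardware unit under assumed network parameters.
def daily_mining_profit(hashrate, power_w, network_hashrate,
                        blocks_per_day=144, block_reward_btc=12.5,
                        btc_price_usd=600.0, electricity_usd_per_kwh=0.10):
    share = hashrate / network_hashrate                 # fraction of network hash power
    revenue = share * blocks_per_day * block_reward_btc * btc_price_usd
    cost = power_w / 1000.0 * 24.0 * electricity_usd_per_kwh
    return revenue - cost

# Rough, assumed figures for three hardware generations (orders of magnitude only).
generations = {
    'GPU':  dict(hashrate=0.5e9,  power_w=300),   # ~0.5 GH/s
    'FPGA': dict(hashrate=1.0e9,  power_w=80),    # ~1 GH/s
    'ASIC': dict(hashrate=5.0e12, power_w=1200),  # ~5 TH/s
}
for name, hw in generations.items():
    # At the assumed network hash rate, only the ASIC unit stays profitable.
    print(name, round(daily_mining_profit(network_hashrate=1.5e18, **hw), 4))
```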
The evolution of image-guided lumbosacral spine surgery.
Bourgeois, Austin C; Faulkner, Austin R; Pasciak, Alexander S; Bradley, Yong C
2015-04-01
Techniques and approaches of spinal fusion have considerably evolved since their first description in the early 1900s. The incorporation of pedicle screw constructs into lumbosacral spine surgery is among the most significant advances in the field, offering immediate stability and decreased rates of pseudarthrosis compared to previously described methods. However, early studies describing pedicle screw fixation and numerous studies thereafter have demonstrated clinically significant sequelae of inaccurate surgical fusion hardware placement. A number of image guidance systems have been developed to reduce morbidity from hardware malposition in increasingly complex spine surgeries. Advanced image guidance systems such as intraoperative stereotaxis improve the accuracy of pedicle screw placement using a variety of surgical approaches, however their clinical indications and clinical impact remain debated. Beginning with intraoperative fluoroscopy, this article describes the evolution of image guided lumbosacral spinal fusion, emphasizing two-dimensional (2D) and three-dimensional (3D) navigational methods.
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
With the increasing complexity of VLSI designs, especially SoC (System on Chip) implementations of the MPEG-2 video decoder with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove to be very slow and costly, and full verification is difficult to perform until late in the design process. Simulation and verification therefore become a bottleneck in the HDTV video decoder design procedure and strongly influence its time-to-market. In this paper, the hardware/software interface architecture of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to detect and correct errors in the early design stages, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system is achieved by employing several introduced approaches, which speed up the simulation and verification task without decreasing performance.
Resolution-independent surface rendering using programmable graphics hardware
Loop, Charles T.; Blinn, James Frederick
2008-12-16
Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
The complex electronic and avionics systems of today's launch vehicles heavily utilize Field Programmable Gate Array (FPGA) integrated circuits (ICs) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid valve actuations. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Bradish, Martin A.
2011-01-01
The role of synthetic instruments (SIs) for Component-Level Electronic-Assembly Repair (CLEAR) is to provide an external lower-level diagnostic and functional test capability beyond the built-in-test capabilities of spacecraft electronics. Built-in diagnostics can report faults and symptoms, but isolating the root cause and performing corrective action requires specialized instruments. Often a fault can be revealed by emulating the operation of external hardware. This implies complex hardware that is too massive to be accommodated in spacecraft. The SI strategy is aimed at minimizing complexity and mass by employing highly reconfigurable instruments that perform diagnostics and emulate external functions. In effect, SI can synthesize an instrument on demand. The SI architecture section of this document summarizes the result of a recent program diagnostic and test needs assessment based on the International Space Station. The SI architecture addresses operational issues such as minimizing crew time and crew skill level, and the SI data transactions between the crew and supporting ground engineering searching for the root cause and formulating corrective actions. SI technology is described within a teleoperations framework. The remaining sections describe a lab demonstration intended to show that a single SI circuit could synthesize an instrument in hardware and subsequently clear the hardware and synthesize a completely different instrument on demand. An analysis of the capabilities and limitations of commercially available SI hardware and programming tools is included. Future work in SI technology is also described.
Manipulation of volumetric patient data in a distributed virtual reality environment.
Dech, F; Ai, Z; Silverstein, J C
2001-01-01
Due to increases in network speed and bandwidth, distributed exploration of medical data in immersive Virtual Reality (VR) environments is becoming increasingly feasible. The volumetric display of radiological data in such environments presents a unique set of challenges. The sheer size and complexity of the datasets involved not only make them difficult to transmit to remote sites, but these datasets also require extensive user interaction in order to make them understandable to the investigator and manageable to the rendering hardware. A sophisticated VR user interface is required in order for the clinician to focus on the aspects of the data that will provide educational and/or diagnostic insight. We will describe a software system of data acquisition, data display, Tele-Immersion, and data manipulation that supports interactive, collaborative investigation of large radiological datasets. The hardware required in this strategy is still at the high end of the graphics workstation market. Future software ports to Linux and NT, along with the rapid development of PC graphics cards, open the possibility for later work with Linux or NT PCs and PC clusters.
A Virtual Laboratory for Aviation and Airspace Prognostics Research
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Gorospe, George; Teubert, Christ; Quach, Cuong C.; Hogge, Edward; Darafsheh, Kaveh
2017-01-01
Integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other aviation technologies in the airspace is becoming more and more complicated, and will continue to do so in the future. Inclusion of new technology and complexity into the airspace increases the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems and systems of systems can be challenging, expensive, and at times unsafe when implementing real life scenarios. The application of prognostics to aviation and airspace management may produce new tools and insight into these problems. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. In our research, we develop a live, distributed, hardware-in-the-loop Prognostics Virtual Laboratory testbed for aviation and airspace prognostics. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. In our earlier work, we discussed the initial Prognostics Virtual Laboratory testbed development work and related results for milestones 1 & 2. This paper describes the design, development, and testing of the integrated testbed, which is part of milestone 3, along with our next steps for validation of this work. Through a framework consisting of software/hardware modules and associated interface clients, the distributed testbed enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. The testbed modules can be used cohesively to construct complex and relevant airspace scenarios for research. Four modules are key to this research: the virtual aircraft module, which uses the X-Plane simulator and X-PlaneConnect toolbox; the live aircraft module, which connects fielded aircraft using onboard cellular communications devices; the hardware-in-the-loop (HITL) module, which connects laboratory-based bench-top hardware testbeds; and the research module, which contains diagnostics and prognostics tools for analysis of live air traffic situations and vehicle health conditions. The testbed also features other modules for data recording and playback, information visualization, and air traffic generation. Software reliability, safety, and latency are some of the critical design considerations in development of the testbed.
Complex multidisciplinary system composition for aerospace vehicle conceptual design
NASA Astrophysics Data System (ADS)
Gonzalez, Lex
Although there exists a vast amount of work concerning the analysis, design, and integration of aerospace vehicle systems, there is no standard for how this data and knowledge should be combined in order to create a synthesis system. Each institution creating a synthesis system has in-house vehicle and hardware components they are attempting to model and proprietary methods with which to model them. This leads to the fact that synthesis systems begin as one-off creations meant to answer a specific problem. As the scope of the synthesis system grows to encompass more and more problems, so does its size and complexity; in order for a single synthesis system to answer multiple questions, the number of methods and method interfaces must increase. As a means to curtail the requirement that an increase in an aircraft synthesis system's capability leads to an increase in its size and complexity, this research effort focuses on the idea that each problem in aerospace requires its own analysis framework. By focusing on the creation of a methodology which centers on the matching of an analysis framework to the problem being solved, the complexity of the analysis framework is decoupled from the complexity of the system that creates it. The derived methodology allows for the composition of complex multi-disciplinary systems (CMDS) through the automatic creation and implementation of system and disciplinary method interfaces. The CMDS Composition process follows a four-step methodology meant to take a problem definition and progress towards the creation of an analysis framework meant to answer said problem. The unique implementation of the CMDS Composition process takes user-selected disciplinary analysis methods and automatically integrates them together in order to create a syntactically composable analysis framework. As a means of assessing the validity of the CMDS Composition process, a prototype system (AVDDBMS) has been developed. AVDDBMS has been used to model the Generic Hypersonic Vehicle (GHV), an open source family of hypersonic vehicles originating from the Air Force Research Laboratory. AVDDBMS has been applied in three different ways in order to assess its validity: verification using GHV disciplinary data, validation using selected disciplinary analysis methods, and application of the CMDS Composition process to assess the design solution space for the GHV hardware. The research demonstrates the holistic effect that the selection of individual disciplinary analysis methods has on the structure and integration of the analysis framework.
Evolutionary Development of the Simulation by Logical Modeling System (SIBYL)
NASA Technical Reports Server (NTRS)
Wu, Helen
1995-01-01
Through the evolutionary development of the Simulation by Logical Modeling System (SIBYL), we have re-engineered the expensive and complex IBM mainframe-based Long-term Hardware Projection Model (LHPM) into a robust, cost-effective computer-based model that is easy to use. We achieved significant cost reductions and improved productivity in preparing long-term forecasts of Space Shuttle Main Engine (SSME) hardware. The LHPM for the SSME is a stochastic simulation model that projects hardware requirements over 10 years. SIBYL is now the primary modeling tool for developing SSME logistics proposals and the Program Operating Plan (POP) for NASA, as well as for divisional marketing studies.
International space station wire program
NASA Technical Reports Server (NTRS)
May, Todd
1995-01-01
Hardware provider wire systems and current wire insulation issues for the International Space Station (ISS) program are discussed in this viewgraph presentation. Wire insulation issues include silicone wire contamination, Tefzel cold temperature flexibility, and Russian polyimide wire insulation. ISS is a complex program with hardware developed and managed by many countries and hundreds of contractors. Most of the obvious wire insulation issues are known by contractors and have been precluded by proper selection.
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Field Programmable Gate Array (FPGA) integrated circuits (ICs) are among the key electronic components in the complex avionic systems of today's sophisticated launch and space vehicles, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and short design cycles. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
Implantable electronics: emerging design issues and an ultra light-weight security solution.
Narasimhan, Seetharam; Wang, Xinmu; Bhunia, Swarup
2010-01-01
Implantable systems that monitor biological signals require increasingly complex digital signal processing (DSP) electronics for real-time in-situ analysis and compression of the recorded signals. While it is well-known that such signal processing hardware needs to be implemented under tight area and power constraints, new design requirements emerge with their increasing complexity. Use of nanoscale technology shows tremendous benefits in implementing these advanced circuits due to dramatic improvement in integration density and power dissipation per operation. However, it also brings in new challenges such as reliability and large idle power (due to higher leakage current). Besides, programmability of the device as well as security of the recorded information are rapidly becoming major design considerations of such systems. In this paper, we analyze the emerging issues associated with the design of the DSP unit in an implantable system. Next, we propose a novel ultra light-weight solution to address the information security issue. Unlike the conventional information security approaches like data encryption, which come at large area and power overhead and hence are not amenable for resource-constrained implantable systems, we propose a multilevel key-based scrambling algorithm, which exploits the nature of the biological signal to effectively obfuscate it. Analysis of the proposed algorithm in the context of neural signal processing and its hardware implementation shows that we can achieve high level of security with ∼ 13X lower power and ∼ 5X lower area overhead than conventional cryptographic solutions.
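The following sketch is not the authors' multilevel algorithm; it only illustrates the general idea of lightweight, key-seeded obfuscation by permuting blocks of samples, which avoids the area and power cost of full encryption. The block size and key handling are assumptions.

```python
import numpy as np

def scramble(samples, key, block=64):
    """Illustrative key-seeded block permutation of recorded samples.

    NOT the paper's multilevel scheme; it only sketches lightweight obfuscation
    by shuffling each block with a permutation derived from the shared key.
    """
    rng = np.random.default_rng(key)
    out = samples.copy()
    for start in range(0, len(out) - block + 1, block):
        perm = rng.permutation(block)
        out[start:start + block] = out[start:start + block][perm]
    return out

def descramble(samples, key, block=64):
    rng = np.random.default_rng(key)          # same key regenerates the same permutations
    out = samples.copy()
    for start in range(0, len(out) - block + 1, block):
        inv = np.argsort(rng.permutation(block))   # inverse permutation
        out[start:start + block] = out[start:start + block][inv]
    return out

x = np.sin(np.linspace(0, 20, 256)).astype(np.float32)
assert np.allclose(descramble(scramble(x, key=1234), key=1234), x)
```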
A novel low-complexity digital filter design for wearable ECG devices.
Asgari, Shadnaz; Mehrnia, Alireza
2017-01-01
Wearable and implantable Electrocardiograph (ECG) devices are becoming prevailing tools for continuous real-time personal health monitoring. The ECG signal can be contaminated by various types of noise and artifacts (e.g., powerline interference, baseline wandering) that must be removed or suppressed for accurate ECG signal processing. Limited device size, power consumption and cost are critical issues that need to be carefully considered when designing any portable health monitoring device, including a battery-powered ECG device. This work presents a novel low-complexity noise suppression reconfigurable finite impulse response (FIR) filter structure for wearable ECG and heart monitoring devices. The design relies on a recently introduced optimally-factored FIR filter method. The new filter structure and several of its useful features are presented in detail. We also studied the hardware complexity of the proposed structure and compared it with the state-of-the-art. The results showed that the new ECG filter has a lower hardware complexity relative to the state-of-the-art ECG filters.
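The sketch below illustrates the general factored-FIR idea rather than the cited optimally-factored design procedure: a longer filter is realized as a cascade of short factors, and away from the block edges the cascade output matches the equivalent single filter while using fewer distinct coefficients. The factor choices and the assumed 360 Hz ECG-like test signal are for illustration only.

```python
import numpy as np

# Factored-FIR sketch: implement a longer filter as a cascade of short factors,
# which in hardware reduces the number of multipliers per output sample.
h1 = np.array([1.0, 2.0, 1.0]) / 4.0        # short smoothing (low-pass) factor
h2 = np.array([1.0, 0.0, -1.0]) / 2.0       # short differencing factor (baseline rejection)
h_long = np.convolve(h1, h2)                # equivalent single FIR filter

rng = np.random.default_rng(0)
fs = 360.0                                  # assumed ECG sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)      # stand-in for an ECG rhythm
noisy = ecg_like + 0.2 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)

cascade_out = np.convolve(np.convolve(noisy, h1, mode='same'), h2, mode='same')
direct_out = np.convolve(noisy, h_long, mode='same')
print(np.max(np.abs(cascade_out[5:-5] - direct_out[5:-5])))  # ~0: cascade matches the single filter away from edges
```

The optimally-factored method referenced in the abstract chooses the factors themselves to minimize multiplier count for a given specification; that optimization step is not reproduced here.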
Design and operations technologies - Integrating the pieces. [for future space systems design
NASA Technical Reports Server (NTRS)
Eldred, C. H.
1979-01-01
As major elements of life-cycle costs (LCC) having critical impacts on the initiation and utilization of future space programs, the areas of vehicle design and operations are reviewed in order to identify technology requirements. Common to both areas is the requirement for efficient integration of broad, complex systems. Operations technologies focus on the extension of space-based capabilities and cost reduction through the combination of innovative design, low-maintenance hardware, and increased manpower productivity. Design technologies focus on computer-aided techniques which increase productivity while maintaining a high degree of flexibility which enhances creativity and permits graceful design changes.
Developing an Integration Infrastructure for Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan
2014-01-01
Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion combines the video streams obtained by different image sensors so that they complement each other, producing video that is rich in information and well suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog and low light, but they capture image detail poorly and their output does not match the human visual system. Visible-light imaging alone can provide detailed, high-resolution images that suit the visual system, but visible images are easily affected by the external environment. Fusing infrared and visible video involves algorithms of high complexity and computational load that occupy considerable memory resources and demand high clock rates; such fusion has mostly been implemented in software (e.g., C++ or C), with far fewer implementations on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the grey-level weighted average method is implemented on the hardware platform to perform the information fusion. The resulting fused image effectively improves the amount of information captured in the image.
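The grey-level weighted average fusion mentioned in the abstract reduces to a per-pixel multiply-accumulate, which is what makes it attractive for an FPGA. Below is a minimal software reference, assuming the two frames are already registered (the registration parameters are obtained offline in MATLAB per the abstract); the weight value is an assumption.

```python
import numpy as np

def weighted_average_fusion(visible, infrared, w=0.5):
    """Per-pixel grey-level weighted average of two pre-registered frames.

    Both inputs are assumed to be uint8 images of identical size that have
    already been registered; w is the (illustrative) weight of the visible band.
    """
    fused = w * visible.astype(np.float32) + (1.0 - w) * infrared.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```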
Nazarynasab, Dariush; Farahmand, Farzam; Mirbagheri, Alireza; Afshari, Elnaz
2017-07-01
Data related to the force-deformation behaviour of soft tissue play an important role in medical/surgical applications such as realistic modelling of soft tissue mechanical behaviour, minimally invasive surgery (MIS) and medical diagnosis. While the mechanical behaviour of soft tissue is already very complex because of its different constitutive components, issues such as behavioural differences between living and dead tissue add to that complexity. An adequate quantitative description of the mechanical behaviour of soft tissues therefore requires high-quality in vivo experimental data to be obtained and analysed. This paper describes a novel laparoscopic grasper with two parallel jaws capable of obtaining compressive force-deformation data related to the mechanical behaviour of soft tissues. The new laparoscopic grasper comprises four sections: the mechanical hardware, the sensory part, the electrical/electronic part and the data storage part. Owing to the unique design of the mechanical hardware, data are recorded under conditions close to those of an unconfined compression test, so the obtained data can be properly used to extract the mechanical behaviour of soft tissues. The other distinguishing feature of the new system is its applicability during different laparoscopic surgeries and, consequently, its ability to obtain in vivo data. However, more preclinical examinations are needed to evaluate the practicality of the novel laparoscopic grasper with two parallel jaws.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
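The layered compilation flow described above can be illustrated with a toy two-pass example: decompose non-native gates into a native set, then map logical qubits onto physical ones. This sketch is a generic illustration, not the authors' software stack; the gate set, decomposition rule and layout are assumptions.

```python
NATIVE = {"H", "CNOT", "T"}                     # hypothetical native gate set

def decompose(program):
    """Lower high-level gates to the native set (one hard-coded rule shown)."""
    out = []
    for gate, qubits in program:
        if gate == "SWAP":                      # SWAP(a, b) = CNOT(a,b) CNOT(b,a) CNOT(a,b)
            a, b = qubits
            out += [("CNOT", (a, b)), ("CNOT", (b, a)), ("CNOT", (a, b))]
        elif gate in NATIVE:
            out.append((gate, qubits))
        else:
            raise ValueError(f"no decomposition rule for {gate}")
    return out

def map_to_hardware(program, layout):
    """Rename logical qubits to physical ones; `layout` is an assumed placement."""
    return [(g, tuple(layout[q] for q in qs)) for g, qs in program]

high_level = [("H", (0,)), ("SWAP", (0, 1)), ("T", (1,))]
physical = map_to_hardware(decompose(high_level), layout={0: 3, 1: 7})
```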
NASA Astrophysics Data System (ADS)
Borisov, A. P.
2018-01-01
The article is devoted to the development of a software and hardware complex for investigating the grinding process on a pendulum deformer. The hardware part of the complex is a Raspberry Pi model 2B platform, to which are connected a contactless angle sensor providing data on the deflection angle of the pendulum surface, USB cameras capturing grain images before and after grinding, and stepper motors that lift the pendulum surface and adjust the clearance between the pendulum and the supporting surfaces. The software part of the complex is written in C# and receives data from the sensor and USB cameras, processes the received data, and controls the stepper motors in manual and automatic mode. The studies conducted show that the rational operating mode is a deflection of the pendulum surface by an angle of 40°, with the grain located in the central zone of the support surface, regardless of the orientation of the grain in space. Using the non-contact angle sensor, the energy consumption of grinding, the speed and acceleration of the pendulum surface, and the vitreousness of the grain are also calculated. From the photographs obtained by the USB cameras, the work of the pendulum deformer is determined on the basis of the Rebinder formula together with the calculated grain area before and after grinding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Kathleen E.; Humble, Travis S.
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. We introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph, in an effort to reduce the complexity of the minor embedding problem. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K_{N,N} has an MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
A system for the real time, direct measurement of natural gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, T.
1995-12-31
PMI/Badger Meter, Inc., with partial sponsorship from the Gas Research Institute, has designed and developed direct-measurement total energy flow metering instrumentation. As industry demands for improved accuracy and speed of measurement have increased, so has the complexity of the overall hardware and software systems. Considering traditional system approaches, few companies have the in-house capability of maintaining a complete system. This paper addresses efforts to implement a direct, total gas energy flow metering system which is simple to use and cost effective.
BigDataScript: a scripting language for data pipelines.
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.
BigDataScript: a scripting language for data pipelines
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778
NASA Technical Reports Server (NTRS)
Hughes, Mark S.; Hebert, Phillip W.; Davis, Dawn M.; Jensen, Scott L.; Abell, Frederick K., Jr.
2004-01-01
The John C. Stennis Space Center (SSC) provides test operations services to a variety of customers, including NASA, DoD, and commercial enterprises for the development of current and next-generation rocket propulsion systems. Many of these testing services are provided in the E-Complex test facilities composed of three active test stands (E1, E2, & E3) and 7 total test positions. Each test position is outfitted with unique sets of data acquisition and controls hardware and software that record both facility and test article data and enable safe operation of the test facility. This paper addresses each system in more detail including efforts to upgrade hardware and software.
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust over noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
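The nonlinear energy operator selected by this study has the standard form ψ[n] = x[n]² − x[n−1]·x[n+1]; thresholding a smoothed version of ψ is a common way to detect spikes. The sketch below follows that recipe; the smoothing window and threshold scale are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, k=8.0, win=8):
    """Indices where the window-smoothed NEO output exceeds k times its mean."""
    smoothed = np.convolve(neo(x), np.bartlett(win), mode="same")
    return np.flatnonzero(smoothed > k * smoothed.mean())
```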
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arion is a library and tool set that enables researchers to holistically define test system models. Defining a complex system for testing an algorithm or control requires expertise across multiple domains. Simulating a complex system requires the integration of multiple simulators and test hardware, each with their own specification languages and concepts, which demands an extensive set of knowledge and capabilities. Arion was developed to alleviate this challenge. Arion is a set of Java libraries that abstracts the concepts of the supported simulators into a cohesive model language, allowing users to build models to their needed level of fidelity and expertise. Arion is also a software tool that translates the user's model back into the specification languages of the simulators and test hardware needed for execution.
Automated Analysis of ARM Binaries using the Low-Level Virtual Machine Compiler Framework
2011-03-01
president to insist on keeping his smartphone [CNN09]. A self-proclaimed BlackBerry addict , President Obama fought hard to keep his mobile device after his... smartphone but renders a device non-functional on installation [FSe09][Hof07]. Complex interactions between hardware and software components both within... smartphone (which is a big assumption), the phone may still be vulnerable if the hardware or software does not correctly implement the design
MSFC Skylab student project report. [selected space experiments
NASA Technical Reports Server (NTRS)
1974-01-01
In the Skylab Student Project, some 4,000 students submitted experiments, from which twenty-five national winners were selected. Of these, eleven required special flight hardware, eight were allowed to obtain data using hardware available for professional investigations, and the remaining six were affiliated with researchers in alternate fields, since their proposals could not be accommodated due to complexity or similar incompatibility. The background of the project is elaborated, and experiment performance results and evaluations are touched upon.
2017-02-19
software systems: the students design and build robotics software towards real-world applications, without being distracted by hardware issues; (ii) it...high school students require the students to focus on building and integrating the hardware that make up the robot, at the expense of designing and...robotics programs focus on the mechanics; as a result, they do not have room for students to design and implement relatively complex software systems, as
Sight Application Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G.
2014-09-17
The scale and complexity of scientific applications makes it very difficult to optimize, debug and extend them to support new capabilities. We have developed a tool that supports developers’ efforts to understand the logical flow of their applications and interactions between application components and hardware in a way that scales with application complexity and parallelism.
Sublimator Driven Coldplate Engineering Development Unit Test Results
NASA Technical Reports Server (NTRS)
Sheth, Rubik B.; Stephan, Ryan A.; Leimkuehler, Thomas O.
2010-01-01
The Sublimator Driven Coldplate (SDC) is a unique piece of thermal control hardware that has several advantages over a traditional thermal control scheme. The principal advantage is the possible elimination of a pumped fluid loop, potentially increasing reliability and reducing complexity while saving both mass and power. Because the SDC requires a consumable feedwater, it can only be used for short mission durations. Additionally, the SDC is ideal for a vehicle with small transport distances and low heat rejection requirements. An SDC Engineering Development Unit was designed and fabricated. Performance tests were performed in a vacuum chamber to quantify and assess the performance of the SDC. The test data were then used to develop correlated thermal math models. In addition, an Integrated Sublimator Driven Coldplate (ISDC) concept is being developed. The ISDC couples a coolant loop with the previously described SDC hardware. This combination allows the SDC to be used as a traditional coldplate during long mission phases and provides for dissimilar system redundancy.
Hu, Fei; Hao, Qi; Lukowiak, Marcin; Sun, Qingquan; Wilhelm, Kyle; Radziszowski, Stanisław; Wu, Yao
2010-11-01
Implantable medical devices (IMDs) have played an important role in many medical fields. Any failure in IMD operation could cause serious consequences, and it is important to protect IMDs from unauthenticated access. This study investigates secure IMD data collection within a telehealthcare [mobile health (m-health)] network. We use medical sensors carried by patients to securely access IMD data and perform secure sensor-to-sensor communications between patients to relay the IMD data to a remote doctor's server. To meet the requirements on low computational complexity, we choose N-th degree truncated polynomial ring (NTRU)-based encryption/decryption to secure IMD-sensor and sensor-sensor communications. An extended matryoshka model is developed to estimate direct/indirect trust relationships among sensors. An NTRU hardware implementation in VHSIC hardware description language (VHDL) is studied, based on the industry standard IEEE 1363, to increase the speed of key generation. The performance analysis results demonstrate the security robustness of the proposed IMD data access trust model.
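The low computational complexity of NTRU stems from the fact that its core operation is a cyclic convolution of small-coefficient polynomials in a truncated ring. As a hedged reference for that single operation (not the paper's VHDL design or the full IEEE 1363 scheme), the sketch below multiplies two polynomials in Z_q[x]/(x^N − 1).

```python
import numpy as np

def ring_multiply(a, b, q):
    """Product of two polynomials in Z_q[x]/(x^N - 1), the NTRU ring operation.

    a and b are length-N coefficient arrays; indices wrap because x^N == 1.
    The O(N^2) schoolbook form shown here is what a hardware design would unroll.
    """
    N = len(a)
    c = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += int(a[i]) * int(b[j])
    return c % q

# e.g. (1 + x) * (1 + x^2) = 1 + x + x^2 + x^3 in Z_32[x]/(x^4 - 1)
print(ring_multiply(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0]), q=32))
```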
Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications.
Cabezas, Javier; Gelado, Isaac; Stone, John E; Navarro, Nacho; Kirk, David B; Hwu, Wen-Mei
2015-05-01
Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models forces programmers to resort to multiple code versions, complex data copy steps and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems as well as adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that supports key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in an exemplar heterogeneous system, peer DMA and double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware. The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice.
Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications
Cabezas, Javier; Gelado, Isaac; Stone, John E.; Navarro, Nacho; Kirk, David B.; Hwu, Wen-mei
2014-01-01
Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models forces programmers to resort to multiple code versions, complex data copy steps and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems as well as adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that supports key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in an exemplar heterogeneous system, peer DMA and double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware. The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice. PMID:26180487
An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons
Li, Jing; Katori, Yuichi; Kohno, Takashi
2012-01-01
This paper presents a digital silicon neuronal network which simulates the nervous system of living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network with 256 fully connected neurons is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when a stored pattern is successfully retrieved. PMID:23269911
FPGA-accelerated adaptive optics wavefront control
NASA Astrophysics Data System (ADS)
Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.
2014-03-01
The speed of real-time adaptive optical systems is primarily restricted by the data processing hardware and computational aspects. Furthermore, the application of mirror layouts with increasing numbers of actuators reduces the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcome this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. Employing a Xilinx Kintex-7 FPGA, a custom-developed PCIe card is outlined in order to accelerate the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA) as well as by a novel hardware implementation of the SHWFS algorithm. Further benefits come from the Streaming SIMD Extensions (SSE), which intensively use the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. Due to this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
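The spot-analysis step that the FPGA card accelerates amounts to computing an intensity-weighted centroid in every Shack-Hartmann subaperture. A software reference of that step is sketched below under the assumption of a regular subaperture grid; the grid size and array shapes are illustrative.

```python
import numpy as np

def subaperture_centroids(frame, grid=16):
    """Intensity-weighted spot centroid in each subaperture of a SHWFS frame.

    `frame` is the 2-D sensor image, assumed to split evenly into grid x grid
    subapertures; returns local (y, x) centroids per subaperture.
    """
    h, w = frame.shape
    sy, sx = h // grid, w // grid
    ys, xs = np.mgrid[0:sy, 0:sx]
    cents = np.zeros((grid, grid, 2))
    for r in range(grid):
        for c in range(grid):
            sub = frame[r * sy:(r + 1) * sy, c * sx:(c + 1) * sx].astype(float)
            total = sub.sum() or 1.0              # guard against empty subapertures
            cents[r, c] = (ys * sub).sum() / total, (xs * sub).sum() / total
    return cents
```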
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
Rapid-X - An FPGA Development Toolset Using a Custom Simulink Library for MTCA.4 Modules
NASA Astrophysics Data System (ADS)
Prędki, Paweł; Heuer, Michael; Butkowski, Łukasz; Przygoda, Konrad; Schlarb, Holger; Napieralski, Andrzej
2015-06-01
The recent introduction of advanced hardware architectures such as the Micro Telecommunications Computing Architecture (MTCA) caused a change in the approach to implementation of control schemes in many fields. The development has been moving away from traditional programming languages ( C/C++), to hardware description languages (VHDL, Verilog), which are used in FPGA development. With MATLAB/Simulink it is possible to describe complex systems with block diagrams and simulate their behavior. Those diagrams are then used by the HDL experts to implement exactly the required functionality in hardware. Both the porting of existing applications and adaptation of new ones require a lot of development time from them. To solve this, Xilinx System Generator, a toolbox for MATLAB/Simulink, allows rapid prototyping of those block diagrams using hardware modelling. It is still up to the firmware developer to merge this structure with the hardware-dependent HDL project. This prevents the application engineer from quickly verifying the proposed schemes in real hardware. The framework described in this article overcomes these challenges, offering a hardware-independent library of components that can be used in Simulink/System Generator models. The components are subsequently translated into VHDL entities and integrated with a pre-prepared VHDL project template. Furthermore, the entire implementation process is run in the background, giving the user an almost one-click path from control scheme modelling and simulation to bit-file generation. This approach allows the application engineers to quickly develop new schemes and test them in real hardware environment. The applications may range from simple data logging or signal generation ones to very advanced controllers. Taking advantage of the Simulink simulation capabilities and user-friendly hardware implementation routines, the framework significantly decreases the development time of FPGA-based applications.
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2015-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a SimulinkR library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia Mae; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (40,000 pound force thrust) (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink (R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
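The FL algorithm pairs adaptive linear prediction with adaptive Golomb coding; the entropy-coding half can be illustrated with a Golomb-Rice code of a mapped prediction residual. The sketch below is a generic Rice coder, not the flight implementation, and the parameter k would normally be chosen adaptively.

```python
def map_residual(d):
    """Fold a signed prediction residual into a non-negative integer."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(value, k):
    """Golomb-Rice codeword (as a bit string) with divisor 2**k:
    quotient in unary, remainder in k binary bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# residual -3 -> mapped 5 -> quotient 1, remainder 01 -> '1001'
print(rice_encode(map_residual(-3), k=2))
```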
Mission Engineering of a Rapid Cycle Spacecraft Logistics Fleet
NASA Technical Reports Server (NTRS)
Holladay, Jon; McClendon, Randy (Technical Monitor)
2002-01-01
The requirement for logistics re-supply of the International Space Station has provided a unique opportunity for engineering the implementation of NASA's first dedicated pressurized logistics carrier fleet. The NASA fleet is comprised of three Multi-Purpose Logistics Modules (MPLM) provided to NASA by the Italian Space Agency in return for operations time aboard the International Space Station. Marshall Space Flight Center was responsible for oversight of the hardware development from preliminary design through acceptance of the third flight unit, and currently manages the flight hardware sustaining engineering and mission engineering activities. The actual MPLM Mission began prior to NASA acceptance of the first flight unit in 1999 and will continue until the de-commission of the International Space Station that is planned for 20xx. Mission engineering of the MPLM program requires a broad focus on three distinct yet inter-related operations processes: pre-flight, flight operations, and post-flight turn-around. Within each primary area exist several complex subsets of distinct and inter-related activities. Pre-flight processing includes the evaluation of carrier hardware readiness for space flight. This includes integration of payload into the carrier, integration of the carrier into the launch vehicle, and integration of the carrier onto the orbital platform. Flight operations include the actual carrier operations during flight and any required real-time ground support. Post-flight processing includes de-integration of the carrier hardware from the launch vehicle, de-integration of the payload, and preparation for returning the carrier to pre-flight staging. Typical space operations are engineered around the requirements and objectives of a dedicated mission on a dedicated operational platform (i.e. Launch or Orbiting Vehicle). The MPLM, however, has expanded this envelope by requiring operations with both vehicles during flight as well as pre-launch and post-landing operations. These unique requirements combined with a success-oriented schedule of four flights within a ten-month period have provided numerous opportunities for understanding and improving operations processes. Furthermore, it has increased the knowledge base of future Payload Carrier and Launch Vehicle hardware and requirement developments. Discussion of the process flows and target areas for process improvement are provided in the subject paper. Special emphasis is also placed on supplying guidelines for hardware development. The combination of process knowledge and hardware development knowledge will provide a comprehensive overview for future vehicle developments as related to integration and transportation of payloads.
Skylab materials processing facility experiment developer's report
NASA Technical Reports Server (NTRS)
Parks, P. G.
1975-01-01
The development of the Skylab M512 Materials Processing Facility is traced from the design of a portable, self-contained electron beam welding system for terrestrial applications to the highly complex experiment system ultimately developed for three Skylab missions. The M512 experiment facility was designed to support six in-space experiments intended to explore the advantages of manufacturing materials in the near-zero-gravity environment of Earth orbit. Detailed descriptions of the M512 facility and related experiment hardware are provided, with discussions of hardware verification and man-machine interfaces included. An analysis of the operation of the facility and experiments during the three Skylab missions is presented, including discussions of the hardware performance, anomalies, and data returned to earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, Kazutomo; Llopis, Pablo; Zhang, Kaicheng
As CMOS scaling nears its end, parameter variations (process, temperature and voltage) are becoming a major concern. To overcome parameter variations and provide stability, modern processors are becoming dynamic, opportunistically adjusting voltage and frequency based on thermal and energy constraints, which negatively impacts traditional bulk-synchronous parallelism-minded hardware and software designs. As node-level architecture is growing in complexity, implementing variation control mechanisms only with hardware can be a challenging task. In this paper we investigate a software strategy to manage hardware-induced variations, leveraging low-level monitoring/controlling mechanisms.
Environmental Conditions for Space Flight Hardware: A Survey
NASA Technical Reports Server (NTRS)
Plante, Jeannette; Lee, Brandon
2005-01-01
Interest in generalizing the physical environments experienced by NASA hardware - the natural Earth environment (on the launch pad), the man-made environment on Earth (storage, acceptance and qualification testing), the launch environment, and the space environment - is driven by the desire to find commonality among our hardware in an effort to reduce cost and complexity. NASA is entering a period of increase in its number of planetary missions, and it is important to understand how our qualification requirements will evolve with and track these new environments. Environmental conditions are described for NASA projects in several ways for the different periods of the mission life cycle. At the beginning, the mission manager defines survivability requirements based on the mission length, orbit, launch date, launch vehicle, and other factors such as the use of reactor engines. Margins are then applied to these values (temperature extremes, vibration extremes, radiation tolerances, etc.) and a new set of conditions is generalized for design requirements. Mission assurance documents will then assign an additional margin for reliability, and a third set of values is provided for use during testing. A fourth set of environmental condition values may evolve intermittently from heritage hardware that has been tested to a level beyond the actual mission requirement. These various sets of environmental figures can make it quite confusing and difficult to capture common hardware environmental requirements. Environmental requirement information can be found in a wide variety of places. The most obvious is with the individual projects. We can easily get answers to questions about the temperature extremes being used and the radiation tolerance goals, but it is more difficult to map the answers to the processes that created these requirements: for design, for qualification, and for the actual environment with no margin applied. Not everyone assigned to a NASA project may have that kind of insight, as many have only the environmental requirement numbers needed to do their jobs but do not necessarily have a programmatic-level understanding of how all of the environmental requirements fit together.
An Alternative Approach to Human Servicing of Crewed Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John R.; Alpert, Brian K.
2017-01-01
As crewed spacecraft have grown larger and more complex, they have come to rely on spacewalks, or Extravehicular Activities (EVA), for mission success and crew safety. Typically, these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires volume and up-mass for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology, utilizing launch on-need hardware and crew to provide EVA capability for space stations in Earth orbit after assembly complete, in the same way that one would call a repairman to fix something at their home. This approach would reduce ground training requirements, save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, and lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to the failure as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current unplanned EVAs are conducted, including the time required for preparation, and offer alternatives for future spacecraft. As this methodology relies on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the space station would be an important consideration in how much time was available for EVA preparation after the failure. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft.
An Alternative Approach to Human Servicing of Manned Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John; Alpert, Brian
2011-01-01
As manned spacecraft have grown larger and more complex, they have come to rely on spacewalks or Extravehicular Activities (EVA) for both mission success and crew safety. Typically these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires volume and up-mass for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware . This paper proposes an alternative methodology to utilize launch-on-need hardware and crew to provide EVA capability for space stations in Earth orbit after assembly complete, in the same way that most people would call a repairman to fix something at their home. This approach would not only reduce ground training requirements and save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, but would also lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to the failure as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current ISS unplanned EVAs are conducted, including the time required for preparation, and offer alternatives for future spacecraft utilizing lessons learned from ISS. As this methodology relies entirely on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system including more than one launch vehicle capable of carrying crew. In addition the fault tolerance of the space station would be an important consideration in how much time was available for EVA preparation after the failure. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft.
Okamoto, Takumi; Koide, Tetsushi; Sugi, Koki; Shimizu, Tatsuya; Anh-Tuan Hoang; Tamaki, Toru; Raytchev, Bisser; Kaneda, Kazufumi; Kominami, Yoko; Yoshida, Shigeto; Mieno, Hiroshi; Tanaka, Shinji
2015-08-01
With the increase in colorectal cancer patients in recent years, the need for quantitative evaluation of colorectal cancer has grown, and computer-aided diagnosis (CAD) systems that support the doctor's diagnosis are essential. In this paper, a hardware design of the type identification module in a CAD system for colorectal endoscopic images with narrow band imaging (NBI) magnification is proposed for real-time processing of full high-definition images (1920 × 1080 pixels). A pyramid-style image segmentation with SVMs over multi-size scan windows, which can be implemented on an FPGA with a small circuit area while achieving high accuracy, is proposed for actual complex colorectal endoscopic images.
I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison
NASA Technical Reports Server (NTRS)
Somawardhana, Ruwan
2011-01-01
CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex, though this comes at the cost of increased computing requirements. Parallel processing coupled with appropriate hardware can minimize computation time. Users of Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences there are when changing software packages; we are looking for consistency in results.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
Preparing the Next IT Leaders: Financial Management
ERIC Educational Resources Information Center
Goldstein, Karen L.
2007-01-01
The next generation of IT leaders must learn to navigate the complexities of higher education financial planning and negotiation. Technology infrastructure, hardware, software, and services are very expensive to provide and will continue to be so for some time to come. Most CIOs have learned about complex IT finances the hard way--they were handed…
Harnessing the Power of Light to See and Treat Breast Cancer
2015-12-01
complexity : the raw data, the most basic form of data, represents the raw numeric readout obtained from the acquisition hardware. The raw data...has the added advantage of full-field illumination and non-descanned detection, thus lowering the complexity compared to confocal scanning systems... complexity of images that have varying levels of contrast and non-uniform background heterogeneity. In 2004 Matas described a technique for detecting
Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen E.; Humble, Travis S.
2017-04-01
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. We show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
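The K_{N+1} clique minor claimed for K_{N,N} can be exhibited constructively by contracting N−1 disjoint matching edges. A hedged check of that construction with networkx (small N, illustrative only) is sketched below.

```python
import networkx as nx

def knn_clique_minor(n):
    """Contract n-1 disjoint matching edges of K_{n,n}; a K_{n+1} minor remains."""
    g = nx.complete_bipartite_graph(n, n)       # parts are {0..n-1} and {n..2n-1}
    for i in range(n - 1):                      # contract the matching edge (i, n+i)
        g = nx.contracted_nodes(g, i, n + i, self_loops=False)
    return g

g = knn_clique_minor(5)
assert g.number_of_nodes() == 6                                        # N+1 supernodes
assert all(g.has_edge(u, v) for u in g for v in g if u != v)           # pairwise adjacent
```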
Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets
Hamilton, Kathleen E.; Humble, Travis S.
2017-02-23
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. We introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph, in an effort to reduce the complexity of the minor embedding problem. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K_{N,N} has an MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
Primary and secondary electrical space power based on advanced PEM systems
NASA Technical Reports Server (NTRS)
Vanderborgh, N. E.; Hedstrom, J. C.; Stroh, K. R.; Huff, J. R.
1993-01-01
For new space ventures, power continues to be a pacing function for mission planning and experiment endurance. Although electrochemical power is a well demonstrated space power technology, current hardware limitations impact future mission viability. In order to document and augment electrochemical technology, a series of experiments for the National Aeronautics and Space Administration Lewis Research Center (NASA LeRC) is underway at the Los Alamos National Laboratory to define operational parameters on contemporary proton exchange membrane (PEM) hardware operating with hydrogen and oxygen reactants. Because of the high efficiency possible for water electrolysis, this hardware is also considered part of a secondary battery design built around stored reactants - the so-called regenerative fuel cell. An overview of stack testing at Los Alamos and of analyses related to regenerative fuel cell systems is provided in this paper. Finally, this paper describes work looking at innovative concepts that remove complexity from stack hardware with the specific intent of higher system reliability. This new concept offers the potential for unprecedented electrochemical power system energy densities.
A Hardware-Supported Algorithm for Self-Managed and Choreographed Task Execution in Sensor Networks.
Bordel, Borja; Miguel, Carlos; Alcarria, Ramón; Robles, Tomás
2018-03-07
Nowadays, sensor networks are composed of a great number of tiny resource-constrained nodes, whose management is increasingly complex. Although collaborative or choreographic task execution schemes fit the nature of sensor networks best, they are rarely implemented because of the high resource consumption of these algorithms (especially in networks with many resource-constrained devices). Instead, hierarchical networks are usually designed, with a heavyweight orchestrator with substantial processing power at the top that is able to implement any necessary management solution. However, although this orchestration approach solves most practical management problems of sensor networks, a great amount of operating time is wasted while nodes ask the orchestrator to resolve a conflict and wait for the instructions they need to operate. Therefore, this paper proposes a new mechanism for self-managed and choreographed task execution in sensor networks. The proposed solution relies only on a lightweight gateway instead of a traditional heavy orchestrator, together with a hardware-supported algorithm that consumes a negligible amount of resources in the sensor nodes. The gateway avoids congestion of the entire sensor network, and the hardware-supported algorithm enables a choreographed task execution scheme, so no particular node is overloaded. The performance of the proposed solution is evaluated through numerical and electronic ModelSim-based simulations.
Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the OpenCL AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.
Computational adaptive optics for broadband interferometric tomography of tissues and cells
NASA Astrophysics Data System (ADS)
Adie, Steven G.; Mulligan, Jeffrey A.
2016-03-01
Adaptive optics (AO) can shape aberrated optical wavefronts to physically restore the constructive interference needed for high-resolution imaging. With access to the complex optical field, however, many functions of optical hardware can be achieved computationally, including focusing and the compensation of optical aberrations to restore the constructive interference required for diffraction-limited imaging performance. Holography, which employs interferometric detection of the complex optical field, was developed based on this connection between hardware and computational image formation, although this link has only recently been exploited for 3D tomographic imaging in scattering biological tissues. This talk will present the underlying imaging science behind computational image formation with optical coherence tomography (OCT) -- a beam-scanned version of broadband digital holography. Analogous to hardware AO (HAO), we demonstrate computational adaptive optics (CAO) and optimization of the computed pupil correction in 'sensorless mode' (Zernike polynomial corrections with feedback from image metrics) or with the use of 'guide-stars' in the sample. We discuss the concept of an 'isotomic volume' as the volumetric extension of the 'isoplanatic patch' introduced in astronomical AO. Recent CAO results and ongoing work are highlighted to point to the potential biomedical impact of computed broadband interferometric tomography. We also discuss the advantages and disadvantages of HAO vs. CAO for the effective shaping of optical wavefronts, and highlight opportunities for hybrid approaches that synergistically combine the unique advantages of hardware and computational methods for rapid volumetric tomography with cellular resolution.
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As a part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger than anticipated design work on software to be ported.
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time), to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides some figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16:55-108.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
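As an illustration of the arithmetic each encoder realises, the well-known Alamouti 2x1 mapping can be modelled in a few lines of C++; the paper's designs are VHDL, so this is only a behavioural sketch and the symbol names are ours.

// Behavioural model of the Alamouti 2x1 STBC mapping.
#include <array>
#include <complex>
#include <cstdio>

using cplx = std::complex<double>;

// For an input pair (s1, s2) the encoder emits, over two symbol periods:
//   period 1: antenna 1 -> s1          antenna 2 -> s2
//   period 2: antenna 1 -> -conj(s2)   antenna 2 -> conj(s1)
std::array<std::array<cplx, 2>, 2> alamouti_encode(cplx s1, cplx s2) {
    return {{ { s1,             s2            },     // symbol period 1
              { -std::conj(s2), std::conj(s1) } }};  // symbol period 2
}

int main() {
    auto tx = alamouti_encode({1.0, 0.0}, {0.0, -1.0});   // two example constellation points
    for (int t = 0; t < 2; ++t)
        std::printf("t=%d  ant1=(%+.1f,%+.1f)  ant2=(%+.1f,%+.1f)\n", t,
                    tx[t][0].real(), tx[t][0].imag(),
                    tx[t][1].real(), tx[t][1].imag());
    return 0;
}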
The Evolution of Exercise Hardware on ISS: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Buxton, R. E.; Kalogera, K. L.; Hanson, A. M.
2017-01-01
During 16 years in low-Earth orbit, the suite of exercise hardware aboard the International Space Station (ISS) has matured significantly. Today, the countermeasure system supports an array of physical-training protocols and serves as an extensive research platform. Future hardware designs are required to have smaller operational envelopes and must also mitigate known physiologic issues observed in long-duration spaceflight. Taking lessons learned from the long history of space exercise will be important to successful development and implementation of future, compact exercise hardware. The evolution of exercise hardware as deployed on the ISS has implications for future exercise hardware and operations. Key lessons learned from the early days of ISS have helped to: 1. Enhance hardware performance (increased speed and loads). 2. Mature software interfaces. 3. Compare inflight exercise workloads to pre-, in-, and post-flight musculoskeletal and aerobic conditions. 4. Improve exercise comfort. 5. Develop complementary hardware for research and operations. Current ISS exercise hardware includes both custom and commercial-off-the-shelf (COTS) hardware. Benefits and challenges to this approach have prepared engineering teams to take a hybrid approach when designing and implementing future exercise hardware. Significant effort has gone into consideration of hardware instrumentation and wearable devices that provide important data to monitor crew health and performance.
NASA Astrophysics Data System (ADS)
Yussup, N.; Ibrahim, M. M.; Lombigit, L.; Rahman, N. A. A.; Zin, M. R. M.
2014-02-01
Typically, a system consists of hardware acting as the controller and software installed on a personal computer (PC). For effective nuclear detection, the hardware comprises the detection setup and the associated electronics, while the software consists of analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC by exploiting the USB link of the FPGA board. The USB link is based on an FTDI chip which allows direct access of input and output to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD) with a 24 MHz clock input to the USB link. The implementation and results of using the USB link of the FPGA board as the data interface are discussed.
Compiler-assisted multiple instruction rollback recovery using a read buffer
NASA Technical Reports Server (NTRS)
Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.
1995-01-01
Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.
Application of advanced control techniques to aircraft propulsion systems
NASA Technical Reports Server (NTRS)
Lehtinen, B.
1984-01-01
Two programs are described which involve the application of advanced control techniques to the design of engine control algorithms. Multivariable control theory is used in the F100 MVCS (multivariable control synthesis) program to design controls which coordinate the control inputs for improved engine performance. A systematic method for handling a complex control design task is given. Methods of analytical redundancy are aimed at increasing the control system reliability. The F100 DIA (detection, isolation, and accommodation) program, which investigates the use of software to replace or augment hardware redundancy for certain critical engine sensors, is described.
Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification
NASA Technical Reports Server (NTRS)
Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.
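A hedged C++ sketch of the general maximum-likelihood idea (not the X-38 implementation): each fault hypothesis predicts a body-rate change, and the hypothesis whose prediction best matches the differentiated gyro signal is selected. The hypothesis set and all numbers below are invented for illustration.

#include <cstdio>

const int kHypotheses = 3;  // 0: nominal, 1: thruster A failed off, 2: thruster B failed off

// Predicted body-rate change over one sample for each hypothesis (placeholder values).
const double kPredicted[kHypotheses][3] = {
    { 0.10,  0.00, -0.02 },   // nominal
    { 0.03,  0.00, -0.02 },   // thruster A off
    { 0.10, -0.05,  0.01 },   // thruster B off
};

int most_likely_hypothesis(const double measured[3]) {
    int best = 0;
    double best_cost = 1e300;
    for (int h = 0; h < kHypotheses; ++h) {
        double cost = 0.0;                              // under independent Gaussian noise the
        for (int i = 0; i < 3; ++i) {                   // negative log-likelihood reduces to a
            double r = measured[i] - kPredicted[h][i];  // sum of squared residuals
            cost += r * r;
        }
        if (cost < best_cost) { best_cost = cost; best = h; }
    }
    return best;
}

int main() {
    double gyro_delta[3] = { 0.02, 0.01, -0.02 };  // from differentiated gyro rates
    std::printf("most likely hypothesis: %d\n", most_likely_hypothesis(gyro_delta));
    return 0;
}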
Improving Quantum Gate Simulation using a GPU
NASA Astrophysics Data System (ADS)
Gutierrez, Eladio; Romero, Sergio; Trenas, Maria A.; Zapata, Emilio L.
2008-11-01
Due to the increasing computing power of graphics processing units (GPUs), they are becoming more and more popular when solving general purpose algorithms. As the simulation of quantum computers results in a problem with exponential complexity, it is advisable to perform a parallel computation, such as the one provided by the SIMD multiprocessors present in recent GPUs. In this paper, we focus on an important quantum algorithm, the quantum Fourier transform (QFT), in order to evaluate different parallelization strategies on a novel GPU architecture. Our implementation makes use of the new CUDA software/hardware architecture developed recently by NVIDIA.
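For reference, the transform being simulated acts on the 2^n amplitudes of the state vector as a discrete Fourier transform, |x> -> (1/sqrt(N)) * sum_y exp(2*pi*i*x*y/N) |y> with N = 2^n. The plain C++ sketch below computes that action directly on a small state vector for clarity; a GPU implementation such as the one described would instead apply the Hadamard and controlled-phase gates in parallel across the amplitudes.

#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

// O(N^2) reference QFT on a state vector of N = 2^n amplitudes (clarity over speed).
std::vector<std::complex<double>> qft(const std::vector<std::complex<double>>& in) {
    const std::size_t N = in.size();                 // must be a power of two
    const double kPi = std::acos(-1.0);
    const double norm = 1.0 / std::sqrt(static_cast<double>(N));
    std::vector<std::complex<double>> out(N);
    for (std::size_t y = 0; y < N; ++y) {
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t x = 0; x < N; ++x) {
            double angle = 2.0 * kPi * static_cast<double>(x) * static_cast<double>(y)
                         / static_cast<double>(N);
            acc += in[x] * std::polar(1.0, angle);
        }
        out[y] = norm * acc;
    }
    return out;
}

int main() {
    std::vector<std::complex<double>> state(8, {0.0, 0.0});
    state[1] = {1.0, 0.0};                           // basis state |001> of a 3-qubit register
    auto transformed = qft(state);
    for (std::size_t y = 0; y < transformed.size(); ++y)
        std::printf("|%zu>: %+.3f %+.3fi\n", y, transformed[y].real(), transformed[y].imag());
    return 0;
}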
Gas-analytic measurement complexes of Baikal atmospheric-limnological observatory
NASA Astrophysics Data System (ADS)
Pestunov, D. A.; Shamrin, A. M.; Shmargunov, V. P.; Panchenko, M. V.
2015-11-01
The paper presents the present-day structure of the stationary and mobile hardware-software gas-analytical complexes of the Baikal Atmospheric-Limnological Observatory (BALO) of the Siberian Branch of the Russian Academy of Sciences (SB RAS), which are designed to study the processes of exchange of carbon-containing gases in the "atmosphere-water" system and are constantly updated to include new measuring and auxiliary instrumentation.
Scott, Graham W S
2012-01-01
The year 1962 was pre-medicare. The public was concerned about access and individual affordability of care. Funding involved public or private responsibility and the role of government. Physicians, the most influential providers, were concerned that government funding would result in the loss of their independence and their becoming state employees. The retrospective analysis "Looking Back 50 Years in Hospital Administration" by Graham and Sibbald is arresting as it underlines just how much progress we have made in what could be termed "hardware" in support of healthcare policy and hospital administration. From this perspective, the progress has been eye opening, given the advent of universal healthcare, the advancement in our physical facilities, the development of high-quality diagnostic equipment, the explosion of new research centres and new and complex clinical procedures. The development of this hardware has given our providers better weapons and contributed to a remarkable improvement in life expectancy. But progress in health administration and policy management involves more than hardware. If the hardware constitutes the tools, then the "software" of the healthcare system involves the human resources and the culture change that must be positioned to make maximum use of the hardware. In 2062, looking back at the 2012 experience, the legacy test may be whether we dealt with health human resources and culture change at a rate that matched our progress in hardware.
James, Conrad D.; Aimone, James B.; Miner, Nadine E.; ...
2017-01-04
Biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here, we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. Additionally, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
Trends in computer hardware and software.
Frankenfeld, F M
1993-04-01
Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.
Incorporating a Human-Computer Interaction Course into Software Development Curriculums
ERIC Educational Resources Information Center
Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph
2015-01-01
Individuals have increasing options on retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. Also, the number of software applications has significantly increased the user's capability to access data. Software applications include the traditional web site, smart device…
NASA Astrophysics Data System (ADS)
Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.
2017-09-01
Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than a single-FPGA system is developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.
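To make the baseline concrete, a single textbook Lucas-Kanade step over one window solves a 2x2 system built from image gradients; the C++ sketch below shows only that baseline step, not the paper's kernel-warping algorithm, and the toy data are invented.

// Single Lucas-Kanade step: solve
//   [ sum Ix^2   sum IxIy ] [u]   [ -sum IxIt ]
//   [ sum IxIy   sum Iy^2 ] [v] = [ -sum IyIt ]
#include <cstdio>
#include <vector>

struct Flow { double u, v; };

Flow lucas_kanade_step(const std::vector<double>& Ix,
                       const std::vector<double>& Iy,
                       const std::vector<double>& It) {
    double a = 0, b = 0, c = 0, d = 0, e = 0;        // normal-equation terms
    for (std::size_t i = 0; i < Ix.size(); ++i) {
        a += Ix[i] * Ix[i];
        b += Ix[i] * Iy[i];
        c += Iy[i] * Iy[i];
        d += Ix[i] * It[i];
        e += Iy[i] * It[i];
    }
    double det = a * c - b * b;
    if (det == 0.0) return {0.0, 0.0};               // degenerate window, no update
    return {(-c * d + b * e) / det, (b * d - a * e) / det};
}

int main() {
    // Toy 2x2 window with varying gradients and temporal differences.
    std::vector<double> Ix = { 1.0,  0.2, -0.5,  0.8 };
    std::vector<double> Iy = { 0.1,  0.9,  0.4, -0.3 };
    std::vector<double> It = {-0.2,  0.1,  0.05, -0.15 };
    Flow f = lucas_kanade_step(Ix, Iy, It);
    std::printf("u = %.3f, v = %.3f\n", f.u, f.v);
    return 0;
}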
NASA Technical Reports Server (NTRS)
Portree, David S. F.
1995-01-01
The heritage of the major Mir complex hardware elements is described. These elements include Soyuz-TM and Progress-M; the Kvant, Kvant 2, and Kristall modules; and the Mir base block. Configuration changes and major mission events of the Salyut 6, Salyut 7, and Mir multiport space stations are described in detail for the period 1977-1994. A comparative chronology of U.S. and Soviet/Russian manned spaceflight is also given for that period. The 68 illustrations include comparative scale drawings of U.S. and Russian spacecraft as well as sequential drawings depicting missions and mission events.
3D graphics hardware accelerator programming methods for real-time visualization systems
NASA Astrophysics Data System (ADS)
Souetov, Andrew E.
2001-02-01
The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use different types of CASE systems in the design and development process. The subject under discussion is the integration of such systems into the development process, their effective use, and the combination of these new methods with the necessity to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.
2012-02-17
Industrial Area Construction: Located 5 miles south of Launch Complex 39, construction of the main buildings -- Operations and Checkout Building, Headquarters Building, and Central Instrumentation Facility – began in 1963. In 1992, the Space Station Processing Facility was designed and constructed for the pre-launch processing of International Space Station hardware that was flown on the space shuttle. Along with other facilities, the industrial area provides spacecraft assembly and checkout, crew training, computer and instrumentation equipment, hardware preflight testing and preparations, as well as administrative offices. Poster designed by Kennedy Space Center Graphics Department/Greg Lee. Credit: NASA
Towards fully analog hardware reservoir computing for speech recognition
NASA Astrophysics Data System (ADS)
Smerieri, Anteo; Duport, François; Paquot, Yvan; Haelterman, Marc; Schrauwen, Benjamin; Massar, Serge
2012-09-01
Reservoir computing is a very recent, neural network inspired unconventional computation technique, where a recurrent nonlinear system is used in conjunction with a linear readout to perform complex calculations, leveraging its inherent internal dynamics. In this paper we show the operation of an optoelectronic reservoir computer in which both the nonlinear recurrent part and the readout layer are implemented in hardware for a speech recognition application. The performance obtained is close to that of state-of-the-art digital reservoirs, while the analog architecture opens the way to ultrafast computation.
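The "recurrent nonlinear system plus linear readout" structure can be made concrete with a toy digital echo-state sketch in C++; the paper's reservoir is optoelectronic, and the weights below are arbitrary placeholders rather than trained values (the readout would normally be trained, e.g. by ridge regression).

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int N = 4;                                      // reservoir size (tiny for clarity)
    std::vector<double> x(N, 0.0);                        // reservoir state
    std::vector<std::vector<double>> W = {                // fixed recurrent weights
        { 0.1, -0.2,  0.0,  0.3}, { 0.2,  0.1, -0.3,  0.0},
        { 0.0,  0.3,  0.1, -0.2}, {-0.1,  0.0,  0.2,  0.1}};
    std::vector<double> Win  = { 0.5, -0.4,  0.3,  0.2};  // input weights
    std::vector<double> Wout = { 1.0, -0.5, 0.25, 0.75};  // linear readout weights

    std::vector<double> input = {0.2, -0.1, 0.4, 0.0, 0.3};
    for (double u : input) {
        std::vector<double> next(N, 0.0);
        for (int i = 0; i < N; ++i) {
            double s = Win[i] * u;
            for (int j = 0; j < N; ++j) s += W[i][j] * x[j];
            next[i] = std::tanh(s);                       // nonlinear node update
        }
        x = next;
        double y = 0.0;                                   // linear readout
        for (int i = 0; i < N; ++i) y += Wout[i] * x[i];
        std::printf("y = %+.4f\n", y);
    }
    return 0;
}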
Evaluating geographic information systems technology
Guptill, Stephen C.
1989-01-01
Computerized geographic information systems (GISs) are emerging as the spatial data handling tools of choice for solving complex geographical problems. However, few guidelines exist for assisting potential users in identifying suitable hardware and software. A process to be followed in evaluating the merits of GIS technology is presented. Related standards and guidelines, software functions, hardware components, and benchmarking are discussed. By making users aware of all aspects of adopting GIS technology, they can decide if GIS is an appropriate tool for their application and, if so, which GIS should be used.
Hardware-in-the-loop simulation for undersea vehicle applications
NASA Astrophysics Data System (ADS)
Kelf, Michael A.
2001-08-01
Torpedoes and other Unmanned Undersea Vehicles (UUV) are employed by submarines and surface combatants, as well as aircraft, for undersea warfare. These vehicles are autonomous devices whose guidance systems rival the complexity of the most sophisticated air combat missiles. The tactical environment for undersea warfare is a difficult one in terms of target detection, classification, and pursuit because of the physics of underwater sound. Both hardware-in-the-loop and all-digital simulations have become vital tools in developing and evaluating undersea weapon and vehicle guidance performance in the undersea environment.
Minimum Control Requirements for Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Boulange, Richard; Jones, Harry
2002-01-01
Advanced control technologies are not necessary for the safe, reliable and continuous operation of Advanced Life Support (ALS) systems. ALS systems can be and are adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these systems are as reliable as the hardware they control, there are no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous, safe, reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
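A minimal sketch of the threading idea described above, using plain std::thread: models are partitioned across worker threads each frame and joined before the fixed time-step expires. The LaSRS++ framework's actual threading and load-balancing layer is not shown; the Model type and partitioning here are stand-ins.

#include <cstdio>
#include <thread>
#include <vector>

struct Model {
    int id;
    double state;
    void update(double dt) { state += dt; }   // placeholder for a real model propagation
};

void run_frame(std::vector<std::vector<Model>>& partitions, double dt) {
    std::vector<std::thread> workers;
    for (auto& part : partitions) {
        std::vector<Model>* p = &part;                     // one worker per partition
        workers.emplace_back([p, dt] { for (auto& m : *p) m.update(dt); });
    }
    for (auto& w : workers) w.join();                      // all models finished before frame end
}

int main() {
    std::vector<std::vector<Model>> partitions = {
        {{1, 0.0}, {2, 0.0}}, {{3, 0.0}}, {{4, 0.0}, {5, 0.0}}};
    for (int frame = 0; frame < 3; ++frame) run_frame(partitions, 0.02);  // 50 Hz time-step
    std::printf("ran 3 frames\n");
    return 0;
}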
ISS-CREAM Thermal and Fluid System Design and Analysis
NASA Technical Reports Server (NTRS)
Thorpe, Rosemary S.
2015-01-01
Thermal and Fluids Analysis Workshop (TFAWS), Silver Spring MD NCTS 21070-15. The ISS-CREAM (Cosmic Ray Energetics And Mass for the International Space Station) payload is being developed by an international team and will provide significant cosmic ray characterization over a long time frame. Cold fluid provided by the ISS Exposed Facility (EF) is the primary means of cooling for 5 science instruments and over 7 electronics boxes. Thermal fluid integrated design and analysis was performed for CREAM using a Thermal Desktop model. This presentation will provide some specific design and modeling examples from the fluid cooling system, complex SCD (Silicon Charge Detector) and calorimeter hardware, and integrated payload and ISS level modeling. Features of Thermal Desktop such as CAD simplification, meshing of complex hardware, External References (Xrefs), and FloCAD modeling will be discussed.
New Single Piece Blast Hardware design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulrich, Andri; Steinzig, Michael Louis; Aragon, Daniel Adrian
W, Q and PF engineers and machinists designed and fabricated, on the new Mazak i300, the first Single Piece Blast Hardware (unclassified design shown), reducing fabrication and inspection time by over 50%. The first DU Single Piece is completed and will be used for Hydro Test 3680. Past hydro tests used a two-piece assembly due to a lack of equipment capable of machining the complex saddle shape in a single piece. The i300 provides turning and milling 5-axis machining on one machine. The milling head on the i300 can machine past 90 degrees relative to the spindle axis. This makes it possible to machine the complex saddle surface on a single piece. Going to a single piece eliminates tolerance problems, such as tilting and eccentricity, that typically occurred when assembling the two pieces together.
Hardware-software complex of informing passengers of forecasted route transport arrival at stop
NASA Astrophysics Data System (ADS)
Pogrebnoy, V. Yu; Pushkarev, M. I.; Fadeev, A. S.
2017-02-01
The paper presents a hardware-software complex for informing passengers of the forecasted arrival of route transport. The client-server architecture of the forecasting information system is presented and an electronic information board prototype is described. The scheme of information transfer and processing is illustrated and described, from receiving navigation telemetry data from a transport vehicle through to displaying the forecasted arrival time of public passenger transport at the stop on the electronic board. Methods and algorithms for determining the current location of a transport vehicle in the city route network are considered in detail. A description of the proposed model for forecasting the arrival time of a transport vehicle at the stop is given. The result is applied in Tomsk for forecasting and displaying arrival time information at stops.
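A hedged illustration of one simple arrival-time forecast of the kind such a system might compute (the paper's actual forecasting model is more elaborate and is not reproduced here; all names and numbers are invented for the example): remaining distance along the route divided by a smoothed recent speed, plus expected dwell time at intermediate stops.

#include <cstdio>

double forecast_arrival_seconds(double remaining_route_m,
                                double smoothed_speed_mps,
                                double dwell_per_stop_s,
                                int stops_before_target) {
    if (smoothed_speed_mps < 0.1) smoothed_speed_mps = 0.1;   // guard against a stopped vehicle
    return remaining_route_m / smoothed_speed_mps             // driving time along the route
         + dwell_per_stop_s * stops_before_target;            // plus expected dwell at stops
}

int main() {
    // 2.4 km left on the route, 7 m/s smoothed speed, 20 s dwell at each of 3 intermediate stops.
    std::printf("forecast arrival: %.0f s\n", forecast_arrival_seconds(2400.0, 7.0, 20.0, 3));
    return 0;
}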
NASA Technical Reports Server (NTRS)
Deese, J. E.; Agarwal, R. K.
1989-01-01
Computational fluid dynamics has an increasingly important role in the design and analysis of aircraft as computer hardware becomes faster and algorithms become more efficient. Progress is being made in two directions: more complex and realistic configurations are being treated and algorithms based on higher approximations to the complete Navier-Stokes equations are being developed. The literature indicates that linear panel methods can model detailed, realistic aircraft geometries in flow regimes where this approximation is valid. As algorithms including higher approximations to the Navier-Stokes equations are developed, computer resource requirements increase rapidly. Generation of suitable grids becomes more difficult and the number of grid points required to resolve flow features of interest increases. Recently, the development of large vector computers has enabled researchers to attempt more complex geometries with Euler and Navier-Stokes algorithms. The results of calculations for transonic flow about a typical transport and fighter wing-body configuration using thin-layer Navier-Stokes equations are described, along with flow about helicopter rotor blades using both Euler and Navier-Stokes equations.
FDIR: To Increase the Reuse of the Design
NASA Astrophysics Data System (ADS)
Alison, Bernard; Parent, Loic; Provost-Grellier, Antoine; De-Ferluc, Regis
2012-08-01
The Failure Detection, Isolation and Recovery (FDIR) function is key to the safety and availability of a spacecraft in orbit. This function involves the totality of the avionics, and the recovery efficiency directly depends on the software embedded in the On Board Computer and on the Reconfiguration Module (hardware only), which is the ultimate barrier to avoid the loss of the spacecraft in case of serious failure. The design of the FDIR is becoming more and more complex in order to meet ever more stringent requirements: to preserve the mission as long as possible, to maximise availability, or to perform critical phases. In parallel with the increase in the complexity of the avionics, the failure cases and the feared events are becoming more and more numerous. This trend increases the cost and the duration of the FDIR validation. Thales Alenia Space, as satellite prime, is aware of the FDIR problematic and has for several years sought solutions to formalise the design of the FDIR, as a way to enhance the reuse of the design from one mission to another and to facilitate the validation phase.
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
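An illustration of the kind of weight-driven cost estimating relationship (CER) the passage above refers to, with placeholder coefficients rather than calibrated values from any agency model:

#include <cmath>
#include <cstdio>

// Simple parametric CER: cost = a * weight^b, scaled by heritage and complexity factors.
double hardware_cost_estimate(double weight_kg, double heritage_factor, double complexity_factor) {
    const double a = 150.0;   // placeholder scale coefficient
    const double b = 0.7;     // placeholder economy-of-scale exponent
    return a * std::pow(weight_kg, b) * heritage_factor * complexity_factor;
}

int main() {
    // New-design (heritage factor 1.0), moderately complex (1.2) avionics box of 35 kg.
    std::printf("point estimate: %.0f cost units\n", hardware_cost_estimate(35.0, 1.0, 1.2));
    return 0;
}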
Microbial biodiversity assessment of the European Space Agency's ExoMars 2016 mission.
Koskinen, Kaisa; Rettberg, Petra; Pukall, Rüdiger; Auerbach, Anna; Wink, Lisa; Barczyk, Simon; Perras, Alexandra; Mahnert, Alexander; Margheritis, Diana; Kminek, Gerhard; Moissl-Eichinger, Christine
2017-10-25
The ExoMars 2016 mission, consisting of the Trace Gas Orbiter and the Schiaparelli lander, was launched on March 14, 2016 from Baikonur, Kazakhstan and reached its destination in October 2016. The Schiaparelli lander was subject to strict requirements for microbial cleanliness according to the obligatory planetary protection policy. To reach the required cleanliness, the ExoMars 2016 flight hardware was assembled in a newly built, biocontrolled cleanroom complex at Thales Alenia Space in Turin, Italy. In this study, we performed microbiological surveys of the cleanroom facilities and the spacecraft hardware before and during the assembly, integration and testing (AIT) activities. Besides the European Space Agency (ESA) standard bioburden assay, which served as a proxy for the microbiological contamination in general, we performed various alternative cultivation assays and utilised molecular techniques, including quantitative PCR and next generation sequencing, to assess the absolute and relative abundance and broadest diversity of microorganisms and their signatures in the cleanroom and on the spacecraft hardware. Our results show that the bioburden, detected microbial contamination and microbial diversity decreased continuously after the cleanroom was decontaminated with more effective cleaning agents and during the ongoing AIT. The studied cleanrooms and change room were occupied by very distinct microbial communities: Overall, the change room harboured a higher number and diversity of microorganisms, including Propionibacterium, which was found to be significantly increased in the change room. In particular, the so-called alternative cultivation assays proved important in detecting a broader cultivable diversity than covered by the standard bioburden assay and thus completed the picture on the cleanroom microbiota. During the whole project, the bioburden stayed at an acceptable level and did not raise any concern for the ExoMars 2016 mission. The cleanroom complex at Thales Alenia Space in Turin is an excellent example of how efficient microbiological control is performed.
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Teubert, Chris; Gorospe, George; Burgett, Drew; Quach, Cuong C.; Hogge, Edward
2016-01-01
The airspace is becoming more and more complicated, and will continue to do so in the future with the integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other forms of aviation technology into the airspace. The new technology and complexity increase the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems & systems of systems can be very difficult, expensive, and sometimes unsafe in real life scenarios. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. The framework injects flight-related anomalies related to ground systems, routing, airport congestion, etc. to test and verify algorithms for NAS safety. In our research work, we develop a live, distributed, hardware-in-the-loop testbed for aviation and airspace prognostics along with exploring further research possibilities to verify and validate future algorithms for NAS safety. The testbed integrates virtual aircraft using the X-Plane simulator and X-PlaneConnect toolbox, UAVs using onboard sensors and cellular communications, and hardware-in-the-loop components. In addition, the testbed includes an additional research framework to support and simplify future research activities. It enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. This paper describes the design, development, and testing of this system. Software reliability, safety and latency are some of the critical design considerations in development of the testbed. Integration of HITL elements in the development phases and verification/validation are key elements of this report.
Birds of a Feather: Supporting Secure Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braswell III, H V
2006-04-24
Over the past few years Lawrence Livermore National Laboratory has begun the process of moving to a diskless environment in the Secure Computer Support realm. This movement has included many moving targets and increasing support complexity. We would like to set up a forum for Security and Support professionals to get together from across the Complex and discuss current deployments, lessons learned, and next steps. This would include what hardware, software, and hard copy based solutions are being used to manage Secure Computing. The topics to be discussed include but are not limited to: diskless computing, port locking and management, PC, Mac, and Linux/UNIX support and setup, system imaging, security setup documentation and templates, security documentation and management, customer tracking, ticket tracking, software download and management, log management, backup/disaster recovery, and mixed media environments.
An experimental adaptive radar MTI filter
NASA Astrophysics Data System (ADS)
Gong, Y. H.; Cooling, J. E.
The theoretical and practical features of a self-adaptive filter designed to remove clutter noise from a radar signal are described. The hardware employs an 8-bit microprocessor/fast hardware multiplier combination along with analog-digital and digital-analog interfaces. The software here is implemented in assembler language. It is assumed that there is little overlap between the signal and the noise spectra and that the noise power is much greater than that of the signal. It is noted that one of the most important factors to be considered when designing digital filters is the quantization noise. This works to degrade the steady state performance from that of the ideal (infinite word length) filter. The principal limitation of the filter described here is its low sampling rate (1.72 kHz), due mainly to the time spent on the multiplication routines. The methods discussed here, however, are general and can be applied to both traditional and more complex radar MTI systems, provided that the filter sampling frequency is increased. Dedicated VLSI signal processors are seen as holding considerable promise.
Solution-Processed Carbon Nanotube True Random Number Generator.
Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C
2017-08-09
With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
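A generic software illustration of turning a noisy source into unbiased bits (threshold the samples, then von Neumann debiasing of bit pairs). The paper's TRNG performs the digitisation in SWCNT SRAM cells; this C++ sketch only mirrors the post-processing idea and uses a pseudorandom stand-in for the thermal-noise samples.

#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937 gen(42);                                  // stand-in for a thermal-noise source
    std::normal_distribution<double> noise(0.02, 1.0);     // deliberately biased source

    auto raw_bit = [&] { return noise(gen) > 0.0 ? 1 : 0; };

    std::vector<int> output;
    while (output.size() < 32) {                           // von Neumann debiasing:
        int b1 = raw_bit();                                //   keep 01 -> 0 and 10 -> 1,
        int b2 = raw_bit();                                //   discard 00 and 11
        if (b1 != b2) output.push_back(b1);
    }
    for (int b : output) std::printf("%d", b);
    std::printf("\n");
    return 0;
}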
Understanding the Complex Dimensions of the Digital Divide: Lessons Learned in the Alaskan Arctic
ERIC Educational Resources Information Center
Subramony, Deepak Prem
2007-01-01
An ethnographic case study of Inupiat Eskimo in the Alaskan Arctic has provided insights into the complex nature of the sociological issues surrounding equitable access to technology tools and skills, which are referred to as the digital divide. These people can overcome the digital divide if they get the basic ready access to hardware and…
Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs which occur. They are compared to the baseline (non-uniform current) design with respect to performance, and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
Filling the Assurance Gap on Complex Electronics
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Many of the methods used to develop software bear a close resemblance to Complex Electronics (CE) development. CE are now programmed to perform tasks that were previously handled by software, such as communication protocols. For example, the James Webb Space Telescope will use Field Programmable Gate Arrays (FPGAs), which can have over a million logic gates, to send telemetry. System-on-chip (SoC) devices, another type of complex electronics, can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic, and unexpected interactions within the logic is great. Since CE devices are obscuring the hardware/software boundary, mature software methodologies have been proposed, with slight modifications, to develop these devices. By using standardized S/W Engineering methods such as checklists, missing requirements and bugs can be detected earlier in the development cycle, thus creating a development process for CE that can be easily maintained and configurable based on the device used.
Software Process Assurance for Complex Electronics
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Complex Electronics (CE) now perform tasks that were previously handled in software, such as communication protocols. Many methods used to develop software bear a close resemblance to CE development. Field Programmable Gate Arrays (FPGAs) can have over a million logic gates while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic, and unexpected interactions within the logic is great. With CE devices obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that used standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices and techniques were used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that was more easily maintained, consistent and configurable based on the device used.
Low-cost real-time infrared scene generation for image projection and signal injection
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; King, David E.; Bowden, Mark H.
1998-07-01
As cost becomes an increasingly important factor in the development and testing of infrared sensors and flight computer/processors, the need for accurate hardware-in-the-loop (HWIL) simulations is critical. In the past, expensive and complex dedicated scene generation hardware was needed to attain the fidelity necessary for accurate testing. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective replacements for dedicated scene generators. These new scene generators are mainly constructed from commercial-off-the-shelf (COTS) hardware and software components. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a dynamic IR scene generator (IRSG) built around COTS hardware and software. The IRSG is used to provide dynamic inputs to an IR scene projector for in-band seeker testing and for direct signal injection into the seeker or processor electronics. AMCOM MRDEC has developed a second generation IRSG, namely IRSG2, using the latest Silicon Graphics Incorporated (SGI) Onyx2 with Infinite Reality graphics. As reported in previous papers, the SGI Onyx Reality Engine 2 is the platform of the original IRSG that is now referred to as IRSG1. IRSG1 has been in operation and used daily for the past three years on several IR projection and signal injection HWIL programs. Using this second generation IRSG, frame rates have increased from 120 Hz to 400 Hz and intensity resolution from 12 bits to 16 bits. The key features of the IRSGs are real time missile frame rates and frame sizes, dynamic missile-to-target(s) viewpoint updated each frame in real-time by a six-degree-of-freedom (6DOF) system under test (SUT) simulation, multiple dynamic objects (e.g. targets, terrain/background, countermeasures, and atmospheric effects), latency compensation, point-to-extended source anti-aliased targets, and sensor modeling effects. This paper provides a comparison between the IRSG1 and IRSG2 systems and focuses on the IRSG software, real time features, and database development tools.
NASA Astrophysics Data System (ADS)
Martin, Adrian
As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends, the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams, the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
VME rollback hardware for time warp multiprocessor systems
NASA Technical Reports Server (NTRS)
Robb, Michael J.; Buzzell, Calvin A.
1992-01-01
The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.
Accelerating a MPEG-4 video decoder through custom software/hardware co-design
NASA Astrophysics Data System (ADS)
Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II, and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were used previously over the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition developed here. This research is part of the ARTEMI project, whose main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as target technology and the design of multimedia systems for broadcasting networks as reference application.
Extravehicular activity training and hardware design consideration
NASA Technical Reports Server (NTRS)
Thuot, P. J.; Harbaugh, G. J.
1995-01-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
Efficient Hardware Implementation of the Lightweight Block Encryption Algorithm LEA
Lee, Donggeon; Kim, Dong-Chan; Kwon, Daesung; Kim, Howon
2014-01-01
Recently, with the spread of resource-constrained devices such as smartphones and smart devices, the computing environment is changing. Because our daily life is deeply intertwined with ubiquitous networks, the importance of security is growing. A lightweight encryption algorithm is essential for secure communication between these kinds of resource-constrained devices, and many researchers have been investigating this field. Recently, a lightweight block cipher called LEA was proposed. LEA was originally targeted for efficient implementation on microprocessors, as it is fast when implemented in software and, furthermore, has a small memory footprint. To reflect recent technology, all required calculations utilize 32-bit wide operations. In addition, the algorithm is built not from complex S-Box-like structures but from simple addition, rotation, and XOR (ARX) operations. To the best of our knowledge, this paper is the first report on a comprehensive hardware implementation of LEA. We present various hardware structures and their implementation results according to key sizes. Even though LEA was originally targeted at software efficiency, it also shows high efficiency when implemented as hardware. PMID:24406859
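The ARX structure mentioned above is easy to express in software; the sketch below shows one round of a LEA-style transformation on four 32-bit state words in Python. The rotation amounts (9, 5, 3) and round-key indexing follow the published LEA specification as we understand it, but should be verified against the original paper before reuse; the key schedule is omitted.

MASK32 = 0xFFFFFFFF

def rol32(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK32

def ror32(x, r):
    """Rotate a 32-bit word right by r bits."""
    return ((x >> r) | (x << (32 - r))) & MASK32

def lea_round(x, rk):
    """One LEA-style ARX round.

    x  : list of four 32-bit state words
    rk : list of six 32-bit round-key words for this round
    Only addition mod 2**32, rotation and XOR are used -- no S-boxes.
    """
    x0, x1, x2, x3 = x
    y0 = rol32(((x0 ^ rk[0]) + (x1 ^ rk[1])) & MASK32, 9)
    y1 = ror32(((x1 ^ rk[2]) + (x2 ^ rk[3])) & MASK32, 5)
    y2 = ror32(((x2 ^ rk[4]) + (x3 ^ rk[5])) & MASK32, 3)
    y3 = x0
    return [y0, y1, y2, y3]

Because every operation maps to a single adder, rotator or XOR gate array, the same round unrolls naturally into the kind of hardware datapath the paper evaluates.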
Extravehicular activity training and hardware design consideration.
Thuot, P J; Harbaugh, G J
1995-07-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
NASA Technical Reports Server (NTRS)
Fournelle, John; Carpenter, Paul
2006-01-01
Modern electron microprobe systems have become increasingly sophisticated. These systems utilize either UNIX or PC computer systems for measurement, automation, and data reduction. These systems have undergone major improvements in processing, storage, display, and communications, due to increased capabilities of hardware and software. Instrument specifications are typically utilized at the time of purchase and concentrate on hardware performance. The microanalysis community includes analysts, researchers, software developers, and manufacturers, who could benefit from exchange of ideas and the ultimate development of core community specifications (CCS) for hardware and software components of microprobe instrumentation and operating systems.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence the implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating-point vector coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library for various computational kernels, each of which adapts FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
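The Izhikevich model mentioned above is attractive for platforms like SpiNNaker because one update step needs only a few multiply-accumulates per neuron. Below is a minimal floating-point Euler sketch in Python for a single neuron; it is illustrative only, and the parameter defaults are the usual regular-spiking values rather than anything specific to the SpiNNaker implementation, which runs in fixed-point on the chip's ARM cores.

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the Izhikevich neuron model.

    v : membrane potential (mV), u : recovery variable, I : input current.
    Returns the updated (v, u) and whether the neuron fired this step.
    """
    fired = v >= 30.0            # spike threshold used in the original model
    if fired:
        v, u = c, u + d          # after-spike reset
    dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, fired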
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
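To make the piecewise-affine idea concrete, the sketch below evaluates a one-dimensional simplicial PWA map defined by its values at uniformly spaced vertices, the kind of function that such a toolchain can ultimately map to VHDL. The function name, the uniform-grid assumption and the saturation behaviour are illustrative choices, not details taken from the paper.

def pwa_eval(x, x_min, x_max, vertex_values):
    """Evaluate a 1-D piecewise-affine function given its values at
    uniformly spaced vertices (the 1-D case of a simplicial PWA map).

    The result is linear interpolation between the two enclosing vertices,
    so evaluation needs only a comparison, one multiply and one add --
    the property that makes such maps cheap to implement on an FPGA.
    """
    n = len(vertex_values) - 1                  # number of segments (simplices)
    h = (x_max - x_min) / n                     # uniform vertex spacing
    x = min(max(x, x_min), x_max)               # saturate outside the domain
    i = min(int((x - x_min) / h), n - 1)        # index of the enclosing segment
    t = (x - (x_min + i * h)) / h               # local coordinate in [0, 1]
    return (1.0 - t) * vertex_values[i] + t * vertex_values[i + 1]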
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
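The tutorial's examples are in MATLAB and R; the same embarrassingly parallel pattern in Python is sketched below. The toy replication function is hypothetical and stands in for a full simulation model run; the structure (independent replications mapped onto a worker pool, then aggregated) is the point.

from multiprocessing import Pool
import random

def run_replication(seed):
    """One independent simulation replication: a toy Monte Carlo estimate.

    In a real risk analysis this would run the full simulation model once;
    replications are independent, so they can run on separate cores.
    """
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
    return sum(1 for s in samples if s > 2.0) / len(samples)   # tail probability

if __name__ == "__main__":
    seeds = range(32)                       # one replication per seed
    with Pool() as pool:                    # uses all available cores by default
        results = pool.map(run_replication, seeds)
    print(sum(results) / len(results))      # aggregate across replications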
NASA Astrophysics Data System (ADS)
Borisov, A.
2018-05-01
This article considers the problem of studying the velocity vector field in a cyclone separator with a screw insert. The velocity vector field was modeled in SolidWorks, and the tangential, axial and radial velocities were investigated. A software and hardware complex was also developed that makes it possible to obtain velocity data inside the cyclone separator. The experimental results showed that, for flour dust, the efficiency of the cyclone separator in question was more than 99.5% at air flow rates of 376 m3/h, 472 m3/h and 516 m3/h, with ΔP less than 600 Pa. The velocity in the inlet branch of the screw insert was 18-20 m/s, and at the exit of the screw insert the airflow velocity was 50-70 m/s.
Software control and system configuration management - A process that works
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1983-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm
NASA Astrophysics Data System (ADS)
Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing
2018-03-01
As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis of its hardware and time complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different tap counts, along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with a 0.18-μm SMIC library. The comparisons show that the proposed design has less area-time complexity and power consumption when the number of filter taps is larger than 48.
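The first-order moment reformulation can be illustrated in a few lines: a dot product with small non-negative integer coefficients can be rewritten as the moment sum over m of m·C[m], where C[m] is the sum of the inputs whose coefficient equals m, and that moment can then be accumulated with additions only. The sketch below is plain Python for the idea, not the paper's parallel hardware structure; it assumes the filter taps have already been quantized to non-negative integers.

def fir_first_order_moment(taps, window):
    """Compute y = sum_i taps[i] * window[i] via a first-order moment.

    taps   : non-negative integer filter coefficients (already quantized)
    window : current window of input samples, same length as taps
    Inputs are grouped by coefficient value (C[m] = sum of inputs whose tap
    equals m); the moment sum_m m*C[m] is then accumulated from the largest m
    downward using additions only -- no multipliers.
    """
    m_max = max(taps)
    C = [0] * (m_max + 1)
    for h, x in zip(taps, window):
        C[h] += x                      # group inputs by coefficient value
    acc, y = 0, 0
    for m in range(m_max, 0, -1):      # suffix-sum trick: y = sum_m m * C[m]
        acc += C[m]
        y += acc
    return y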
Software control and system configuration management: A systems-wide approach
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1984-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
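The core idea of sharing search-point work across PU partitions can be illustrated independently of the HEVC details: gather the points each PU would test in the current phase, evaluate the point-level work once per unique point, and let every PU pick its own best result. The sketch below is conceptual Python with a placeholder cost function; it ignores the zonal/raster/refinement phasing and the fact that per-PU costs are assembled from shared sub-block SADs in the actual design.

def shared_motion_search(pu_candidates, point_cost):
    """Share search-point work across the PU partitions of one CU.

    pu_candidates : {pu_id: [(dx, dy), ...]} search points each PU would test
    point_cost    : function (dx, dy) -> per-point work that does not depend
                    on the PU; evaluated once per unique point and cached
    """
    cache = {}
    for pts in pu_candidates.values():
        for p in pts:
            if p not in cache:          # redundant points evaluated only once
                cache[p] = point_cost(*p)
    # Each PU then selects its best point from the shared, cached results.
    return {pu: min(pts, key=lambda p: cache[p])
            for pu, pts in pu_candidates.items()}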
NASA Technical Reports Server (NTRS)
Butler, Roy
2013-01-01
The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry. The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.
Liquid Nitrogen Removal of Critical Aerospace Materials
NASA Technical Reports Server (NTRS)
Noah, Donald E.; Merrick, Jason; Hayes, Paul W.
2005-01-01
Identification of innovative solutions to unique materials problems is an everyday quest for members of the aerospace community. Finding a technique that will minimize costs, maximize throughput, and generate quality results is always the target. United Space Alliance Materials Engineers recently conducted such a search in their drive to return the Space Shuttle fleet to operational status. The removal of high-performance thermal coatings from solid rocket motors represents a formidable task during post-flight disassembly of reusable expended hardware. The removal of these coatings from unfired motors increases the complexity and safety requirements while reducing the available facilities and approved processes. A temporary solution to this problem was identified, tested and approved during the Solid Rocket Booster (SRB) return to flight activities. Utilization of ultra-high-pressure liquid nitrogen (LN2) to strip the protective coating from assembled space shuttle hardware marked the first such use of the technology in the aerospace industry. This process provides a configurable stream of liquid nitrogen (LN2) at pressures of up to 55,000 psig. The performance of a one-time certification for the removal of thermal ablatives from SRB hardware involved extensive testing to ensure adequate material removal without causing undesirable damage to the residual materials or aluminum substrates. Testing to establish appropriate process parameters such as flow, temperature and pressures of the liquid nitrogen stream provided an initial benchmark for process testing. Equipped with these initial parameters, engineers were then able to establish more detailed test criteria that set the process limits. Quantifying the potential for aluminum hardware damage represented the greatest hurdle for satisfying engineers as to the safety of this process. Extensive testing for aluminum erosion, surface profiling, and substrate weight loss was performed. This successful project clearly demonstrated that the liquid nitrogen jet possesses unique strengths that align remarkably well with the unusual challenges that space hardware and missile manufacturers face on a regular basis. Performance of this task within the confines of a critical manufacturing facility marks a milestone in advanced processing.
Space shuttle engineering and operations support. Avionics system engineering
NASA Technical Reports Server (NTRS)
Broome, P. A.; Neubaur, R. J.; Welsh, R. T.
1976-01-01
The shuttle avionics integration laboratory (SAIL) requirements for supporting the Spacelab/orbiter avionics verification process are defined. The principal topics are a Spacelab avionics hardware assessment, test operations center/electronic systems test laboratory (TOC/ESL) data processing requirements definition, SAIL (Building 16) payload accommodations study, and projected funding and test scheduling. Because of the complex nature of the Spacelab/orbiter computer systems, the PCM data link, and the high rate digital data system hardware/software relationships, early avionics interface verification is required. The SAIL is a prime candidate test location to accomplish this early avionics verification.
RAMP: A fault tolerant distributed microcomputer structure for aircraft navigation and control
NASA Technical Reports Server (NTRS)
Dunn, W. R.
1980-01-01
RAMP consists of distributed sets of parallel computers partitioned on the basis of software and packaging constraints. To minimize hardware and software complexity, the processors operate asynchronously. It was shown that through the design of asymptotically stable control laws, data errors due to the asynchronism were minimized. It was further shown that by designing control laws with this property and making minor hardware modifications to the RAMP modules, the system became inherently tolerant to intermittent faults. A laboratory version of RAMP was constructed and is described in the paper along with the experimental results.
NASA Astrophysics Data System (ADS)
Mochalov, V. A.; Firstov, P. P.; Cherneva, N. V.; Sannikov, D. V.; Akbashev, R. R.; Uvarov, V. N.; Shevtsov, B. M.; Druzhin, G. I.; Mochalova, A. V.
2017-11-01
In the region of the Northern group of volcanoes on the Kamchatka peninsula, a distributed network is being planned to monitor VLF-range electromagnetic radiation and to locate lightning strokes. It will allow researchers to register weaker electromagnetic pulses from lightning strokes than the World Wide Lightning Location Network can. The hardware-software complex of the network under construction is presented. The capabilities of the available and the developing hardware and software to investigate natural phenomena associated with lightning activity are described.
Robotics control using isolated word recognition of voice input
NASA Technical Reports Server (NTRS)
Weiner, J. M.
1977-01-01
A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility comprises a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100-command), syntactically constrained vocabulary, and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
Christophersen, Jon; Morrison, Bill
2018-02-14
Energy storage devices, primarily batteries, are now more important to consumers, industries and the military. With increasing technical complexity and higher user expectations, there is also a demand for highly accurate state-of-health battery assessment techniques. IMB incorporates patented, proprietary, and tested capabilities using control software and hardware that can be part of an embedded monitoring system. IMB directly measures the wideband impedance spectrum in seconds during battery operation with no significant impact on service life. It also can be applied to batteries prior to installation, confirming health before entering active service, as well as during regular maintenance. For more information about this project, visit http://www.inl.gov/rd100/2011/impedance-measurement-box/
Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M
2016-01-01
The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on the protein's biological function can help us to discover new and more effective drugs to deal with diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends in protein-folding simulation from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used to run these kinds of soft computing techniques.
An Embedded Reconfigurable Logic Module
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.; Klenke, Robert H.; Shams, Qamar A. (Technical Monitor)
2002-01-01
A Miniature Embedded Reconfigurable Computer and Logic (MERCAL) module has been developed and verified. MERCAL was designed to be a general-purpose, universal module that can provide significant hardware and software resources to meet the requirements of many of today's complex embedded applications. This is accomplished in the MERCAL module by combining a sub-credit-card-size PC in a DIMM form factor with a Xilinx Spartan II FPGA. The PC has the ability to download program files to the FPGA to configure it for different hardware functions and to transfer data to and from the FPGA via the PC's ISA bus during run time. The MERCAL module combines, in a compact package, the computational power of a 133 MHz PC with up to 150,000 gate equivalents of digital logic that can be reconfigured by software. The general architecture and functionality of the MERCAL hardware and system software are described.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
The Apollo 204 command module is seen in storage at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and 81 cartons of related hardware and investigative data occupy 3,300 cubic feet of Langley's storage space. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34, Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage at the site of all the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
The Apollo 204 command module is seen in storage at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and 81 cartons of related hardware and investigative data occupy 3,300 cubic feet of warehouse storage space. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34 at Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage with the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
A warehouse holding Apollo 204 hardware and investigative data is seen at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and 81 cartons of data and other related materials occupy 3,300 cubic feet. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34 at Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage with the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
Part of 81 cartons of Apollo 204 hardware and investigation data are seen in storage at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and the cartons occupy 3,300 cubic feet of Langley's storage space. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34, Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage with the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Ground Operations Aerospace Language (GOAL). Volume 3: Data bank
NASA Technical Reports Server (NTRS)
1973-01-01
The GOAL (Ground Operations Aerospace Language) test programming language was developed for use in ground checkout operations in a space vehicle launch environment. To ensure compatibility with a maximum number of applications, a systematic and error-free method of referencing command/response (analog and digital) hardware measurements is a principal feature of the language. Central to the concept of requiring the test language to be independent of launch complex equipment and terminology is that of addressing measurements via symbolic names that have meaning directly in the hardware units being tested. To form the link from test program through test system interfaces to the units being tested, the concept of a data bank has been introduced. The data bank is actually a large cross-reference table that provides pertinent hardware data such as interface unit addresses, data bus routings, or any other system values required to locate and access measurements.
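In practice such a data bank behaves like a cross-reference table keyed by the symbolic measurement name. The Python sketch below is a minimal illustration of that lookup; the field names and measurement names are hypothetical and are not GOAL syntax or actual Shuttle-era records.

# Hypothetical data-bank records: symbolic measurement name -> hardware access data.
DATA_BANK = {
    "FUEL_TANK_PRESSURE": {"interface_unit": 0x2A, "data_bus": 3, "channel": 17},
    "ENGINE_VALVE_CMD":   {"interface_unit": 0x14, "data_bus": 1, "channel": 5},
}

def resolve(measurement_name):
    """Translate a symbolic name used in a test program into the interface-unit
    address, bus routing and channel needed to locate and access the measurement."""
    try:
        return DATA_BANK[measurement_name]
    except KeyError:
        raise KeyError(f"measurement {measurement_name!r} not defined in data bank")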
Building the Digital Library Infrastructure: A Primer.
ERIC Educational Resources Information Center
Tebbetts, Diane R.
1999-01-01
Provides a framework for examining the complex infrastructure needed to successfully implement a digital library. Highlights include database development, online public-access catalogs, interactive technical services, full-text documents, hardware and wiring, licensing, access, and security issues. (Author/LRW)
Analysis of Selected Enhancements to the En Route Central Computing Complex
DOT National Transportation Integrated Search
1981-09-01
This report analyzes selected hardware enhancements that could improve the performance of the 9020 computer systems, which are used to provide en route air traffic control services. These enhancements could be implemented quickly, would be relatively...
Exercise Countermeasure Hardware Evolution on ISS: The First Decade.
Korth, Deborah W
2015-12-01
The hardware systems necessary to support exercise countermeasures to the deconditioning associated with microgravity exposure have evolved and improved significantly during the first decade of the International Space Station (ISS), resulting in both new types of hardware and enhanced performance capabilities for initial hardware items. The original suite of countermeasure hardware supported the first crews to arrive on the ISS and the improved countermeasure system delivered in later missions continues to serve the astronauts today with increased efficacy. Due to aggressive hardware development schedules and constrained budgets, the initial approach was to identify existing spaceflight-certified exercise countermeasure equipment, when available, and modify it for use on the ISS. Program management encouraged the use of commercial-off-the-shelf (COTS) hardware, or hardware previously developed (heritage hardware) for the Space Shuttle Program. However, in many cases the resultant hardware did not meet the additional requirements necessary to support crew health maintenance during long-duration missions (3 to 12 mo) and anticipated future utilization activities in support of biomedical research. Hardware development was further complicated by performance requirements that were not fully defined at the outset and tended to evolve over the course of design and fabrication. Modifications, ranging from simple to extensive, were necessary to meet these evolving requirements in each case where heritage hardware was proposed. Heritage hardware was anticipated to be inherently reliable without the need for extensive ground testing, due to its prior positive history during operational spaceflight utilization. As a result, developmental budgets were typically insufficient and schedules were too constrained to permit long-term evaluation of dedicated ground-test units ("fleet leader" type testing) to identify reliability issues when applied to long-duration use. In most cases, the exercise unit with the most operational history was the unit installed on the ISS.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dawson, Andrew
2017-03-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
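A minimal sketch of the backup mechanism, written in Python with NumPy rather than the authors' model code, is given below: a prognostic field is periodically averaged onto a coarse backup grid, and if the model-grid field later disagrees with the backup by an implausible margin, the affected coarse cells are rebuilt from the backup copy. The coarsening factor and tolerance are illustrative choices, not values from the paper.

import numpy as np

def to_backup(field, factor=4):
    """Average a 1-D prognostic field onto a coarse backup grid.
    The field length must be divisible by the coarsening factor."""
    return field.reshape(-1, factor).mean(axis=1)

def check_and_restore(field, backup, factor=4, tol=10.0):
    """Detect a severe hardware fault by comparing against the backup grid.

    If any coarse cell deviates from its stored backup value by more than tol,
    that part of the field is restored by injecting the backup value back onto
    the model grid (losing fine-scale detail but keeping the simulation alive).
    Returns the (possibly repaired) field and a flag saying whether a fault
    was detected.
    """
    coarse_now = to_backup(field, factor)
    bad = np.abs(coarse_now - backup) > tol
    if bad.any():
        repaired = field.copy().reshape(-1, factor)
        repaired[bad, :] = backup[bad][:, None]     # overwrite corrupted cells
        return repaired.reshape(-1), True
    return field, False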
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter; Dawson, Andrew
2017-04-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
Assurance of Complex Electronics. What Path Do We Take?
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Many of the methods used to develop software bare a close resemblance to Complex Electronics (CE) development. CE are now programmed to perform tasks that were previously handled in software, such as communication protocols. For instance, Field Programmable Gate Arrays (FPGAs) can have over a million logic gates while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of "software-like" bugs such as incorrect design, logic, and unexpected interactions within the logic is great. Since CE devices are obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications to develop these devices. By using standardized S/W Engineering methods such as checklists, missing requirements and "bugs" can be detected earlier in the development cycle, thus creating a development process for CE that will be easily maintained and configurable based on the device used.
Reducing the Requirements and Cost of Astronomical Telescopes
NASA Technical Reports Server (NTRS)
Smith, W. Scott; Whitakter, Ann F. (Technical Monitor)
2002-01-01
Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In an historical perspective, great strides have been made in the area of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory holds the promise of yielding more information from observational data based on significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibilities increasing the yield of scientific data based on the migration computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware making the next generation of instruments more affordable.
Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs.
Spröwitz, Alexander T; Ajallooeian, Mostafa; Tuleu, Alexandre; Ijspeert, Auke Jan
2014-01-01
In this work we research the role of body dynamics in the complexity of kinematic patterns in a quadruped robot with compliant legs. Two gait patterns, lateral sequence walk and trot, along with leg length control patterns of different complexity were implemented in a modular, feed-forward locomotion controller. The controller was tested on a small, quadruped robot with compliant, segmented leg design, and led to self-stable and self-stabilizing robot locomotion. In-air stepping and on-ground locomotion leg kinematics were recorded, and the number and shapes of motion primitives accounting for 95% of the variance of kinematic leg data were extracted. This revealed that kinematic patterns resulting from feed-forward control had a lower complexity (in-air stepping, 2-3 primitives) than kinematic patterns from on-ground locomotion (≥4 primitives), although both experiments applied identical motor patterns. The complexity of on-ground kinematic patterns had increased, through ground contact and mechanical entrainment. The complexity of observed kinematic on-ground data matches those reported from level-ground locomotion data of legged animals. Results indicate that a very low complexity of modular, rhythmic, feed-forward motor control is sufficient for level-ground locomotion in combination with passive compliant legged hardware.
Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs
Spröwitz, Alexander T.; Ajallooeian, Mostafa; Tuleu, Alexandre; Ijspeert, Auke Jan
2014-01-01
In this work we research the role of body dynamics in the complexity of kinematic patterns in a quadruped robot with compliant legs. Two gait patterns, lateral sequence walk and trot, along with leg length control patterns of different complexity were implemented in a modular, feed-forward locomotion controller. The controller was tested on a small, quadruped robot with compliant, segmented leg design, and led to self-stable and self-stabilizing robot locomotion. In-air stepping and on-ground locomotion leg kinematics were recorded, and the number and shapes of motion primitives accounting for 95% of the variance of kinematic leg data were extracted. This revealed that kinematic patterns resulting from feed-forward control had a lower complexity (in-air stepping, 2–3 primitives) than kinematic patterns from on-ground locomotion (≥4 primitives), although both experiments applied identical motor patterns. The complexity of on-ground kinematic patterns had increased, through ground contact and mechanical entrainment. The complexity of observed kinematic on-ground data matches those reported from level-ground locomotion data of legged animals. Results indicate that a very low complexity of modular, rhythmic, feed-forward motor control is sufficient for level-ground locomotion in combination with passive compliant legged hardware. PMID:24639645
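The primitive-extraction step described above amounts to a principal component analysis of the recorded joint-angle time series: count how many components are needed to explain 95% of the variance. A generic Python sketch of that count (not the authors' pipeline) follows.

import numpy as np

def count_primitives(kinematics, variance_threshold=0.95):
    """Number of motion primitives (principal components) needed to explain
    a given fraction of the variance of recorded leg kinematics.

    kinematics : array of shape (n_samples, n_joint_angles)
    """
    centered = kinematics - kinematics.mean(axis=0)
    # Singular values give the variance carried by each principal component.
    s = np.linalg.svd(centered, compute_uv=False)
    explained = (s ** 2) / np.sum(s ** 2)
    cumulative = np.cumsum(explained)
    return int(np.searchsorted(cumulative, variance_threshold) + 1)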
HiCAT Software Infrastructure: Safe hardware control with object oriented Python
NASA Astrophysics Data System (ADS)
Moriarty, Christopher; Brooks, Keira; Soummer, Remi
2018-01-01
High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit the air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object-oriented Python-based infrastructure to safely automate hardware control and optical experiments. Specifically, it conducts high-contrast imaging experiments while monitoring humidity and power status, and provides graceful shutdown even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
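The pattern described above can be sketched in a few lines of Python. The class and method names below (device.open, device.close, apply_next_command) are placeholders, not the HiCAT package's actual API, and the monitor here runs in a child process watching a simple stop flag, which is a simplified version of the event signalling the abstract describes.

from contextlib import contextmanager
from multiprocessing import Process, Event
import time

@contextmanager
def hardware_resource(device):
    """Open a hardware device and guarantee it is closed, even on unhandled errors."""
    device.open()
    try:
        yield device
    finally:
        device.close()                 # runs on normal exit *and* on exceptions

def safety_monitor(stop_event, check_ok, period=1.0):
    """Child process: poll a safety check (humidity, power, ...) and raise the
    stop flag if it ever fails."""
    while not stop_event.is_set():
        if not check_ok():
            stop_event.set()
        time.sleep(period)

def run_experiment(device, check_ok):
    stop_event = Event()
    watcher = Process(target=safety_monitor, args=(stop_event, check_ok), daemon=True)
    watcher.start()
    with hardware_resource(device) as dm:      # context manager closes dm no matter what
        while not stop_event.is_set():         # experiment loop, aborted on a safety event
            dm.apply_next_command()            # placeholder for one experiment step
    watcher.join(timeout=2.0)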
Computer program determines chemical equilibria in complex systems
NASA Technical Reports Server (NTRS)
Gordon, S.; Zeleznik, F. J.
1966-01-01
Computer program numerically solves nonlinear algebraic equations for chemical equilibrium based on iteration equations independent of choice of components. This program calculates theoretical performance for frozen and equilibrium composition during expansion and Chapman-Jouguet flame properties, studies combustion, and designs hardware.
NASA Technical Reports Server (NTRS)
Maxwell, Scott A.; Cooper, Brian; Hartman, Frank; Wright, John; Yen, Jeng; Leger, Chris
2005-01-01
A Mars rover is a complex system, and driving one is a complex endeavor. Rover drivers must be intimately familiar with the hardware and software of the mobility system and of the robotic arm. They must rapidly assess threats in the terrain, then creatively combine their knowledge of the vehicle and its environment to achieve each day's science and engineering objectives.
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
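The Lorenz '96 system used as the test bed is compact enough to state here. The sketch below integrates it with a simple Euler step and lets a user-supplied degrade function stand in for the software emulation of pruned floating-point units (identity by default); the actual pruning emulator in the paper is far more detailed, and the time step and forcing shown are just conventional choices.

import numpy as np

def lorenz96_step(x, dt=0.005, forcing=8.0, degrade=lambda v: v):
    """One Euler step of the Lorenz '96 model, dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F.

    x       : state vector with cyclic indexing
    degrade : stand-in for inexact hardware, applied to the computed tendency
              (a rounding/truncation function would emulate reduced-precision
              or pruned arithmetic; the default changes nothing).
    """
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
    return x + dt * degrade(dxdt)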
Bagrosky, Brian M; Hayes, Kari L; Koo, Phillip J; Fenton, Laura Z
2013-08-01
Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using (18)F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection. These people underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institution Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in patients with suspected spinal hardware infection. Because pneumonia was diagnosed as often as spinal hardware infection, initial chest radiography should also be performed.
A simple encoding method for Sigma-Delta ADC based biopotential acquisition systems.
Guerrero, Federico N; Spinelli, Enrique M
2017-10-01
Sigma-Delta analogue-to-digital converters allow acquiring the full dynamic range of biomedical signals at the electrodes, resulting in less complex hardware and increased measurement robustness. However, the increased data size per sample (typically 24 bits) demands the transmission of extremely large volumes of data across the isolation barrier, thus increasing power consumption on the patient side. This problem is accentuated when a large number of channels is used, as in current 128-256-electrode biopotential acquisition systems, which usually opt for an optic fibre link to the computer. An analogous problem occurs for simpler low-power acquisition platforms that transmit data through a wireless link to a computing platform. In this paper, a low-complexity encoding method is presented to decrease sample data size without losses, while preserving the full DC-coupled signal. The method achieved a 2.3 average compression ratio evaluated over an ECG and EMG signal bank acquired with equipment based on Sigma-Delta converters. It demands a very low processing load: a C language implementation is presented that executed in an average of 110 clock cycles on an 8-bit microcontroller.
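The abstract does not spell out the encoding itself, so as a hedged illustration of the kind of low-complexity lossless scheme it describes, the Python sketch below encodes 24-bit samples as first differences followed by Rice coding; the predictor, the parameter k and the toy signal are assumptions, and no claim is made that this reproduces the paper's 2.3 compression ratio.
import numpy as np

def rice(u, k):
    # Golomb-Rice codeword: unary quotient, then k remainder bits.
    q, r = u >> k, u & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

def encode(samples, k=4):
    prev, out = 0, []
    for s in samples:
        d = s - prev                              # first-difference predictor
        prev = s
        u = 2 * d if d >= 0 else -2 * d - 1       # zig-zag map to a non-negative integer
        out.append(rice(u, k))
    return ''.join(out)

rng = np.random.default_rng(1)
sig = (2000 * np.sin(np.linspace(0, 3, 500)) + rng.normal(0, 5, 500)).astype(int)  # toy 24-bit trace
bits = encode(sig)
print(len(bits) / len(sig), "bits per sample instead of 24")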
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Stephen B.
2010-01-01
Software plays an increasingly larger role in all aspects of NASA's science missions. This has been extended to the identification, management and control of faults which affect safety-critical functions and, by default, the overall success of the mission. Traditionally, the analysis of fault identification, management and control has been hardware based. Due to the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Verification & Validation (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements along with corresponding software faults so that potential hazards may be mitigated. This paper, "Specific to Generic ... A Case for Reuse", describes the phases of a dependability and safety study which identifies a new process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to other systems outside of the NASA environment. This paper addresses how a mission-specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission, with an emphasis on software fault conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, J; Zhang, Y; Zheng, Y
2015-06-15
Purpose: Spine hardware made of high-Z materials such as titanium has the potential to affect the dose distribution around the metal rods in CyberKnife spinal stereotactic radiosurgery (SRS) treatments. The purpose of this work was to evaluate the magnitude of this effect retrospectively for clinical CyberKnife plans. Methods: The dose calculation was performed within the MultiPlan treatment planning system using the ray tracing (RT) and Monte Carlo (MC) methods. A custom density model was created by extending the CT-to-Density table to the titanium density of 4.5 g/cm3 at a CT number of 4095. To understand the dose perturbation caused by the titanium rod, a simple beam setup (7.5 mm IRIS collimator) was used to irradiate a mimic rod (5 mm) with overridden high density. Five patient spinal SRS cases were identified chronologically from 2010 to 2015 at our institution. For each case, the hardware was contoured manually. The original plan was re-calculated using both RT and MC methods with and without rod density override, without changing clinical beam parameters. Results: The simple beam irradiation shows that there is a 10% dose increase at the interface because of electron backscattering and a 7% decrease behind the rod because of photon attenuation. For actual clinical plans, the iso-dose lines and DVHs are almost identical (<2%) for calculations with and without density override for both RT and MC methods. However, there is a difference of more than 10% for D90 between the RT and MC methods. Conclusion: Although the dose perturbation around the metal rods can be as large as 10% for a single beam irradiation, for clinical treatments with complex beam composition the effect of spinal hardware on the PTV and spinal dose is minimal. As such, the MC dose algorithm without rod density override is acceptable for CyberKnife spinal SRS.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based solutions, and in particular FPGAs, appears as a candidate technology, since although power use is higher compared with lower-power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, as well as a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
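As a hedged sketch of the cost function discussed above (a plain batch-gradient reference in Python, not the authors' FPGA pipeline), the code below adapts a linear decoder by maximizing the Gaussian-kernel information potential, which is equivalent to minimizing the error entropy; the kernel width, step size and toy data are assumptions.
import numpy as np
rng = np.random.default_rng(0)

N, p, sigma, mu = 200, 8, 1.0, 0.5
X = rng.normal(size=(N, p))                            # z-scored spike-count features (toy)
w_true = 0.5 * rng.normal(size=p)
d = X @ w_true + 0.3 * rng.standard_t(df=2, size=N)    # heavy-tailed (non-Gaussian) noise

w = np.zeros(p)
for _ in range(300):
    e = d - X @ w
    E = e[:, None] - e[None, :]                        # pairwise error differences
    G = np.exp(-E**2 / (2 * sigma**2))                 # Gaussian kernel values
    # gradient of the information potential V = mean(G) with respect to the weights
    grad = ((G * E)[:, :, None] * (X[:, None, :] - X[None, :, :])).mean(axis=(0, 1)) / sigma**2
    w += mu * grad                                     # ascend V, i.e. descend the error entropy
# MEE is blind to a constant error offset; a bias term is normally set from the mean error afterwards.
w_wiener = np.linalg.lstsq(X, d, rcond=None)[0]        # analytic MSE/Wiener solution for comparison
print(np.round(w_true, 2), np.round(w, 2), np.round(w_wiener, 2))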
NASA Information Technology Implementation Plan
NASA Technical Reports Server (NTRS)
2000-01-01
NASA's Information Technology (IT) resources and IT support continue to be a growing and integral part of all NASA missions. Furthermore, the growing IT support requirements are becoming more complex and diverse. The following are a few examples of the growing complexity and diversity of NASA's IT environment. NASA is conducting basic IT research in the Intelligent Synthesis Environment (ISE) and Intelligent Systems (IS) Initiatives. IT security, infrastructure protection, and privacy of data are requiring more and more management attention and an increasing share of the NASA IT budget. Outsourcing of IT support is becoming a key element of NASA's IT strategy as exemplified by Outsourcing Desktop Initiative for NASA (ODIN) and the outsourcing of NASA Integrated Services Network (NISN) support. Finally, technology refresh is helping to provide improved support at lower cost. Recently the NASA Automated Data Processing (ADP) Consolidation Center (NACC) upgraded its bipolar technology computer systems with Complementary Metal Oxide Semiconductor (CMOS) technology systems. This NACC upgrade substantially reduced the hardware maintenance and software licensing costs, significantly increased system speed and capacity, and reduced customer processing costs by 11 percent.
Using Innovative Technologies for Manufacturing and Evaluating Rocket Engine Hardware
NASA Technical Reports Server (NTRS)
Betts, Erin M.; Hardin, Andy
2011-01-01
Many of the manufacturing and evaluation techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing and evaluating hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) and white light scanning are being adopted and evaluated for their use on J-2X, with hopes of employing both technologies on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powdered metal manufacturing process in order to produce complex part geometries. The white light technique is a non-invasive method that can be used to inspect for geometric feature alignment. Both the DMLS manufacturing method and the white light scanning technique have proven to be viable options for manufacturing and evaluating rocket engine hardware, and further development and use of these techniques is recommended.
Testing Microshutter Arrays Using Commercial FPGA Hardware
NASA Technical Reports Server (NTRS)
Rapchun, David
2008-01-01
NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to perform Multi-Object Spectroscopy, a key part of the mission. Each array consists of 62414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to provide these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Due to JWST's rapid schedule for the development of the micro-shutters, rapid changes to the FPGA code were required to facilitate newly discovered approaches for optimizing array performance. Such rapid changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that could be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted, with success. The software/hardware combination allowed rapid changes to the FPGA code and timely delivery of new micro-shutter array performance data. As a result, the project conserved numerous labor hours and considerable cost.
Hardware system of X-wave generator with simple driving pulses
NASA Astrophysics Data System (ADS)
Li, Xu; Li, Yaqin; Xiao, Feng; Ding, Mingyue; Yuchi, Ming
2013-03-01
The limited diffraction beams such as X-wave have the property of a larger depth of field. Thus, they have the potential to generate ultra-high frame rate ultrasound images. However, in practice, the real-time generation of an X-wave ultrasonic field requires a complex and high-cost system, especially the precise and specific voltage-time distribution part for the excitation of each distinct array element. In order to simplify the hardware realization of X-wave, based on the previous works, X-wave excitation signals were decomposed and expressed as the superposition of a group of simple driving pulses, such as rectangular and triangular waves. The hardware system for the X-wave generator was also designed. The generator consists of a computer for communication with the circuit, a universal serial bus (USB) based micro-controller unit (MCU) for data transmission, a field programmable gate array (FPGA) based Direct Digital Synthesizer (DDS), a 12-bit digital-to-analog (D/A) converter and a two-stage amplifier. The hardware simulation results show that the designed system can generate the waveforms at different radii approximating the theoretical X-wave excitations with a maximum error of 0.49% caused by the quantization of the amplitude data.
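The decomposition itself is not detailed in the abstract; as a hedged illustration of the general idea, the Python sketch below least-squares fits a stand-in excitation waveform with a bank of shifted rectangular and triangular pulses (the target waveform, pulse widths and centers are all assumptions, not the paper's decomposition).
import numpy as np

t = np.linspace(0.0, 1e-6, 400)                                             # 1 us excitation window
target = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-7) / 2e-7) ** 2)    # stand-in element drive

def rect(t, c, w):
    # unit rectangular pulse of width w centered at c
    return ((t >= c - w / 2) & (t < c + w / 2)).astype(float)

def tri(t, c, w):
    # unit triangular pulse of base width w centered at c
    return np.clip(1 - np.abs(t - c) / (w / 2), 0.0, None)

centers = np.linspace(0.0, 1e-6, 25)
basis = np.column_stack([rect(t, c, 8e-8) for c in centers] +
                        [tri(t, c, 1.6e-7) for c in centers])
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)                       # superposition weights
print("max abs approximation error:", np.abs(basis @ coef - target).max())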
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
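For reference, the core operation moved into hardware, subtracting two consecutive frames, is simple to state; the following Python sketch shows a thresholded absolute difference of two 256 x 256 frames (the threshold and the synthetic frames are illustrative assumptions, not the VGS algorithm's actual parameters).
import numpy as np

def frame_difference(curr, prev, threshold=20):
    # thresholded absolute difference of two consecutive frames -> binary change mask
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

rng = np.random.default_rng(0)
prev = rng.integers(0, 180, (256, 256), dtype=np.uint8)      # synthetic 256 x 256 frames
curr = prev.copy()
curr[100:120, 100:140] += 60                                  # a region that brightened between frames
print(frame_difference(curr, prev).sum(), "changed pixels")   # expect 20 * 40 = 800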
Assurance of COTS Boards for Space Flight. Part 1
NASA Technical Reports Server (NTRS)
Plante, Jeannette; Helmold, Norm; Eveland, Clay
1998-01-01
Space Flight hardware and software designers are increasingly turning to Commercial-Off-the-Shelf (COTS) products in hopes of meeting the demands imposed on them by projects with short development cycle times. The Technology Validation Assurance (TVA) team at NASA GSFC has embarked on applying a method for inserting COTS hardware into the Spartan 251 spacecraft. This method includes Procurement, Characterization, Ruggedization/Remediation and Verification Testing process steps which are intended to increase the user's confidence in the hardware's ability to function in the intended application for the required duration. As this method is refined with use, it has the potential to become a benchmark for industry-wide use of COTS in high reliability systems.
Precision Clean Hardware: Maintenance of Fluid Systems Cleanliness
NASA Technical Reports Server (NTRS)
Sharp, Sheila; Pedley, Mike; Bond, Tim; Quaglino, Joseph; Lorenz, Mary Jo; Bentz, Michael; Banta, Richard; Tolliver, Nancy; Golden, John; Levesque, Ray
2003-01-01
The ISS fluid systems are so complex that fluid system cleanliness cannot be verified at the assembly level. A "build clean / maintain clean" approach was used by all major fluid systems: Verify cleanliness at the detail and subassembly level. Maintain cleanliness during assembly.
Proposal for hierarchical description of software systems
NASA Technical Reports Server (NTRS)
Thauboth, H.
1973-01-01
The programming of digital computers has developed into a new dimension full of difficulties, because the hardware of computers has become so powerful that more complex applications are entrusted to computers. The costs of software development, verification, and maintenance are outpacing those of the hardware, and the trend is toward further increases in the sophistication of computer applications and, consequently, of software. To obtain better visibility into software systems and to improve the structure of software systems for better tests, verification, and maintenance, a clear but rigorous description and documentation of software is needed. The purpose of the report is to extend the present methods in order to obtain documentation that better reflects the interplay between the various components and functions of a software system at different levels of detail without losing precision of expression. This is done by the use of block diagrams, sequence diagrams, and cross-reference charts. In the appendices, examples from an actual large software system, i.e. the Marshall System for Aerospace Systems Simulation (MARSYAS), are presented. The proposed documentation structure is compatible with automated updating of significant portions of the documentation for better software change control.
NASA Astrophysics Data System (ADS)
Udell, C.; Selker, J. S.
2017-12-01
The increasing availability and functionality of Open-Source software and hardware along with 3D printing, low-cost electronics, and proliferation of open-access resources for learning rapid prototyping are contributing to fundamental transformations and new technologies in environmental sensing. These tools invite reevaluation of time-tested methodologies and devices toward more efficient, reusable, and inexpensive alternatives. Building upon Open-Source design facilitates community engagement and invites a Do-It-Together (DIT) collaborative framework for research where solutions to complex problems may be crowd-sourced. However, barriers persist that prevent researchers from taking advantage of the capabilities afforded by open-source software, hardware, and rapid prototyping. Some of these include: requisite technical skillsets, knowledge of equipment capabilities, identifying inexpensive sources for materials, money, space, and time. A university MAKER space staffed by engineering students to assist researchers is one proposed solution to overcome many of these obstacles. This presentation investigates the unique capabilities the USDA-funded Openly Published Environmental Sensing (OPEnS) Lab affords researchers, within Oregon State and internationally, and the unique functions these types of initiatives support at the intersection of MAKER spaces, Open-Source academic research, and open-access dissemination.
The International Space Station human life sciences experiment implementation process
NASA Technical Reports Server (NTRS)
Miller, L. J.; Haven, C. P.; McCollum, S. G.; Lee, A. M.; Kamman, M. R.; Baumann, D. K.; Anderson, M. E.; Buderer, M. C.
2001-01-01
The selection, definition, and development phases of a Life Sciences flight research experiment have been consistent throughout the past decade. The implementation process, however, has changed significantly within the past two years. This change is driven primarily by the shift from highly integrated, dedicated research missions on platforms with well defined processes to self-contained experiments with stand-alone operations on platforms which are being concurrently designed. For experiments manifested on the International Space Station (ISS) and/or on short duration missions, the more modular, streamlined, and independent the individual experiment is, the more likely it is to be successfully implemented before the ISS assembly is completed. During the assembly phase of the ISS, science operations are lower in priority than the construction of the station. After the station has been completed, it is expected that more resources will be available to perform research. The complexity of implementing investigations increases with the logistics needed to perform the experiment. Examples of logistics issues include: hardware unique to the experiment; large up and down mass and volume needs; access to crew and hardware during the ascent or descent phases; maintenance of hardware and supplies with a limited shelf life; baseline data collection schedules with lengthy sessions or sessions close to the launch or landing; onboard stowage availability, particularly cold stowage; and extensive training where highly proficient skills must be maintained. As the ISS processes become better defined, experiment implementation will meet new challenges due to distributed management, on-orbit resource sharing, and adjustments to crew availability pre- and post-increment. © 2001 Elsevier Science Ltd. All rights reserved.
High performance VLSI telemetry data systems
NASA Technical Reports Server (NTRS)
Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.
1990-01-01
NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence, and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize in hardware. At the same time, the computation costs of the algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process with repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is demonstrated by its relationship to the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with that of the other two algorithms and is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
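The following Python sketch is one hedged reading of the procedure summarized above: for each endmember, the residual ("final") Gram-Schmidt vector against the other endmembers is formed, and the unconstrained abundance is taken as the ratio of the pixel's projection onto that vector to the endmember's own projection onto it; the exact normalization used in the paper may differ.
import numpy as np

def gram_schmidt_residual(target, others):
    # the part of `target` orthogonal to all vectors in `others`
    basis = []
    for u in others:
        v = u.copy()
        for b in basis:
            v -= (v @ b) / (b @ b) * b
        if np.linalg.norm(v) > 1e-12:
            basis.append(v)
    v = target.copy()
    for b in basis:
        v -= (v @ b) / (b @ b) * b
    return v

def ovp_unmix(pixel, endmembers):
    # unconstrained abundances via repeated vector operations only (no matrix inversion)
    abundances = []
    for i, m in enumerate(endmembers):
        others = [endmembers[j] for j in range(len(endmembers)) if j != i]
        v = gram_schmidt_residual(m, others)
        abundances.append((pixel @ v) / (m @ v))
    return np.array(abundances)

rng = np.random.default_rng(0)
E = rng.random((3, 50))                       # 3 endmember spectra, 50 bands (toy data)
true = np.array([0.5, 0.3, 0.2])
pixel = true @ E                              # noiseless synthetic mixed pixel
print(ovp_unmix(pixel, list(E)))              # expect approximately [0.5, 0.3, 0.2]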
Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J
2013-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of the iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes as well as during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires very little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.
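As a hedged sketch of the technique (plain global-best PSO fitting a toy response model to synthetic "flight" data, not the Morpheus/Trick tooling itself), the Python code below tunes three hypothetical model parameters to minimize the mismatch with a measured trace; the model, bounds and PSO constants are all assumptions.
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # plain global-best particle swarm: minimize cost(p) for p within the given bounds
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

t = np.linspace(0, 5, 200)
def sim(params):
    # hypothetical first-order actuator response: gain, time constant, transport delay
    gain, tau, delay = params
    return gain * (1 - np.exp(-np.clip(t - delay, 0, None) / tau))

flight = sim([2.0, 0.8, 0.3]) + np.random.default_rng(1).normal(0, 0.02, t.size)  # stand-in flight data
cost = lambda p: np.mean((sim(p) - flight) ** 2)
best, err = pso(cost, np.array([[0.1, 5.0], [0.1, 3.0], [0.0, 1.0]]))
print("tuned parameters:", np.round(best, 3), "mse:", err)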
Payload Crew Training Complex (PCTC) utilization and training plan
NASA Technical Reports Server (NTRS)
Self, M. R.
1980-01-01
The physical facilities that comprise the payload crew training complex (PCTC) are described, including the host simulator; experiment simulators; Spacelab aft flight deck, experiment pallet, and experiment rack mockups; the simulation director's console; payload operations control center; classrooms; and supporting software and hardware. The parameters of a training philosophy for payload crew training at the PCTC are established. Finally, the development of the training plan is addressed, including discussions of preassessment and evaluation options.
Signal processing and control challenges for smart vehicles
NASA Astrophysics Data System (ADS)
Zhang, Hui; Braun, Simon G.
2017-03-01
Smart phones have changed not only the mobile phone market but also our society during the past few years. Could the next intelligent device be the vehicle? Judging by the visibility, in all media, of the numerous attempts to develop autonomous vehicles, this is certainly one of the logical outcomes. Smart vehicles would be equipped with an advanced operating system such that the vehicles could communicate with others, optimize their operation to reduce fuel consumption and emissions, enhance safety, or even become self-driving. These combined new features of vehicles require instrumentation and hardware developments, fast signal processing/fusion, decision making and online optimization. Meanwhile, the inevitable increase in system complexity will certainly challenge control unit design.
A Model-Based Approach to Developing Your Mission Operations System
NASA Technical Reports Server (NTRS)
Smith, Robert R.; Schimmels, Kathryn A.; Lock, Patricia D; Valerio, Charlene P.
2014-01-01
Model-Based System Engineering (MBSE) is an increasingly popular methodology for designing complex engineering systems. As the use of MBSE has grown, it has begun to be applied to systems that are less hardware-based and more people- and process-based. We describe our approach to incorporating MBSE as a way to streamline development, and how to build a model consisting of core resources, such as requirements and interfaces, that can be adapted and used by new and upcoming projects. By comparing traditional Mission Operations System (MOS) system engineering with an MOS designed via a model, we will demonstrate the benefits to be obtained by incorporating MBSE in system engineering design processes.
Software Process Assurance for Complex Electronics (SPACE)
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Complex Electronics (CE) are now programmed to perform tasks that were previously handled in software, such as communication protocols. Many of the methods used to develop software bear a close resemblance to CE development. For instance, Field Programmable Gate Arrays (FPGAs) can have over a million logic gates, while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic errors, and unexpected interactions within the logic is great. Since CE devices are obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that looks at using standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices and techniques can be used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that will be more easily maintained, consistent and configurable based on the device used.
Price, Jeffrey H; Goodacre, Angela; Hahn, Klaus; Hodgson, Louis; Hunter, Edward A; Krajewski, Stanislaw; Murphy, Robert F; Rabinovich, Andrew; Reed, John C; Heynen, Susanne
2002-01-01
Cellular behavior is complex. Successfully understanding systems at ever-increasing complexity is fundamental to advances in modern science and unraveling the functional details of cellular behavior is no exception. We present a collection of prospectives to provide a glimpse of the techniques that will aid in collecting, managing and utilizing information on complex cellular processes via molecular imaging tools. These include: 1) visualizing intracellular protein activity with fluorescent markers, 2) high throughput (and automated) imaging of multilabeled cells in statistically significant numbers, and 3) machine intelligence to analyze subcellular image localization and pattern. Although not addressed here, the importance of combining cell-image-based information with detailed molecular structure and ligand-receptor binding models cannot be overlooked. Advanced molecular imaging techniques have the potential to impact cellular diagnostics for cancer screening, clinical correlations of tissue molecular patterns for cancer biology, and cellular molecular interactions for accelerating drug discovery. The goal of finally understanding all cellular components and behaviors will be achieved by advances in both instrumentation engineering (software and hardware) and molecular biochemistry. Copyright 2002 Wiley-Liss, Inc.
Instrumentino: An Open-Source Software for Scientific Instruments.
Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C
2015-01-01
Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
NASA Astrophysics Data System (ADS)
Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong
2010-12-01
Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost, high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of the normalized least mean squares (NLMS) algorithm with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
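As a hedged floating-point reference for the ideas above (amplitude indexing into a complex-gain LUT plus an NLMS-style update), consider the following Python sketch; the power amplifier model, table size and step size are assumptions, and this is not the paper's fixed-point design.
import numpy as np
rng = np.random.default_rng(0)

N_LUT = 64
lut = np.ones(N_LUT, dtype=complex)                   # complex correction gain per amplitude bin

def pa(x):
    # toy memoryless power amplifier: mild AM/AM compression and AM/PM conversion
    a = np.abs(x)
    gain = 2.0 / (1 + 0.2 * a**2)
    phase = 0.8 * a**2 / (1 + a**2)
    return gain * a * np.exp(1j * (np.angle(x) + phase))

mu = 0.5
for _ in range(20000):
    x = (rng.normal() + 1j * rng.normal()) * 0.25     # random baseband sample
    idx = min(int(np.abs(x) * N_LUT), N_LUT - 1)      # amplitude indexing into the LUT
    z = lut[idx] * x                                   # predistorted sample
    y = pa(z) / 2.0                                    # PA output, normalized by small-signal gain
    e = x - y                                          # linearized output should equal the input
    lut[idx] += mu * e * np.conj(x) / (np.abs(x)**2 + 1e-6)   # NLMS-style table update

test = np.linspace(0.05, 0.95, 5) * np.exp(1j * 0.3)
idx = np.minimum((np.abs(test) * N_LUT).astype(int), N_LUT - 1)
print(np.abs(pa(lut[idx] * test) / 2.0 - test))        # residual error after predistortion (should be small)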
NASA Technical Reports Server (NTRS)
Boulanger, Richard; Overland, David
2004-01-01
Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates of as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns, with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware-to-software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.
Replicating Human Hand Synergies Onto Robotic Hands: A Review on Software and Hardware Strategies.
Salvietti, Gionata
2018-01-01
This review reports the principal solutions proposed in the literature to reduce the complexity of the control and of the design of robotic hands, taking inspiration from the organization of the human brain. Several studies in neuroscience concerning the sensorimotor organization of the human hand proved that, despite the complexity of the hand, a few parameters can describe most of the variance in the patterns of configurations and movements. In other words, humans exploit a reduced set of parameters, known in the literature as synergies, to control their hands. In robotics, this dimensionality reduction can be achieved by coupling some of the degrees of freedom (DoFs) of the robotic hand, which results in a reduction of the required inputs. Such coupling can be obtained at the software level, exploiting mapping algorithms to reproduce human hand organization, and at the hardware level, through either rigid or compliant physical couplings between the joints of the robotic hand. This paper reviews the main solutions proposed for both approaches.
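As a hedged illustration of the software-level coupling mentioned above (not any specific hand described in the review), the Python sketch below extracts two postural synergies from synthetic joint-angle data with PCA and maps a two-dimensional input back to all joints; the joint count, data and synergy count are assumptions.
import numpy as np
rng = np.random.default_rng(0)

n_joints, n_postures = 20, 500
latent = rng.normal(size=(n_postures, 2))             # two hidden coordination patterns
mixing = rng.normal(size=(2, n_joints))
grasps = latent @ mixing + 0.05 * rng.normal(size=(n_postures, n_joints))   # recorded joint angles (toy)

mean = grasps.mean(axis=0)
U, S, Vt = np.linalg.svd(grasps - mean, full_matrices=False)
synergies = Vt[:2]                                     # first two principal directions ("postural synergies")

def joint_command(z):
    # map a 2-D synergy input to all 20 joint angles (software-level coupling of the DoFs)
    return mean + z @ synergies

print(joint_command(np.array([1.0, -0.5]))[:5])
print("variance explained by 2 synergies: %.1f%%" % (100 * (S[:2]**2).sum() / (S**2).sum()))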
Space and Time Partitioning with Hardware Support for Space Applications
NASA Astrophysics Data System (ADS)
Pinto, S.; Tavares, A.; Montenegro, S.
2016-08-01
Complex and critical systems like airplanes and spacecraft implement a rapidly growing number of functions. Typically, those systems were implemented with fully federated architectures, but the number and complexity of the desired functions of today's systems led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach for consolidation, by combining several applications into one single generic computing resource. The current approach moves toward the higher integration provided by space and time partitioning (STP) through system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace demands: performance, flexibility, safety and security, while simultaneously containing Size, Weight, Power and Cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.
Applications of artificial intelligence to mission planning
NASA Technical Reports Server (NTRS)
Ford, Donnie R.; Rogers, John S.; Floyd, Stephen A.
1990-01-01
The scheduling problem facing NASA-Marshall mission planning is extremely difficult for several reasons. The most critical factor is the computational complexity involved in developing a schedule. The size of the search space is large along some dimensions and infinite along others. It is because of this and other difficulties that many conventional operations research techniques are infeasible or inadequate to solve the problems by themselves. Therefore, the purpose is to examine various artificial intelligence (AI) techniques that can assist conventional techniques or replace them. The specific tasks performed were as follows: (1) to identify mission planning applications for object-oriented and rule-based programming; (2) to investigate interfacing AI-dedicated hardware (Lisp machines) to VAX hardware; (3) to demonstrate how Lisp may be called from within FORTRAN programs; (4) to investigate and report on programming techniques used in some commercial AI shells, such as the Knowledge Engineering Environment (KEE); and (5) to study and report on algorithmic methods to reduce complexity as related to AI techniques.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
Linear precoding based on polynomial expansion: reducing complexity in massive MIMO.
Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane
Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively "antenna-efficient" regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximizes this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
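To make the replacement concrete, the following Python sketch compares RZF with a four-term TPE precoder on a random channel; the polynomial coefficients here are simply least-squares fitted to 1/(lambda + xi) over the Gram-matrix eigenvalues, a stand-in for the SINR-optimal coefficients derived in the paper, and the dimensions and regularization are arbitrary.
import numpy as np
rng = np.random.default_rng(0)

M, K, xi, J = 64, 8, 0.1, 4                            # antennas, users, regularization, TPE terms
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2 * M)

# RZF: requires inverting a K x K matrix in every coherence block
W_rzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + xi * np.eye(K))

# TPE: replace the inverse by a low-degree matrix polynomial in the Gram matrix
G = H @ H.conj().T
lam = np.linalg.eigvalsh(G)
w = np.linalg.lstsq(np.vander(lam, J, increasing=True), 1.0 / (lam + xi), rcond=None)[0]

P = np.zeros((K, K), dtype=complex)
Gpow = np.eye(K, dtype=complex)
for wl in w:                                           # accumulate the matrix polynomial term by term
    P += wl * Gpow
    Gpow = Gpow @ G
W_tpe = H.conj().T @ P                                 # only matrix-matrix products, no inversion

print("relative precoder mismatch:", np.linalg.norm(W_tpe - W_rzf) / np.linalg.norm(W_rzf))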
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values that forms the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by an efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
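As a hedged, software-only illustration of the threshold filter described above (an axis-aligned box rather than the paper's general polyhedron, with made-up threshold values), the Python sketch below classifies pixels as skin or non-skin in HSV space.
import numpy as np
import colorsys

def hsv_skin_mask(rgb, h_range=(0.0, 0.14), s_range=(0.15, 0.7), v_range=(0.3, 1.0)):
    # axis-aligned HSV box (illustrative thresholds) standing in for the polyhedral filter
    flat = rgb.reshape(-1, 3) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    h, s, v = hsv[:, 0], hsv[:, 1], hsv[:, 2]
    inside = ((h >= h_range[0]) & (h <= h_range[1]) &
              (s >= s_range[0]) & (s <= s_range[1]) &
              (v >= v_range[0]) & (v <= v_range[1]))
    return inside.reshape(rgb.shape[:2])

img = np.array([[[210, 160, 130], [20, 200, 30]],
                [[0, 0, 255], [40, 40, 40]]], dtype=np.uint8)     # toy 2x2 "frame"
print(hsv_skin_mask(img))   # expect True only for the skin-like top-left pixel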
Amols, Howard I
2008-11-01
New technologies such as intensity modulated and image guided radiation therapy, computer controlled linear accelerators, record and verify systems, electronic charts, and digital imaging have revolutionized radiation therapy over the past 10-15 y. Quality assurance (QA) as historically practiced and as recommended in reports such as American Association of Physicists in Medicine Task Groups 40 and 53 needs to be updated to address the increasing complexity and computerization of radiotherapy equipment, and the increased quantity of data defining a treatment plan and treatment delivery. While new technology has reduced the probability of many types of medical events, new types of errors caused by improper use of new technology, communication failures between computers, corrupted or erroneous computer data files, and "software bugs" are now being seen. The use of computed tomography, magnetic resonance, and positron emission tomography imaging has become routine for many types of radiotherapy treatment planning, and QA for imaging modalities is beyond the expertise of most radiotherapy physicists. Errors in radiotherapy rarely result solely from hardware failures. More commonly they are a combination of computer and human errors. The increased use of radiosurgery, hypofractionation, more complex intensity modulated treatment plans, image guided radiation therapy, and increasing financial pressures to treat more patients in less time will continue to fuel this reliance on high technology and complex computer software. Clinical practitioners and regulatory agencies are beginning to realize that QA for new technologies is a major challenge and poses dangers different in nature from those that are historically familiar.
A Low-Complexity Circuit for On-Sensor Concurrent A/D Conversion and Compression
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
A low-complexity circuit for on-sensor compression is presented. The proposed circuit achieves complexity savings by combining a single-slope analog-to-digital converter with a Golomb-Rice entropy encoder and by implementing a low-complexity adaptation rule. The adaptation rule monitors the output codewords and minimizes their length by incrementing or decrementing the value of the Golomb-Rice coding parameter k. Its hardware implementation is one order of magnitude lower in complexity than existing adaptive algorithms. The compression circuit has been fabricated using a 0.35 μm CMOS technology and occupies an area of 0.0918 mm². Test measurements confirm the validity of the design.
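A hedged software model of the encoder side helps make the adaptation idea concrete; the rule below (grow k when the unary part dominates, shrink it when the quotient is zero) is an illustrative stand-in, not the exact rule implemented in the fabricated circuit.
def rice_code(u, k):
    # Golomb-Rice codeword for a non-negative integer: unary quotient, then k remainder bits
    q, r = u >> k, u & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

def encode_adaptive(values, k=4, k_max=15):
    out = []
    for u in values:
        cw = rice_code(u, k)
        out.append(cw)
        q = len(cw) - 1 - k            # unary part of the codeword just emitted
        if q > 1 and k < k_max:        # long codeword: use coarser quotient bins
            k += 1
        elif q == 0 and k > 0:         # quotient was zero: use finer bins
            k -= 1
    return ''.join(out)

print(encode_adaptive([3, 5, 2, 40, 37, 36, 4, 2, 1, 0]))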
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than the other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of the text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
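For readers unfamiliar with the format, the kernel being mapped to the FPGA is small; the following Python sketch builds a toy term-document matrix in CSR form and scores documents against a query with a sparse matrix-vector product (the data and weights are illustrative only).
import numpy as np

# toy term-document matrix in CSR form: rows = documents, columns = terms
data    = np.array([1.0, 2.0, 1.0, 3.0, 1.0, 1.0])   # nonzero term weights
indices = np.array([0, 2, 1, 2, 0, 3])               # column (term) ids of the nonzeros
indptr  = np.array([0, 2, 4, 6])                     # row start offsets for 3 documents

def csr_matvec(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):                        # each row is independent work -> parallelizable
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

query = np.array([1.0, 0.0, 1.0, 0.0])               # the query contains terms 0 and 2
print(csr_matvec(data, indices, indptr, query))      # document relevance scores: [3. 3. 1.]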
NASA Technical Reports Server (NTRS)
Hoffman, William C., III
1996-01-01
Determining deterioration characteristics of the Space Shuttle crew escape system pyrotechnic components loaded with hexanitrostilbene would enable us to establish a hardware life-limit for these items, so we could better plan our equipment use and, possibly, extend the useful life of the hardware. We subjected components to accelerated-age environments to determine degradation characteristics and established a hardware life-limit based upon observed and calculated trends. We extracted samples from manufacturing lots currently installed in the Space Shuttle crew escape system and from other NASA programs. Hardware included in the study consisted of various forms and ages of mild detonating fuse, linear shaped charge, and flexible confined detonating cord. The hardware types were segregated into 5 groups. One was subjected to detonation velocity testing for a baseline. Two were first subjected to prolonged 155 F heat exposure, and the other two were first subjected to 255 F exposure, before undergoing detonation velocity testing and/or chromatography analysis. Test results showed no measurable changes in performance that would allow a prediction of end of life given the storage and elevated-temperature environments the hardware experiences. Given the lack of a definitive performance trend, coupled with previous tests on post-flight Space Shuttle hardware showing no significant changes in chemical purity or detonation velocity, we recommend a safe increase in the useful life of the hardware to 20 years, from the current maximum limits of 10 and 15 years, depending on the hardware.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
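To make the SIRF data flow concrete, here is a minimal software sketch of one sample-weight-output-resample cycle for a scalar tracking problem, using a piecewise linear weight in place of the exponential likelihood as described above. The breakpoints of the piecewise function, the noise levels and the measurement values are illustrative assumptions rather than the paper's coefficients.

    import numpy as np

    def piecewise_linear_weight(err, scale=1.0):
        # triangular stand-in for the Gaussian likelihood
        a = abs(err) / scale
        return max(0.0, 1.0 - 0.5 * a) if a < 2.0 else 0.0

    def sir_step(particles, weights, measurement, q=0.1):
        # 1) sample: propagate particles through a random-walk motion model
        particles = particles + np.random.normal(0.0, q, size=particles.shape)
        # 2) weight: multiply prior weights by the piecewise linear likelihood
        like = np.array([piecewise_linear_weight(measurement - p) for p in particles])
        weights = weights * like
        weights /= weights.sum()
        # 3) output: weighted state estimate
        estimate = np.dot(weights, particles)
        # 4) resample: systematic resampling back to uniform weights
        positions = (np.arange(len(particles)) + np.random.rand()) / len(particles)
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                         len(particles) - 1)
        return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate

    particles = np.random.normal(0.0, 1.0, 200)
    weights = np.full(200, 1.0 / 200)
    for z in [0.2, 0.4, 0.55, 0.7]:        # noisy measurements of a drifting state
        particles, weights, est = sir_step(particles, weights, z)
        print(round(est, 3))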
NASA Astrophysics Data System (ADS)
Berdychowski, Piotr P.; Zabolotny, Wojciech M.
2010-09-01
The main goal of the C to VHDL compiler project is to make the FPGA platform more accessible for scientists and software developers. The FPGA platform offers the unique ability to configure the hardware to implement virtually any dedicated architecture, and modern devices provide a sufficient number of hardware resources to implement parallel execution platforms with complex processing units. All this makes the FPGA platform very attractive for those looking for an efficient, heterogeneous computing environment. The current industry standard for the development of digital systems on the FPGA platform is based on HDLs. Although very effective and expressive in the hands of hardware development specialists, these languages require specific knowledge and experience that are out of reach for most scientists and software programmers. The C to VHDL compiler project attempts to remedy that by creating an application that derives an initial VHDL description of a digital system (for further compilation and synthesis) from a purely algorithmic description in the C programming language. The idea itself is not new, and the C to VHDL compiler combines the best approaches from existing solutions developed over many previous years with the introduction of some new, unique improvements.
Hardware-software face detection system based on multi-block local binary patterns
NASA Astrophysics Data System (ADS)
Acasandrei, Laurentiu; Barriga, Angel
2015-03-01
Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The resulting implementation achieves an acceleration gain of between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
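The kernel operation accelerated by the IP is the evaluation of a single multi-block LBP feature: the mean of each of the eight neighbouring blocks is compared against the mean of the central block, and the comparison bits form an 8-bit code. A small software sketch of that kernel follows; block size, position and the test image are arbitrary illustrative values.

    import numpy as np

    def mb_lbp(img, y, x, bh, bw):
        """8-bit MB-LBP code for a 3x3 grid of bh x bw blocks at (y, x)."""
        def block_mean(by, bx):
            return img[by:by + bh, bx:bx + bw].mean()
        center = block_mean(y + bh, x + bw)
        # neighbouring blocks in clockwise order starting at the top-left
        offsets = [(0, 0), (0, bw), (0, 2 * bw),
                   (bh, 2 * bw), (2 * bh, 2 * bw),
                   (2 * bh, bw), (2 * bh, 0), (bh, 0)]
        code = 0
        for bit, (dy, dx) in enumerate(offsets):
            if block_mean(y + dy, x + dx) >= center:
                code |= 1 << bit
        return code

    img = np.arange(36, dtype=float).reshape(6, 6)   # synthetic test patch
    print(mb_lbp(img, 0, 0, 2, 2))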
Process of videotape making: presentation design, software, and hardware
NASA Astrophysics Data System (ADS)
Dickinson, Robert R.; Brady, Dan R.; Bennison, Tim; Burns, Thomas; Pines, Sheldon
1991-06-01
The use of technical video tape presentations for communicating abstractions of complex data is now becoming commonplace. While the use of video tapes in the day-to-day work of scientists and engineers is still in its infancy, their use at applications-oriented conferences is now growing rapidly. Despite these advancements, there is still very little written down about the process of making technical videotapes. For printed media, different presentation styles are well known for categories such as results reports, executive summary reports, and technical papers and articles. In this paper, the authors present ideas on the topic of technical videotape presentation design in a format that can serve as a reference. They have started to document the ways in which the experience of media specialists, teaching professionals, and character animators can be applied to scientific animation. Software and hardware considerations are also discussed. For this portion, distinctions are drawn between the software and hardware required for computer animation (frame-at-a-time) productions and live recorded interaction with a computer graphics display.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-21
Recent advancements in technology scaling have shown a trend towards greater integration, with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers a large and powerful collection of configuration options, and follows object-oriented design and functional programming principles to make functionality extension as easy as possible.
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-03-28
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System along with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperforms the software-based implementation.
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-01-01
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System along with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperforms the software-based implementation. PMID:28350358
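For readers who want a reference against which to check a hardware NFFT, the direct (O(NM)) non-equispaced discrete Fourier transform below computes the same sums that the NFFT approximates much faster via oversampled FFTs and window-function gridding. The nodes and coefficients are arbitrary test values.

    import numpy as np

    def ndft(nodes, coeffs):
        """f(x_j) = sum_k c_k * exp(-2*pi*i*k*x_j), k = -N/2 .. N/2-1."""
        N = len(coeffs)
        k = np.arange(-N // 2, N // 2)
        return np.array([np.sum(coeffs * np.exp(-2j * np.pi * k * x)) for x in nodes])

    nodes = np.array([-0.42, -0.1, 0.03, 0.37])    # non-equispaced in [-0.5, 0.5)
    coeffs = np.array([1.0, 0.5, -0.25, 0.1, 0.0, 0.2, -0.3, 0.05])
    print(np.round(ndft(nodes, coeffs), 4))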
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time direction of arrival (DOA) estimation.
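The processing chain that the multicore DSP distributes (covariance estimation, eigendecomposition, spectral search) can be sketched compactly in software. The reference below uses the standard complex-valued MUSIC formulation for a uniform linear array rather than the lower-complexity real-valued estimator used in the paper; array size, source angles and noise level are illustrative.

    import numpy as np

    def music_spectrum(snapshots, n_sources, n_angles=181, d=0.5):
        M = snapshots.shape[0]                                   # array elements
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)                     # ascending eigenvalues
        En = eigvecs[:, : M - n_sources]                         # noise subspace
        angles = np.linspace(-90.0, 90.0, n_angles)
        spectrum = np.empty(n_angles)
        for i, theta in enumerate(angles):
            a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta)))
            spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
        return angles, spectrum

    # two sources at -20 and +30 degrees, 8-element half-wavelength array, 200 snapshots
    M, N, d = 8, 200, 0.5
    true = np.deg2rad([-20.0, 30.0])
    A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(true)))
    S = np.random.randn(2, N) + 1j * np.random.randn(2, N)
    X = A @ S + 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
    angles, P = music_spectrum(X, n_sources=2)
    peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
    peaks.sort(key=lambda i: -P[i])
    print(sorted(angles[i] for i in peaks[:2]))                  # estimated DOAs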
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multi core processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin film transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in timing characteristics of hardware used commonly in psychological research.
KSC ground operations planning for Space Station
NASA Technical Reports Server (NTRS)
Lyon, J. R.; Revesz, W., Jr.
1993-01-01
At the Kennedy Space Center (KSC) in Florida, processing facilities are being built and activated to support the processing, checkout, and launch of Space Station elements. The generic capability of these facilities will be utilized to support resupply missions for payloads, life support services, and propellants for the 30-year life of the program. Special Ground Support Equipment (GSE) is being designed for Space Station hardware special handling requirements, and a Test, Checkout, and Monitoring System (TCMS) is under development to verify that the flight elements are ready for launch. The facilities and equipment used at KSC, along with the testing required to accomplish the mission, are described in detail to provide an understanding of the complexity of operations at the launch site. Assessments of hardware processing flows through KSC are being conducted to minimize the processing flow times for each hardware element. Baseline operations plans and the changes made to improve operations and reduce costs are described, recognizing that efficient ground operations are a major key to success of the Space Station.
Campus Financial Systems for the Future.
ERIC Educational Resources Information Center
Jonas, Stephen; And Others
This handbook guides college and university business officers, from small liberal arts colleges to community colleges to research universities, through the complex set of decisions and actions associated with replacing financial management systems. It lists the steps necessary to evaluate an institution's current hardware, network, and software;…
Determination of depth-viewing volumes for stereo three-dimensional graphic displays
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Williams, Steven P.
1990-01-01
Real-world, 3-D, pictorial displays incorporating true depth cues via stereopsis techniques offer a potential means of displaying complex information in a natural way to prevent loss of situational awareness and provide increases in pilot/vehicle performance in advanced flight display concepts. Optimal use of stereopsis requires an understanding of the depth viewing volume available to the display designer. Suggested guidelines are presented for the depth viewing volume from an empirical determination of the effective region of stereopsis cueing (at several viewer-CRT screen distances) for a time multiplexed stereopsis display system. The results provide the display designer with information that will allow more effective placement of depth information to enable the full exploitation of stereopsis cueing. Increasing viewer-CRT screen distances provides increasing amounts of usable depth, but with decreasing fields-of-view. A stereopsis hardware system that permits an increased viewer-screen distance by incorporating larger screen sizes or collimation optics to maintain the field-of-view at required levels would provide a much larger stereo depth-viewing volume.
A digital matched filter for reverse time chaos.
Bailey, J Phillip; Beal, Aubrey N; Dean, Robert N; Hamilton, Michael C
2016-07-01
The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.
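The decoding idea is a conventional FIR matched filter: the taps are the time-reversed (conjugated) samples of the expected basis pulse, and detection reduces to locating peaks in the filter output. The sketch below uses a generic decaying sinusoid as the pulse purely for illustration; it is not the closed-form basis pulse of the reverse-time chaotic system.

    import numpy as np

    def fir_matched_filter(signal, pulse):
        taps = np.conj(pulse[::-1])            # matched filter = time-reversed pulse
        return np.convolve(signal, taps, mode="valid")

    t = np.linspace(0.0, 1.0, 32)
    pulse = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 4.0 * t)   # assumed basis pulse
    signal = np.concatenate([np.zeros(40), pulse, np.zeros(40)])
    signal += 0.05 * np.random.randn(signal.size)              # additive noise
    out = fir_matched_filter(signal, pulse)
    print(int(np.argmax(out)))                 # peaks where the pulse aligns (~40)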
One year old and growing: a status report of the International Space Station and its partners
NASA Technical Reports Server (NTRS)
Bartoe, J. D.; Fortenberry, L.
2000-01-01
The International Space Station (ISS), as the largest international science and engineering program in history, features unprecedented technical, cost, scheduling, managerial, and international complexity. A number of major milestones have been accomplished to date, including the construction of major elements of flight hardware, the development of operations and sustaining engineering centers, astronaut training, and eight Space Shuttle/Mir docking missions. International partner contributions and levels of participation have been baselined, and negotiations and discussions are nearing completion regarding bartering arrangements for services and new hardware. As ISS is successfully executed, it can pave the way for more inspiring cooperative achievements in the future. Published by Elsevier Science Ltd.
Big Science, Small-Budget Space Experiment Package Aka MISSE-5: A Hardware And Software Perspective
NASA Technical Reports Server (NTRS)
Krasowski, Michael; Greer, Lawrence; Flatico, Joseph; Jenkins, Phillip; Spina, Dan
2007-01-01
Conducting space experiments with small budgets is a fact of life for many design groups with low-visibility science programs. One major consequence is that specialized space grade electronic components are often too costly to incorporate into the design. Radiation mitigation now becomes more complex as a result of being restricted to the use of commercial off-the-shelf (COTS) parts. Unique hardware and software design techniques are required to succeed in producing a viable instrument suited for use in space. This paper highlights some of the design challenges and associated solutions encountered in the production of a highly capable, low cost space experiment package.
Hardware implementation of CMAC neural network with reduced storage requirement.
Ker, J S; Kuo, Y H; Wen, R C; Liu, B D
1997-01-01
The cerebellar model articulation controller (CMAC) neural network has the advantages of fast convergence speed and low computation complexity. However, it suffers from a low storage space utilization rate on weight memory. In this paper, we propose a direct weight address mapping approach, which can reduce the required weight memory size with a utilization rate near 100%. Based on such an address mapping approach, we developed a pipeline architecture to efficiently perform the addressing operations. The proposed direct weight address mapping approach also speeds up the computation for the generation of weight addresses. Besides, a CMAC hardware prototype used for color calibration has been implemented to confirm the proposed approach and architecture.
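For orientation, the sketch below shows the conventional CMAC addressing scheme that the proposed direct weight address mapping improves on: each input activates one weight in each of several shifted tilings, and the tile coordinates are hashed into a fixed-size weight table, which is exactly where utilization losses arise. Layer count, resolution, table size and the training target are illustrative assumptions, not values from the paper.

    import numpy as np

    def cmac_addresses(x, n_layers=8, resolution=0.1, table_size=512):
        addrs = []
        for layer in range(n_layers):
            offset = layer * resolution / n_layers      # each tiling is shifted slightly
            tile = int((x + offset) // resolution)      # quantize input into a tile index
            addrs.append(hash((layer, tile)) % table_size)
        return addrs

    weights = np.zeros(512)

    def cmac_output(x):
        return sum(weights[a] for a in cmac_addresses(x))

    def cmac_train(x, target, lr=0.2, n_layers=8):
        err = target - cmac_output(x)
        for a in cmac_addresses(x):
            weights[a] += lr * err / n_layers           # spread the correction over layers

    for _ in range(50):                                 # learn y = sin(x) on a coarse grid
        for x in np.linspace(0.0, 3.0, 31):
            cmac_train(x, np.sin(x))
    print(round(cmac_output(1.5), 3), round(float(np.sin(1.5)), 3))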
NASA Technical Reports Server (NTRS)
Krasowski, Michael; Greer, Lawrence; Flatico, Joseph; Jenkins, Phillip; Spina, Dan
2005-01-01
Conducting space experiments with small budgets is a fact of life for many design groups with low-visibility science programs. One major consequence is that specialized space grade electronic components are often too costly to incorporate into the design. Radiation mitigation now becomes more complex as a result of being restricted to the use of commercial off-the-shelf (COTS) parts. Unique hardware and software design techniques are required to succeed in producing a viable instrument suited for use in space. This paper highlights some of the design challenges and associated solutions encountered in the production of a highly capable, low cost space experiment package.
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.
1980-01-01
An evaluation is presented, defined here as an assessment of the adequacy of the system design against known functional and performance requirements. The proposed Rockwell International AIDS 3 card, document, and data flows are presented to summarize the concepts involved and the relationships between functions. The analysis and evaluation include a study of system capability, processing rates, search requirements and response accuracy, as well as a consideration of operational components and hardware integration. Results indicate that the AIDS 3 System concept is operationally feasible if production capacity is slightly enhanced, but that operational complexity, hardware integration, and a lack of conceptual data pertinent to some of the functions are areas of concern.
A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor
González, Diego; Botella, Guillermo; Meyer-Baese, Uwe; García, Carlos; Sanz, Concepción; Prieto-Matías, Manuel; Tirado, Francisco
2012-01-01
This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA) and NIOS II microprocessor applying a C to Hardware (C2H) acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI) technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system is practical for real-time throughput and reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system. PMID:23201989
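The matching kernel at the heart of such a sensor is block matching with a sum-of-absolute-differences (SAD) cost over a local search window. A minimal full-search software sketch follows; block size, search range and the synthetic frames are illustrative, and the FPGA/NIOS II design accelerates this inner loop rather than running it in software.

    import numpy as np

    def sad(block_a, block_b):
        return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

    def best_motion_vector(ref, cur, y, x, block=8, search=4):
        target = cur[y:y + block, x:x + block]
        best, best_cost = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                ry, rx = y + dy, x + dx
                if 0 <= ry and ry + block <= ref.shape[0] and 0 <= rx and rx + block <= ref.shape[1]:
                    cost = sad(ref[ry:ry + block, rx:rx + block], target)
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
        return best, best_cost

    ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, -1), axis=(0, 1))        # whole frame moved by (2, -1)
    print(best_motion_vector(ref, cur, 8, 8))             # expect motion vector (-2, 1)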
Performance of GeantV EM Physics Models
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2017-10-01
The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.
A digital matched filter for reverse time chaos
NASA Astrophysics Data System (ADS)
Bailey, J. Phillip; Beal, Aubrey N.; Dean, Robert N.; Hamilton, Michael C.
2016-07-01
The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.
Electronic Business Transaction Infrastructure Analysis Using Petri Nets and Simulation
ERIC Educational Resources Information Center
Feller, Andrew Lee
2010-01-01
Rapid growth in eBusiness has made industry and commerce increasingly dependent on the hardware and software infrastructure that enables high-volume transaction processing across the Internet. Large transaction volumes at major industrial-firm data centers rely on robust transaction protocols and adequately provisioned hardware capacity to ensure…
Code of Federal Regulations, 2010 CFR
2010-04-01
... increases. (b) At the owner's option, the cost of the computer software may include service contracts to... requirements. (c) The source of funds for the purchase of hardware or software, or contracting for services for... formatted data, including either the purchase and maintenance of computer hardware or software, or both, the...
High-performance reconfigurable hardware architecture for restricted Boltzmann machines.
Ly, Daniel Le; Chow, Paul
2010-11-01
Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
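The computations that the FPGA engines accelerate are the dense matrix products and stochastic node updates of RBM training. The small sketch below shows one contrastive-divergence (CD-1) update in plain Python/NumPy for a tiny machine; the sizes, learning rate and random data are illustrative and much smaller than the 256 x 256-node instance reported above.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample(p):
        return (np.random.rand(*p.shape) < p).astype(float)

    def cd1_update(W, b_vis, b_hid, v0, lr=0.1):
        ph0 = sigmoid(v0 @ W + b_hid)          # positive phase
        h0 = sample(ph0)
        pv1 = sigmoid(h0 @ W.T + b_vis)        # reconstruction
        v1 = sample(pv1)
        ph1 = sigmoid(v1 @ W + b_hid)          # negative phase
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_vis += lr * (v0 - v1)
        b_hid += lr * (ph0 - ph1)

    n_vis, n_hid = 16, 8
    W = 0.01 * np.random.randn(n_vis, n_hid)
    b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
    data = (np.random.rand(100, n_vis) < 0.3).astype(float)   # random binary "training" data
    for v in data:
        cd1_update(W, b_vis, b_hid, v)
    print(W.shape, float(np.abs(W).mean()))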
Easy Handling of Sensors and Actuators over TCP/IP Networks by Open Source Hardware/Software
Mejías, Andrés; Herrera, Reyes S.; Márquez, Marco A.; Calderón, Antonio José; González, Isaías; Andújar, José Manuel
2017-01-01
There are several specific solutions for accessing sensors and actuators present in any process or system through a TCP/IP network, either local or a wide area type like the Internet. The usage of sensors and actuators of different nature and diverse interfaces (SPI, I2C, analogue, etc.) makes access to them from a network in a homogeneous and secure way more complex. A framework, including both software and hardware resources, is necessary to simplify and unify networked access to these devices. In this paper, a set of open-source software tools, specifically designed to cover the different issues concerning the access to sensors and actuators, and two proposed low-cost hardware architectures to operate with the abovementioned software tools are presented. They allow integrated and easy access to local or remote sensors and actuators. The software tools, integrated in the free authoring tool Easy Java and Javascript Simulations (EJS), solve the interaction issues between the subsystem that integrates sensors and actuators into the network, called the convergence subsystem in this paper, and the Human Machine Interface (HMI), designed using the intuitive graphical system of EJS and located on the user’s computer. The proposed hardware architectures and software tools are described and experimental implementations with the proposed tools are presented. PMID:28067801
Easy Handling of Sensors and Actuators over TCP/IP Networks by Open Source Hardware/Software.
Mejías, Andrés; Herrera, Reyes S; Márquez, Marco A; Calderón, Antonio José; González, Isaías; Andújar, José Manuel
2017-01-05
There are several specific solutions for accessing sensors and actuators present in any process or system through a TCP/IP network, either local or a wide area type like the Internet. The usage of sensors and actuators of different nature and diverse interfaces (SPI, I2C, analogue, etc.) makes access to them from a network in a homogeneous and secure way more complex. A framework, including both software and hardware resources, is necessary to simplify and unify networked access to these devices. In this paper, a set of open-source software tools, specifically designed to cover the different issues concerning the access to sensors and actuators, and two proposed low-cost hardware architectures to operate with the abovementioned software tools are presented. They allow integrated and easy access to local or remote sensors and actuators. The software tools, integrated in the free authoring tool Easy Java and Javascript Simulations (EJS), solve the interaction issues between the subsystem that integrates sensors and actuators into the network, called the convergence subsystem in this paper, and the Human Machine Interface (HMI), designed using the intuitive graphical system of EJS and located on the user's computer. The proposed hardware architectures and software tools are described and experimental implementations with the proposed tools are presented.
The Use of HFC (CFC Free) Processes at the NASA Stennis Space Center
NASA Technical Reports Server (NTRS)
Ross, Richard H.
1997-01-01
The search for alternatives to ozone depleting chemicals was heightened when, in 1990, the more than 65 countries that had signed the Montreal Protocol agreed to phase out these chemicals completely by the year 2000. In 1992, then-president Bush advanced this date for the United States to January 1, 1996. In 1991, it was realized that the planned phase out and eventual elimination of ozone depleting chemicals imposed by the Montreal Protocol and the resulting Clean Air Act (CAA) amendments would impact the cleaning and testing of aerospace hardware at the NASA Stennis Space Center. Because of this regulation, the Test & Engineering Sciences Laboratory has been working on solvent conversion studies to replace CFC-113. Aerospace hardware and test equipment used in rocket propulsion systems require extreme cleanliness levels to function and maintain their integrity. Because the cleanliness of aerospace hardware will be affected by the elimination of CFC-113, alternative cleaning technologies, including the use of fluorinated solvents, have been studied as potential replacements. Several aqueous processes have been identified for cleaning moderately sized components. However, no known aqueous alternative exists for cleaning and validating T&ME and complex geometry based hardware. This paper discusses the choices and the methodologies that were used to screen potential alternatives to CFC-113.
Superconducting Optoelectronic Circuits for Neuromorphic Computing
NASA Astrophysics Data System (ADS)
Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo
2017-03-01
Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm2 at 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
Symphony: A Framework for Accurate and Holistic WSN Simulation
Riliskis, Laurynas; Osipov, Evgeny
2015-01-01
Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
Design and Evolution of a Modular Tensegrity Robot Platform
NASA Technical Reports Server (NTRS)
Bruce, Jonathan; Caluwaerts, Ken; Iscen, Atil; Sabelhaus, Andrew P.; SunSpiral, Vytas
2014-01-01
NASA Ames Research Center is developing a compliant modular tensegrity robotic platform for planetary exploration. In this paper we present the design and evolution of the platform's main hardware component, an untethered, robust tensegrity strut, with rich sensor feedback and cable actuation. Each strut is a complete robot, and multiple struts can be combined together to form a wide range of complex tensegrity robots. Our current goal for the tensegrity robotic platform is the development of SUPERball, a 6-strut icosahedron underactuated tensegrity robot aimed at dynamic locomotion for planetary exploration rovers and landers, but the aim is for the modular strut to enable a wide range of tensegrity morphologies. SUPERball is a second generation prototype, evolving from the tensegrity robot ReCTeR, which is also a modular, lightweight, highly compliant 6-strut tensegrity robot that was used to validate our physics based NASA Tensegrity Robot Toolkit (NTRT) simulator. Many hardware design parameters of the SUPERball were driven by locomotion results obtained in our validated simulator. These evolutionary explorations helped constrain motor torque and speed parameters, along with strut and string stress. As construction of the hardware has finalized, we have also used the same evolutionary framework to evolve controllers that respect the built hardware parameters.
[Soft- and hardware support for the setup for computer tracking of radiation teletherapy].
Tarutin, I G; Piliavets, V I; Strakh, A G; Minenko, V F; Golubovskiĭ, A I
1983-06-01
A hardware and software computer-assisted complex has been developed for gamma-beam therapy. The complex includes all radiotherapeutic units, including a Siemens program-controlled betatron with an energy of 42 MeV, an ES-1022 computer, a Medigraf system for the processing of graphic information, a Mars-256 system for control over the homogeneity of the dose rate distribution across the irradiation field, and a package of mathematical programs to select an irradiation plan for various tumor sites. The prospects of using such complexes in the dosimetric support of radiation therapy are discussed.
2004-04-15
Combustion Module-1 (CM-1) was one of the most complex and technologically sophisticated pieces of hardware ever to be included as part of a Spacelab mission. Shown here are the two racks which comprised CM-1; the rack on the right shows the combustion chamber with the Structure Of Flame Balls at Low Lewis-numbers (SOFBALL) experiment inside.
Description of the microbial ecology evaluation device, flight equipment, and ground transporter
NASA Technical Reports Server (NTRS)
Chassay, C. E.; Taylor, G. R.
1973-01-01
Exposure of test systems in space required the fabrication of specialized hardware termed a Microbial Ecology Evaluation Device that had individual test chambers and a complex optical filter system. The characteristics of this device and the manner in which it was deployed in space are described.
Analysis of Software Systems for Specialized Computers,
computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of... purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)
Enhancing the Internet of Things Architecture with Flow Semantics
ERIC Educational Resources Information Center
DeSerranno, Allen Ronald
2017-01-01
Internet of Things ("IoT") systems are complex, asynchronous solutions often comprised of various software and hardware components developed in isolation of each other. These components function with different degrees of reliability and performance over an inherently unreliable network, the Internet. Many IoT systems are developed within…
Responding to an RFP: A Vendor's Viewpoint.
ERIC Educational Resources Information Center
Kington, Robert A.
1987-01-01
Outlines factors used by online vendors to decide whether to bid on RFPs (requests for proposals) for library automation systems, including specifications for software, hardware or performance requirements not met by the vendor; specifications based on competitors' systems; the size and complexity of the request itself; and vendors' time…
Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew
2009-01-01
Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for Space applications.
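To illustrate the adaptive-prediction idea in general terms (and explicitly not the JPL 'Fast Lossless' algorithm itself), the sketch below runs a normalized-LMS predictor along a smooth synthetic spectrum so that only small integer residuals would need to be entropy coded; the step size and test data are arbitrary.

    import numpy as np

    def adaptive_residuals(samples, mu=0.05):
        w = 1.0                                   # single predictor weight
        prev = float(samples[0])
        residuals = [int(samples[0])]             # first sample stored verbatim
        for s in samples[1:]:
            pred = w * prev
            e = float(s) - pred
            residuals.append(int(round(e)))
            w += mu * e * prev / (prev * prev + 1e-9)   # normalized LMS update
            prev = float(s)
        return residuals

    spectrum = np.cumsum(np.random.randint(-3, 4, size=64)) + 1000   # smooth test data
    res = adaptive_residuals(spectrum)
    # raw sample magnitude vs. residual magnitude: the residuals are far cheaper to code
    print(int(np.abs(spectrum).mean()), int(np.mean(np.abs(res[1:]))))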
Spintronic Nanodevices for Bioinspired Computing
Grollier, Julie; Querlioz, Damien; Stiles, Mark D.
2016-01-01
Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control for biomedical prostheses. However, one of the major challenges of fabricating bioinspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von Neumann bottleneck that arises when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal–oxide–semiconductor (CMOS) bioinspired hardware. PMID:27881881
Digital video timing analyzer for the evaluation of PC-based real-time simulation systems
NASA Astrophysics Data System (ADS)
Jones, Shawn R.; Crosby, Jay L.; Terry, John E., Jr.
2009-05-01
Due to the rapid acceleration in technology and the drop in costs, the use of commercial off-the-shelf (COTS) PC-based hardware and software components for digital and hardware-in-the-loop (HWIL) simulations has increased. However, the increase in PC-based components creates new challenges for HWIL test facilities such as cost-effective hardware and software selection, system configuration and integration, performance testing, and simulation verification/validation. This paper will discuss how the Digital Video Timing Analyzer (DiViTA) installed in the Aviation and Missile Research, Development and Engineering Center (AMRDEC) provides quantitative characterization data for PC-based real-time scene generation systems. An overview of the DiViTA is provided followed by details on measurement techniques, applications, and real-world examples of system benefits.
Friedrich, O; Schneidereit, D; Nikolaev, Y A; Nikolova-Krstevski, V; Schürmann, S; Wirth-Hücking, A; Merten, A L; Fatkin, D; Martinac, B
2017-11-01
Hollow organs (e.g. heart) experience pressure-induced mechanical wall stress sensed by molecular mechano-biosensors, including mechanosensitive ion channels, to translate into intracellular signaling. For direct mechanistic studies, stretch devices to apply defined extensions to cells adhered to elastomeric membranes have stimulated mechanotransduction research. However, most engineered systems only exploit unilateral cellular stretch. In addition, it is often taken for granted that stretch applied by hardware translates 1:1 to the cell membrane. However, the latter crucially depends on the tightness of the cell-substrate junction by focal adhesion complexes and is often not calibrated for. In the heart, (increased) hemodynamic volume/pressure load is associated with (increased) multiaxial wall tension, stretching individual cardiomyocytes in multiple directions. To adequately study cellular models of chronic organ distension at the cellular level, biomedical engineering faces challenges to implement multiaxial cell stretch systems that allow observing cell reactions to stretch during live-cell imaging, and to calibrate for hardware-to-cell membrane stretch translation. Here, we review mechanotransduction, cell stretch technologies from uni- to multiaxial designs in cardio-vascular research, and the importance of the stretch substrate-cell membrane junction. We also present new results using our IsoStretcher to demonstrate mechanosensitivity of Piezo1 in HEK293 cells and stretch-induced Ca2+ entry in 3D-hydrogel-embedded cardiomyocytes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
A Hardware Model Validation Tool for Use in Complex Space Systems
NASA Technical Reports Server (NTRS)
Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.
2010-01-01
One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.
Reinjection laser oscillator and method
McLellan, Edward J.
1984-01-01
A UV-preionized CO2 oscillator with an integral four-pass amplifier capable of providing 1 to 5 GW laser pulses with pulse widths from 0.1 to 0.5 ns full width at half-maximum (FWHM) is described. The apparatus is operated at any pressure from 1 atm to 10 atm without the necessity of complex high voltage electronics. The reinjection technique employed gives rise to a compact, efficient system that is particularly immune to alignment instabilities with a minimal amount of hardware and complexity.
Microwave techniques for measuring complex permittivity and permeability of materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guillon, P.
1995-08-01
Different materials are of fundamental importance to the aerospace, microwave, electronics and communications industries, and include, for example, microwave absorbing materials, antenna lenses and radomes, substrates for MMICs, and microwave components and antennas. Basic measurements of the complex permittivity and permeability of homogeneous solid materials in the microwave spectral region are described, including hardware, instrumentation and analysis. Elevated temperature measurements as well as measurement intercomparisons, with a discussion of the strengths and weaknesses of each technique, are also presented.
NASA Technical Reports Server (NTRS)
Owen, Robert B.; Gyekenyesi, Andrew L.; Inman, Daniel J.; Ha, Dong S.
2011-01-01
The Integrated Vehicle Health Management (IVHM) Project, sponsored by NASA's Aeronautics Research Mission Directorate, is conducting research to advance the state of highly integrated and complex flight-critical health management technologies and systems. An effective IVHM system requires Structural Health Monitoring (SHM). The impedance method is one such SHM technique for detecting and monitoring complex structures for damage. This position paper on the impedance method presents the current state of the art, future directions, applications and possible flight test demonstrations.
Laboratory complex for simulation of navigation signals of pseudosatellites
NASA Astrophysics Data System (ADS)
Ratushniak, V. N.; Gladyshev, A. B.; Sokolovskiy, A. V.; Mikhov, E. D.
2018-05-01
In this article, the organization and structure of navigation signals of pseudosatellites for a short-range navigation system based on a National Instruments hardware-software complex, and questions of their formation, are considered. A software model is presented that performs the formation and management of the pseudo-random sequence of a navigation signal and of the format of the transmitted pseudosatellite navigation information. A variant of constructing the transmitting equipment of the pseudosatellite base stations is also provided.
Evolvable Smartphone-Based Platforms for Point-of-Care In-Vitro Diagnostics Applications.
Patou, François; AlZahra'a Alatraktchi, Fatima; Kjægaard, Claus; Dimaki, Maria; Madsen, Jan; Svendsen, Winnie E
2016-09-03
The association of smart mobile devices and lab-on-chip technologies offers unprecedented opportunities for the emergence of direct-to-consumer in vitro medical diagnostics applications. Despite their clear transformative potential, obstacles remain to the large-scale disruption and long-lasting success of these systems in the consumer market. For instance, the increasing level of complexity of instrumented lab-on-chip devices, coupled to the sporadic nature of point-of-care testing, threatens the viability of a business model mainly relying on disposable/consumable lab-on-chips. We argued recently that system evolvability, defined as the design characteristic that facilitates more manageable transitions between system generations via the modification of an inherited design, can help remedy these limitations. In this paper, we discuss how platform-based design can constitute a formal entry point to the design and implementation of evolvable smart device/lab-on-chip systems. We present both a hardware/software design framework and the implementation details of a platform prototype enabling at this stage the interfacing of several lab-on-chip variants relying on current- or impedance-based biosensors. Our findings suggest that several change-enabling mechanisms implemented in the higher abstraction software layers of the system can promote evolvability, together with the design of change-absorbing hardware/software interfaces. Our platform architecture is based on a mobile software application programming interface coupled to a modular hardware accessory. It allows the specification of lab-on-chip operation and post-analytic functions at the mobile software layer. We demonstrate its potential by operating a simple lab-on-chip to carry out the detection of dopamine using various electroanalytical methods.
Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102
Petrovici, Mihai A; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Evolvable Smartphone-Based Platforms for Point-of-Care In-Vitro Diagnostics Applications
Patou, François; AlZahra’a Alatraktchi, Fatima; Kjægaard, Claus; Dimaki, Maria; Madsen, Jan; Svendsen, Winnie E.
2016-01-01
The association of smart mobile devices and lab-on-chip technologies offers unprecedented opportunities for the emergence of direct-to-consumer in vitro medical diagnostics applications. Despite their clear transformative potential, obstacles remain to the large-scale disruption and long-lasting success of these systems in the consumer market. For instance, the increasing level of complexity of instrumented lab-on-chip devices, coupled to the sporadic nature of point-of-care testing, threatens the viability of a business model mainly relying on disposable/consumable lab-on-chips. We argued recently that system evolvability, defined as the design characteristic that facilitates more manageable transitions between system generations via the modification of an inherited design, can help remedy these limitations. In this paper, we discuss how platform-based design can constitute a formal entry point to the design and implementation of evolvable smart device/lab-on-chip systems. We present both a hardware/software design framework and the implementation details of a platform prototype enabling at this stage the interfacing of several lab-on-chip variants relying on current- or impedance-based biosensors. Our findings suggest that several change-enabling mechanisms implemented in the higher abstraction software layers of the system can promote evolvability, together with the design of change-absorbing hardware/software interfaces. Our platform architecture is based on a mobile software application programming interface coupled to a modular hardware accessory. It allows the specification of lab-on-chip operation and post-analytic functions at the mobile software layer. We demonstrate its potential by operating a simple lab-on-chip to carry out the detection of dopamine using various electroanalytical methods. PMID:27598208
Integrated Hardware and Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
When an algorithm is distributed across multiple threads executing on many distinct processors, the loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When the implementation is massively distributed across hardware, the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints at which the current state of some or all of the execution is saved. Then, in the event of a failure, this state information can be used to restore the execution to that point and resume the computation from there. A serious problem with distributing a computation across multiple threads and physical processors is that it increases the likelihood of the algorithm failing through no fault of the scientist, but as a result of hardware faults coupled with operating system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into a monolithic structure that can be monitored in real time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in the atmospheres of other worlds. In particular, with respect to NASA's manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing reliability of manned spacecraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domanski, P.A.
1995-03-01
The report presents a theoretical analysis of three vapor compression cycles which are derived from the Rankine cycle by incorporating a liquid-line/suction-line heat exchanger, economizer, or ejector. These addendums to the basic cycle reduce throttling losses using different principles, and they require different mechanical hardware of different complexity and cost. The theoretical merits of the three modified cycles were evaluated in relation to the reversed Carnot and Rankine cycle. Thirty-eight fluids were included in the study using the Carnahan-Starling-DeSantis equation of state. In general, the benefit of these addendums increases with the amount of the throttling losses realized by the refrigerant in the Rankine cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann C.; Brandt, James M.; Tucker, Thomas
2011-09-01
This report provides documentation for the completion of the Sandia Level II milestone 'Develop feedback system for intelligent dynamic resource allocation to improve application performance'. This milestone demonstrates the use of a scalable data collection, analysis, and feedback system that provides lightweight insight into how an application is utilizing the hardware resources of a high performance computing (HPC) platform. Further, we demonstrate how the same mechanisms used for transporting data for remote analysis and visualization can provide low-latency run-time feedback to applications. The ultimate goal of this body of work is performance optimization in the face of the ever-increasing size and complexity of HPC systems.
Mission and science activity scheduling language
NASA Technical Reports Server (NTRS)
Hull, Larry G.
1993-01-01
To support the distributed and complex operational scheduling required for future National Aeronautics and Space Administration (NASA) missions, a formal, textual language, the Scheduling Applications Interface Language (SAIL), has been developed. Increased geographic dispersion of investigators is leading to distributed mission and science activity planning, scheduling, and operations. SAIL is an innovation which supports the effective and efficient communication of scheduling information among physically dispersed applications in distributed scheduling environments. SAIL offers a clear, concise, unambiguous expression of scheduling information in a readable, hardware independent format. The language concept, syntax, and semantics incorporate language features found useful during five years of research and prototyping with scheduling languages in physically distributed environments. SAIL allows concise specification of mission and science activity plans in a format which promotes repetition and reuse.
NASA Technical Reports Server (NTRS)
Peach, Robert; Malarky, Alastair
1990-01-01
Currently proposed mobile satellite communications systems require a high degree of flexibility in assignment of spectral capacity to different geographic locations. Conventionally this results in poor spectral efficiency which may be overcome by the use of bandwidth switchable filtering. Surface acoustic wave (SAW) technology makes it possible to provide banks of filters whose responses may be contiguously combined to form variable bandwidth filters with constant amplitude and phase responses across the entire band. The high selectivity possible with SAW filters, combined with the variable bandwidth capability, makes it possible to achieve spectral efficiencies over the allocated bandwidths of greater than 90 percent, while retaining full system flexibility. Bandwidth switchable SAW filtering (BSSF) achieves these gains with a negligible increase in hardware complexity.
Developing software to use parallel processing effectively. Final report, June-December 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Center, J.
1988-10-01
This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.
Pollution reduction technology program for small jet aircraft engines, phase 1
NASA Technical Reports Server (NTRS)
Bruce, T. W.; Davis, F. G.; Kuhn, T. E.; Mongia, H. C.
1977-01-01
A series of combustor pressure rig screening tests was conducted on three combustor concepts applied to the TFE731-2 turbofan engine combustion system for the purpose of evaluating their relative emissions reduction potential consistent with prescribed performance, durability, and envelope constraints. The three concepts and their modifications represented increasing potential for reducing emission levels with the penalty of increased hardware complexity and operational risk. Concept 1 entailed advanced modifications to the present production TFE731-2 combustion system. Concept 2 was based on the incorporation of an axial air-assisted airblast fuel injection system. Concept 3 was a staged premix/prevaporizing combustion system. Significant emissions reductions were achieved in all three concepts, consistent with acceptable combustion system performance. Concepts 2 and 3 were identified as having the greatest achievable emissions reduction potential, and were selected to undergo refinement to prepare for ultimate incorporation within an engine.
Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J
2012-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.
Integrated modeling and design for realizing a two-wheeled wheelchair for disabled.
Altalmas, Tareq; Aula, Abqori; Ahmad, Salmiah; Tokhi, M O; Akmeliawati, Rini
2016-01-01
Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces and allow the user to reach greater heights for pick-and-place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most modeling efforts have relied on the basic Newton's laws of motion, and some researchers have used 3D tools to model the system, but the resulting models are largely theoretical and far from practical implementation. This article aims to close that gap by combining conventional mathematical modeling with an integrated 3D modeling approach validated on an actual hardware implementation. To achieve this, both nonlinear and linearized state-space models were obtained from the mathematical model of the system for analysis; thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.
Architecture independent environment for developing engineering software on MIMD computers
NASA Technical Reports Server (NTRS)
Valimohamed, Karim A.; Lopez, L. A.
1990-01-01
Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.
Devices and circuits for nanoelectronic implementation of artificial neural networks
NASA Astrophysics Data System (ADS)
Turel, Ozgur
Biological neural networks perform complicated information processing tasks at speeds better than conventional computers running conventional algorithms. This has inspired researchers to look into the way these networks function, and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, either software or hardware, do not provide either the speed or the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation that it provides, may be used in developing more efficient artificial neural networks. This work consists of two major contributions in this direction. First is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.
Steering and positioning targets for HWIL IR testing at cryogenic conditions
NASA Astrophysics Data System (ADS)
Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.
2006-05-01
In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.
NASA Astrophysics Data System (ADS)
Bird, Robert; Nystrom, David; Albright, Brian
2017-10-01
The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
Memory Benchmarks for SMP-Based High Performance Parallel Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A B; de Supinski, B; Mueller, F
2001-11-20
As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
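As a rough illustration of the kind of measurement such a memory microbenchmark automates, the Python sketch below (a single-threaded toy, not the multi-threaded suite described above, which also relies on hardware performance monitors) times repeated traversal of working sets of increasing size; the drop in apparent bandwidth roughly marks cache-capacity boundaries. The working-set sizes and function name are arbitrary assumptions.

    import time
    import numpy as np

    def effective_bandwidth(n_bytes, repeats=20):
        """Time repeated summation over a working set of n_bytes and return
        an approximate read bandwidth in GB/s."""
        a = np.ones(n_bytes // 8, dtype=np.float64)  # 8 bytes per element
        a.sum()                                      # warm the caches once
        t0 = time.perf_counter()
        for _ in range(repeats):
            a.sum()
        dt = time.perf_counter() - t0
        return (n_bytes * repeats) / dt / 1e9

    if __name__ == "__main__":
        for kib in (32, 256, 2048, 16384, 131072):   # spans typical L1..DRAM sizes
            print(f"{kib:>7} KiB : {effective_bandwidth(kib * 1024):6.1f} GB/s")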
SEnviro: a sensorized platform proposal using open hardware and open standards.
Trilles, Sergio; Luján, Alejandro; Belmonte, Óscar; Montoliu, Raúl; Torres-Sospedra, Joaquín; Huerta, Joaquín
2015-03-06
The need for constant monitoring of environmental conditions has produced an increase in the development of wireless sensor networks (WSN). The drive towards smart cities has produced the need for smart sensors to be able to monitor what is happening in our cities. This, combined with the decrease in hardware component prices and the increase in the popularity of open hardware, has favored the deployment of sensor networks based on open hardware. The new trends in Internet Protocol (IP) communication between sensor nodes allow sensor access via the Internet, turning them into smart objects (Internet of Things and Web of Things). Currently, WSNs provide data in different formats. There is a lack of communication protocol standardization, which turns into interoperability issues when connecting different sensor networks or even when connecting different sensor nodes within the same network. This work presents a sensorized platform proposal that adheres to the principles of the Internet of Things and the Web of Things. Wireless sensor nodes were built using open hardware solutions, and communications rely on the HTTP/IP Internet protocols. The Open Geospatial Consortium (OGC) SensorThings API candidate standard was used as a neutral format to avoid interoperability issues. An environmental WSN developed following the proposed architecture was built as a proof of concept. Details on how to build each node and a study regarding energy concerns are presented.
SEnviro: A Sensorized Platform Proposal Using Open Hardware and Open Standards
Trilles, Sergio; Luján, Alejandro; Belmonte, Óscar; Montoliu, Raúl; Torres-Sospedra, Joaquín; Huerta, Joaquín
2015-01-01
The need for constant monitoring of environmental conditions has produced an increase in the development of wireless sensor networks (WSN). The drive towards smart cities has produced the need for smart sensors to be able to monitor what is happening in our cities. This, combined with the decrease in hardware component prices and the increase in the popularity of open hardware, has favored the deployment of sensor networks based on open hardware. The new trends in Internet Protocol (IP) communication between sensor nodes allow sensor access via the Internet, turning them into smart objects (Internet of Things and Web of Things). Currently, WSNs provide data in different formats. There is a lack of communication protocol standardization, which turns into interoperability issues when connecting different sensor networks or even when connecting different sensor nodes within the same network. This work presents a sensorized platform proposal that adheres to the principles of the Internet of Things and the Web of Things. Wireless sensor nodes were built using open hardware solutions, and communications rely on the HTTP/IP Internet protocols. The Open Geospatial Consortium (OGC) SensorThings API candidate standard was used as a neutral format to avoid interoperability issues. An environmental WSN developed following the proposed architecture was built as a proof of concept. Details on how to build each node and a study regarding energy concerns are presented. PMID:25756864
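For readers unfamiliar with the OGC SensorThings API mentioned in the two records above, the Python sketch below shows, in heavily hedged form, how a node or gateway might push a single observation to a SensorThings-style endpoint over HTTP. The base URL, the Datastream identifier and the exact payload fields are illustrative assumptions only and are not taken from the SEnviro implementation; a real deployment would need to match the entities and identifiers exposed by the actual server.

    import json
    import urllib.request

    # Hypothetical SensorThings-style endpoint; URL and Datastream id are assumptions.
    BASE_URL = "http://example.org/SensorThings/v1.0"

    def post_observation(result, datastream_id, phenomenon_time):
        """POST one observation to an OGC SensorThings-style service."""
        payload = {
            "phenomenonTime": phenomenon_time,        # ISO 8601 timestamp
            "result": result,                         # e.g. temperature in deg C
            "Datastream": {"@iot.id": datastream_id},
        }
        req = urllib.request.Request(
            BASE_URL + "/Observations",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example call (only meaningful against a real server):
    # post_observation(21.3, 1, "2015-03-06T12:00:00Z")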
EMU processing - A myth dispelled
NASA Technical Reports Server (NTRS)
Peacock, Paul R.; Wilde, Richard C.; Lutz, Glenn C.; Melgares, Michael A.
1991-01-01
The refurbishment-and-checkout 'processing' activities entailed by the Space Shuttle Extravehicular Mobility Units (EMUs) are currently significantly more modest, at 1050 man-hours, than when Space Shuttle services began (involving about 4000 man-hours). This great improvement in hardware efficiency is due to the design or modification of test rigs for simplification of procedures, as well as those procedures' standardization, in conjunction with an increase in hardware confidence which has allowed the extension of inspection, service, and testing intervals. Recent simplification of the hardware-processing sequence could reduce EMU processing requirements to 600 man-hours in the near future.
A digital matched filter for reverse time chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, J. Phillip, E-mail: mchamilton@auburn.edu; Beal, Aubrey N.; Dean, Robert N.
2016-07-15
The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.
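The matched filter described above reduces to a finite impulse response correlation against a known basis pulse. The Python sketch below illustrates that generic FIR form with a placeholder pulse; the actual coefficients in the paper come from the closed-form solution of the reverse-time chaotic system and are not reproduced here.

    import numpy as np

    def matched_filter(received, template):
        """Correlate the received samples against a known basis pulse by
        convolving with the time-reversed (conjugated) template -- the
        direct FIR form of a matched filter."""
        taps = np.conj(template[::-1])
        return np.convolve(received, taps, mode="valid")

    # Toy example with a placeholder pulse; real coefficients come from the
    # closed-form basis pulse of the reverse-time chaotic system.
    pulse = np.array([0.1, 0.4, 1.0, 0.4, 0.1])
    rx = np.concatenate([np.zeros(10), pulse, np.zeros(10)]) + 0.05 * np.random.randn(25)
    peak = int(np.argmax(matched_filter(rx, pulse)))
    print("detected pulse start near sample", peak)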
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Demonstration Advanced Avionics System (DAAS) function description
NASA Technical Reports Server (NTRS)
Bailey, A. J.; Bailey, D. G.; Gaabo, R. J.; Lahn, T. G.; Larson, J. C.; Peterson, E. M.; Schuck, J. W.; Rodgers, D. L.; Wroblewski, K. A.
1982-01-01
The Demonstration Advanced Avionics System, DAAS, is an integrated avionics system utilizing microprocessor technologies, data busing, and shared displays for demonstrating the potential of these technologies in improving the safety and utility of general aviation operations in the late 1980's and beyond. Major hardware elements of the DAAS include a functionally distributed microcomputer complex, an integrated data control center, an electronic horizontal situation indicator, and a radio adaptor unit. All processing and display resources are interconnected by an IEEE-488 bus in order to enhance the overall system effectiveness, reliability, modularity and maintainability. A detailed description of the DAAS architecture, the DAAS hardware, and the DAAS functions is presented. The system is designed for installation and flight test in a NASA Cessna 402-B aircraft.
Programming languages and compiler design for realistic quantum hardware
NASA Astrophysics Data System (ADS)
Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret
2017-09-01
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Digital Equity and Intercultural Education
ERIC Educational Resources Information Center
Resta, Paul; Laferrière, Thérèse
2015-01-01
Digital equity and intercultural education continue to be areas of concern in the emerging knowledge-based society. The digital divide is present across the globe as the result of a complex of factors such as the inequality in: access to hardware and connectivity; autonomy of use; digital and literacy skills; availability of technical and social…
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
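As background for the mapping the abstract refers to, the Python sketch below shows the standard (unoptimized) max-log computation of per-bit log-likelihood ratios from per-symbol log-likelihoods. The innovation above reduces the cost of exactly this kind of mapping; nothing below is taken from the patented algorithm, and the example numbers are made up.

    import numpy as np

    def bit_llrs_maxlog(symbol_log_likelihoods, bits_per_symbol):
        """Map per-symbol log-likelihoods to per-bit LLRs with the max-log
        approximation: LLR(b) = max over symbols whose label has bit b = 0
        minus max over symbols whose label has bit b = 1.  This is the
        straightforward reference computation, not the reduced-complexity
        algorithm described above."""
        llrs = []
        for b in range(bits_per_symbol):
            mask = np.array([(s >> b) & 1 for s in range(2 ** bits_per_symbol)])
            llrs.append(symbol_log_likelihoods[mask == 0].max()
                        - symbol_log_likelihoods[mask == 1].max())
        return np.array(llrs)

    # 8-ary example: log-likelihoods for symbols 0..7 (made-up numbers)
    print(bit_llrs_maxlog(np.array([-1.2, -0.3, -2.5, -4.0, -0.9, -3.1, -2.2, -5.0]), 3))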
The Role of Skepticism in Preparing Teachers for the Use of Technology.
ERIC Educational Resources Information Center
Albaugh, Patti R.
The complexity of technology training for teachers can be partially explained in terms of three phenomena: the historical resistance of teachers to use media, the nature of teaching itself, and the life cycle of technological innovations. Factors that influence teachers' use of technology include: accessibility of hardware and software,…
Digital avionics: A cornerstone of aviation
NASA Technical Reports Server (NTRS)
Spitzer, Cary R.
1990-01-01
Digital avionics is continually expanding its role in communication (HF and VHF, satellite, data links), navigation (ground-based systems, inertial and satellite-based systems), and fly-by-wire control. Examples of electronic flight control system architecture, pitch, roll, and yaw control are presented. Modeling of complex hardware systems, electromagnetic interference, and software are discussed.
Eye-Tracking Study of Complexity in Gas Law Problems
ERIC Educational Resources Information Center
Tang, Hui; Pienta, Norbert
2012-01-01
This study, part of a series investigating students' use of online tools to assess problem solving, uses eye-tracking hardware and software to explore the effect of problem difficulty and cognitive processes when students solve gas law word problems. Eye movements are indices of cognition; eye-tracking data typically include the location,…
Software for Simulating a Complex Robot
NASA Technical Reports Server (NTRS)
Goza, S. Michael
2003-01-01
RoboSim (Robot Simulation) is a computer program that simulates the poses and motions of the Robonaut, a developmental anthropomorphic robot that has a complex system of joints with 43 degrees of freedom and multiple modes of operation and control. RoboSim performs a full kinematic simulation of all degrees of freedom. It also includes interface components that duplicate the functionality of the real Robonaut interface with control software and human operators. Basically, users see no difference between the real Robonaut and the simulation. Consequently, new control algorithms can be tested by computational simulation, without risk to the Robonaut hardware, and without using excessive Robonaut-hardware experimental time, which is always at a premium. Previously developed software incorporated into RoboSim includes Enigma (for graphical displays), OSCAR (for kinematical computations), and NDDS (for communication between the Robonaut and external software). In addition, RoboSim incorporates unique inverse-kinematical algorithms for chains of joints that have fewer than six degrees of freedom (e.g., finger joints). In comparison with the algorithms of OSCAR, these algorithms are more readily adaptable and provide better results when using equivalent sets of data.
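To illustrate the kind of reduced-degree-of-freedom inverse kinematics mentioned above for short chains such as fingers, the Python sketch below gives the textbook closed-form solution for a planar two-link chain. It is a generic example, not RoboSim's algorithm, and the link lengths and target point are arbitrary.

    import math

    def two_link_ik(x, y, l1, l2, elbow_up=True):
        """Closed-form inverse kinematics of a planar two-link chain: returns
        joint angles (q1, q2) that place the tip at (x, y).  Raises ValueError
        if the target is out of reach."""
        r2 = x * x + y * y
        c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if not -1.0 <= c2 <= 1.0:
            raise ValueError("target out of reach")
        q2 = math.acos(c2) * (1.0 if elbow_up else -1.0)
        q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
        return q1, q2

    q1, q2 = two_link_ik(0.6, 0.4, 0.5, 0.4)
    print(math.degrees(q1), math.degrees(q2))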
Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel
NASA Technical Reports Server (NTRS)
Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O'Neal, Anthony W.
2014-01-01
Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing - with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle of attack sweeps to assess aerodynamic hysteresis is summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and it is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
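The Python sketch below illustrates, in a generic way, the squared-Euclidean branch metrics a soft-decision 8-PSK decoder operates on, together with a naive uniform float-to-integer quantization of those metrics. The paper's non-uniform mapping scheme, which is what actually reduces the hardware complexity, is not reproduced here, and the noise model is a made-up example.

    import numpy as np

    # 8-PSK constellation points on the unit circle
    psk8 = np.exp(2j * np.pi * np.arange(8) / 8)

    def branch_metrics(rx_sample):
        """Squared Euclidean distance from one received sample to every
        8-PSK point -- the soft branch metrics used by the decoder."""
        return np.abs(rx_sample - psk8) ** 2

    def quantize(metrics, n_bits=4):
        """Naive uniform float-to-integer mapping of the metrics.  The paper
        uses a non-uniform mapping to cut hardware cost; this shows only the
        baseline idea."""
        scale = (2 ** n_bits - 1) / metrics.max()
        return np.round(metrics * scale).astype(int)

    rx = 0.9 * np.exp(1j * 0.4) + 0.1 * (np.random.randn() + 1j * np.random.randn())
    print(quantize(branch_metrics(rx)))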
EVA Suit R and D for Performance Optimization
NASA Technical Reports Server (NTRS)
Cowley, Matthew S.; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar
2014-01-01
Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for R&D are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques which focus on human-centric designs by creating virtual prototype simulations and fully adjustable physical prototypes of suit hardware. During the R&D design phase, these easily modifiable representations of an EVA suit's hard components will allow designers to think creatively and exhaust design possibilities before they build and test working prototypes with human subjects. It allows scientists to comprehensively benchmark current suit capabilities and limitations for existing suit sizes and sizes that do not exist. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process, enables the use of human performance as design criteria, and enables designs to target specific populations.
NASA Astrophysics Data System (ADS)
Erez, Mattan; Dally, William J.
Stream processors, like other multi-core architectures, partition their functional units and storage into multiple processing elements. In contrast to typical architectures, which contain symmetric general-purpose cores and a cache hierarchy, stream processors have a significantly leaner design. Stream processors are specifically designed for the stream execution model, in which applications have large amounts of explicit parallel computation, structured and predictable control, and memory accesses that can be performed at a coarse granularity. Applications in the streaming model are expressed in a gather-compute-scatter form, yielding programs with explicit control over transferring data to and from on-chip memory. Relying on these characteristics, which are common to many media processing and scientific computing applications, stream architectures redefine the boundary between software and hardware responsibilities, with software bearing much of the complexity required to manage concurrency, locality, and latency tolerance. Thus, stream processors have minimal control, consisting of fetching medium- and coarse-grained instructions and executing them directly on the many ALUs. Moreover, the on-chip storage hierarchy of stream processors is under explicit software control, as is all communication, eliminating the need for complex reactive hardware mechanisms.
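A hedged illustration of the gather-compute-scatter form described above, written with NumPy arrays as a stand-in for the explicit transfers between off-chip memory and the software-managed on-chip store; the data, sizes and index patterns are made up.

    import numpy as np

    values = np.random.rand(1000)          # data living in "off-chip" memory
    indices = np.random.randint(0, 1000, size=256)

    gathered = values[indices]             # gather: bring the working set on chip
    computed = np.sqrt(gathered) * 2.0     # compute: dense, predictable arithmetic
    out = np.zeros_like(values)
    np.add.at(out, indices, computed)      # scatter: write results back (accumulates
                                           # correctly when indices repeat)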
Lunar Landing Operational Risk Model
NASA Technical Reports Server (NTRS)
Mattenberger, Chris; Putney, Blake; Rust, Randy; Derkowski, Brian
2010-01-01
Characterizing the risk of spacecraft goes beyond simply modeling equipment reliability. Some portions of the mission require complex interactions between system elements that can lead to failure without an actual hardware fault. Landing risk is currently the least characterized aspect of the Altair lunar lander and appears to result from complex temporal interactions between pilot, sensors, surface characteristics and vehicle capabilities rather than hardware failures. The Lunar Landing Operational Risk Model (LLORM) seeks to provide rapid and flexible quantitative insight into the risks driving the landing event and to gauge sensitivities of the vehicle to changes in system configuration and mission operations. The LLORM takes a Monte Carlo based approach to estimate the operational risk of the Lunar Landing Event and calculates estimates of the risk of Loss of Mission (LOM) - Abort Required and is Successful, Loss of Crew (LOC) - Vehicle Crashes or Cannot Reach Orbit, and Success. The LLORM is meant to be used during the conceptual design phase to inform decision makers transparently of the reliability impacts of design decisions, to identify areas of the design which may require additional robustness, and to aid in the development and flow-down of requirements.
Cellular changes in microgravity and the design of space radiation experiments
NASA Technical Reports Server (NTRS)
Morrison, D. R.
1994-01-01
Cell metabolism, secretion and cell-cell interactions can be altered during space flight. Early radiobiology experiments have demonstrated synergistic effects of radiation and microgravity as indicated by increased mutagenesis, increased chromosome aberrations, inhibited development, and retarded growth. Microgravity-induced changes in immune cell functions include reduced blastogenesis and cell-mediated, delayed-type hypersensitivity responses, increased cytokine secretions, but inhibited cytotoxic effects and macrophage differentiation. These effects are important because of the high radiosensitivity of immune cells. It is difficult to compare ground studies with space radiation biology experiments because of the complexity of the space radiation environment, types of radiation damage and repair mechanisms. Altered intracellular functions and molecular mechanisms must be considered in the design and interpretation of space radiation experiments. Critical steps in radiocarcinogenesis could be affected. New cell systems and hardware are needed to determine the biological effectiveness of the low dose rate, isotropic, multispectral space radiation and the potential usefulness of radioprotectants during space flight.
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
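To show why exhaustive interaction analysis is so expensive, and hence why GPU acceleration pays off, the toy Python scan below scores every SNP pair with a crude association measure. Only the O(n^2)-pairs loop structure is the point here; the statistic is not the MDR measure described above, and the genotype and phenotype data are random placeholders.

    import numpy as np
    from itertools import combinations

    def pairwise_scan(genotypes, phenotype):
        """Exhaustively score every SNP pair with a crude association measure.
        Illustrates the cost of the search (a full pass over all samples for
        every pair); it is not the MDR statistic itself."""
        n_samples, n_snps = genotypes.shape
        best = (None, -1.0)
        for i, j in combinations(range(n_snps), 2):
            cell = genotypes[:, i] * 3 + genotypes[:, j]   # 3x3 genotype combinations
            deviations = [abs(phenotype[cell == c].mean() - phenotype.mean())
                          for c in range(9) if (cell == c).any()]
            score = max(deviations)
            if score > best[1]:
                best = ((i, j), score)
        return best

    geno = np.random.randint(0, 3, size=(200, 50))   # 200 samples, 50 SNPs
    pheno = np.random.randint(0, 2, size=200)        # case/control labels
    print(pairwise_scan(geno, pheno))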
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in verification of large DC power systems with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistance characteristics. The ISS power system presents numerous challenges with respect to system stability such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to size and complexity of the system. Computer modeling has been extensively used to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs which are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system. DoE provides information about various operating scenarios and identification of the ones with potential for instability. In this paper we will describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples using applications of DoE to analysis and verification of the ISS power system are provided.
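As a minimal illustration of the DoE idea of covering a parameter space with a structured set of runs, the Python sketch below enumerates a two-level full factorial design. The converter parameter names and ranges are invented for illustration, and real verification studies would typically use fractional or optimal designs to reduce the run count much further.

    from itertools import product

    def two_level_factorial(factors):
        """Return all low/high combinations of the given factors -- the simplest
        Design of Experiments construction."""
        names = list(factors)
        return [dict(zip(names, combo))
                for combo in product(*(factors[n] for n in names))]

    # Hypothetical converter/system parameters (names and ranges are made up)
    design = two_level_factorial({
        "input_filter_C_uF": (100.0, 470.0),
        "load_power_W": (50.0, 500.0),
        "cable_length_m": (2.0, 20.0),
    })
    print(len(design), "runs, e.g.", design[0])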
ISS Internal Active Thermal Control System (IATCS) Coolant Remediation Project -2006 Update
NASA Technical Reports Server (NTRS)
Morrison, Russell H.; Holt, Mike
2006-01-01
The IATCS coolant has experienced a number of anomalies in the time since the US Lab was first activated on Flight 5A in February 2001. These have included: 1) a decrease in coolant pH, 2) increases in inorganic carbon, 3) a reduction in phosphate concentration, 4) an increase in dissolved nickel and precipitation of nickel salts, and 5) increases in microbial concentration. These anomalies represent some risk to the system, have been implicated in some hardware failures and are suspect in others. The ISS program has conducted extensive investigations of the causes and effects of these anomalies and has developed a comprehensive program to remediate the coolant chemistry of the on-orbit system as well as provide a robust and compatible coolant solution for the hardware yet to be delivered. This paper presents a status of the coolant stability over the past year as well as results from destructive analyses of hardware removed from the on-orbit system and the current approach to coolant remediation.
Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.
Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C
2016-04-01
As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications, 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7 patients who underwent surgery for ASD after a remote fusion. These patients later developed a fracture of the fusion mass after hardware removal from their previously successfully fused segment. All patients had a high sagittal imbalance and had previously undergone multiple spinal operations. The development of a spontaneous fracture of the fusion mass may be related to sagittal imbalance. Consideration should be given to reimplanting hardware for these patients, even across good fusions, to prevent spontaneous fracture of these areas if the sagittal imbalance is not corrected.
Software for Managing Inventory of Flight Hardware
NASA Technical Reports Server (NTRS)
Salisbury, John; Savage, Scott; Thomas, Shirman
2003-01-01
The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
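The Python sketch below shows only the ordered-subsets loop structure referred to above, with a plain gradient-type correction standing in for the convex transmission-CT update and a random matrix standing in for the scanner's projector. It does not use graphics hardware and is not the authors' algorithm; the step size and problem dimensions are arbitrary assumptions.

    import numpy as np

    def ordered_subsets_sketch(A, y, n_subsets=4, n_iters=10, step=1e-3):
        """Schematic ordered-subsets iteration: each sub-iteration forward-projects
        with only a subset of the measurement rows, compares with the data, and
        back-projects a correction.  A simple gradient step stands in for the
        statistical update used in practice."""
        n_meas, n_vox = A.shape
        x = np.zeros(n_vox)
        subsets = [np.arange(s, n_meas, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iters):
            for rows in subsets:
                residual = A[rows] @ x - y[rows]                 # forward project subset
                x -= step * n_subsets * (A[rows].T @ residual)   # back project correction
        return x

    # Tiny synthetic problem (stand-in for a real projector and sinogram)
    A = np.random.rand(64, 16)
    truth = np.random.rand(16)
    x_hat = ordered_subsets_sketch(A, A @ truth)
    print(float(np.linalg.norm(x_hat - truth)))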
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using a block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
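To make the decorrelation step concrete, the Python sketch below performs a single lifting-based Haar step on a 1D signal. It is a deliberately simple stand-in: the LWT described above applies a full 2D, multi-level transform with block-based scheduling and reproduces the global transform exactly, which this toy does not attempt.

    import numpy as np

    def haar_lifting_step(x):
        """One lifting-based Haar step: predict the odd samples from the even
        ones, then update the even samples, giving detail (d) and approximation
        (s) halves.  The step is invertible by undoing the two sub-steps in
        reverse order."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        d = odd - even          # predict step
        s = even + d / 2.0      # update step (s is the pairwise average)
        return s, d

    x = np.array([10, 12, 11, 9, 50, 52, 51, 49])
    s, d = haar_lifting_step(x)
    print("approximation:", s)
    print("detail:       ", d)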
Versatile fluid-mixing device for cell and tissue microgravity research applications.
Wilfinger, W W; Baker, C S; Kunze, E L; Phillips, A T; Hammerstedt, R H
1996-01-01
Microgravity life-science research requires hardware that can be easily adapted to a variety of experimental designs and working environments. The Biomodule is a patented, computer-controlled fluid-mixing device that can accommodate these diverse requirements. A typical shuttle payload contains eight Biomodules with a total of 64 samples, a sealed containment vessel, and a NASA refrigeration-incubation module. Each Biomodule contains eight gas-permeable Silastic T tubes that are partitioned into three fluid-filled compartments. The fluids can be mixed at any user-specified time. Multiple investigators and complex experimental designs can be easily accommodated with the hardware. During flight, the Biomodules are sealed in a vessel that provides two levels of containment (liquids and gas) and a stable, investigator-controlled experimental environment that includes regulated temperature, internal pressure, humidity, and gas composition. A cell microencapsulation methodology has also been developed to streamline launch-site sample manipulation and accelerate postflight analysis through the use of fluorescence-activated cell sorting. The Biomodule flight hardware and analytical cell encapsulation methodology are ideally suited for temporal, qualitative, or quantitative life-science investigations.
An Embedded Sensor Node Microcontroller with Crypto-Processors.
Panić, Goran; Stecklina, Oliver; Stamenković, Zoran
2016-04-27
Wireless sensor network applications range from industrial automation and control, agricultural and environmental protection, to surveillance and medicine. In most applications, data are highly sensitive and must be protected from any type of attack and abuse. Security challenges in wireless sensor networks are mainly defined by the power and computing resources of sensor devices, memory size, quality of radio channels and susceptibility to physical capture. In this article, an embedded sensor node microcontroller designed to support sensor network applications with severe security demands is presented. It features a low-power 16-bit processor core supported by a number of hardware accelerators designed to perform complex operations required by advanced crypto algorithms. The microcontroller integrates an embedded Flash and an 8-channel 12-bit analog-to-digital converter, making it a good solution for low-power sensor nodes. The article discusses the most important security topics in wireless sensor networks and presents the architecture of the proposed hardware solution. Furthermore, it gives details on the chip implementation, verification and hardware evaluation. Finally, the chip power dissipation and performance figures are estimated and analyzed.
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
An intelligent maximum permissible exposure meter for safety assessments of laser radiation
NASA Astrophysics Data System (ADS)
Corder, D. A.; Evans, D. R.; Tyrer, J. R.
1996-09-01
There is frequently a need to make laser power or energy density measurements when determining whether radiation from a laser system exceeds the Maximum Permissible Exposure (MPE) as defined in BS EN 60825. This can be achieved using standard commercially available laser power or energy measurement equipment, but some of these have shortcomings when used in this application. Calculations must be performed by the user to compare the measured value to the MPE. The measurement and calculation procedure appears complex to the nonexpert who may be performing the assessment. A novel approach is described which uses purpose designed hardware and software to simplify the process. The hardware is optimized for measuring the relatively low powers associated with MPEs. The software runs on a Psion Series 3a palmtop computer. This reduces the cost and size of the system yet allows graphical and numerical presentation of data. Data output to other software running on PCs is also possible, enabling the instrument to be used as part of a quality system. Throughout the measurement process the opportunity for user error has been minimized by the hardware and software design.
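For illustration, the comparison such an instrument automates can be reduced to converting the measured power to an irradiance over a limiting aperture and testing it against an MPE value. The 7 mm aperture and the 10 W/m^2 MPE used below are assumed example numbers for the sketch, not values taken from BS EN 60825.

    import math

    def exceeds_mpe(measured_power_w, mpe_w_per_m2, aperture_diameter_m=7e-3):
        """Compare measured power, averaged over the limiting aperture, with an MPE irradiance."""
        aperture_area = math.pi * (aperture_diameter_m / 2) ** 2
        irradiance = measured_power_w / aperture_area          # W/m^2 over the aperture
        return irradiance, irradiance > mpe_w_per_m2

    irradiance, too_high = exceeds_mpe(measured_power_w=1e-3, mpe_w_per_m2=10.0)
    print(f"irradiance = {irradiance:.1f} W/m^2, exceeds MPE: {too_high}")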
Green Secure Processors: Towards Power-Efficient Secure Processor Design
NASA Astrophysics Data System (ADS)
Chhabra, Siddhartha; Solihin, Yan
With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.
The IRMIS object model and services API.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, C.; Dohan, D. A.; Arnold, N. D.
2005-01-01
The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Modeling (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not generate EPICS control system configurations from IRMIS, and hence do not impose any additional requirements on EPICS developers.
Lossless compression algorithm for REBL direct-write e-beam lithography system
NASA Astrophysics Data System (ADS)
Cramer, George; Liu, Hsin-I.; Zakhor, Avideh
2010-03-01
Future lithography systems must produce microchips with smaller feature sizes, while maintaining throughputs comparable to those of today's optical lithography systems. This places stringent constraints on the effective data throughput of any maskless lithography system. In recent years, we have developed a datapath architecture for direct-write lithography systems, and have shown that compression plays a key role in reducing throughput requirements of such systems. Our approach integrates a low complexity hardware-based decoder with the writers, in order to decompress a compressed data layer in real time on the fly. In doing so, we have developed a spectrum of lossless compression algorithms for integrated circuit layout data to provide a tradeoff between compression efficiency and hardware complexity, the latest of which is Block Golomb Context Copy Coding (Block GC3). In this paper, we present a modified version of Block GC3 called Block RGC3, specifically tailored to the REBL direct-write E-beam lithography system. Two characteristic features of the REBL system are a rotary stage resulting in arbitrarily-rotated layout imagery, and E-beam corrections prior to writing the data, both of which present significant challenges to lossless compression algorithms. Together, these effects reduce the effectiveness of both the copy and predict compression methods within Block GC3. Similar to Block GC3, our newly proposed technique Block RGC3, divides the image into a grid of two-dimensional "blocks" of pixels, each of which copies from a specified location in a history buffer of recently-decoded pixels. However, in Block RGC3 the number of possible copy locations is significantly increased, so as to allow repetition to be discovered along any angle of orientation, rather than horizontal or vertical. Also, by copying smaller groups of pixels at a time, repetition in layout patterns is easier to find and take advantage of. As a side effect, this increases the total number of copy locations to transmit; this is combated with an extra region-growing step, which enforces spatial coherence among neighboring copy locations, thereby improving compression efficiency. We characterize the performance of Block RGC3 in terms of compression efficiency and encoding complexity on a number of rotated Metal 1, Poly, and Via layouts at various angles, and show that Block RGC3 provides higher compression efficiency than existing lossless compression algorithms, including JPEG-LS, ZIP, BZIP2, and Block GC3.
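A greatly simplified sketch of the copy step (not Block RGC3 itself, which additionally searches along arbitrary orientations and applies region growing to the copy locations) is shown below: each block is compared against candidate positions in a history buffer of already decoded pixels, and the position that mispredicts the fewest pixels is kept.

    import numpy as np

    def best_copy_location(block, history, candidates):
        """Return the candidate offset in `history` that best reproduces `block`."""
        h, w = block.shape
        best_offset, best_errors = None, None
        for (r, c) in candidates:
            ref = history[r:r + h, c:c + w]
            errors = int(np.count_nonzero(ref != block))   # pixels the copy fails to predict
            if best_errors is None or errors < best_errors:
                best_offset, best_errors = (r, c), errors
        return best_offset, best_errors

    rng = np.random.default_rng(1)
    history = rng.integers(0, 2, size=(32, 32))
    block = history[4:8, 10:14].copy()                      # a block that repeats earlier content
    offset, errors = best_copy_location(block, history, [(0, 0), (4, 10), (8, 20)])
    print(offset, errors)                                   # best candidate and its mismatch count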
Cone-beam micro-CT system based on LabVIEW software.
Ionita, Ciprian N; Hoffmann, Keneth R; Bednarek, Daniel R; Chityala, Ravishankar; Rudin, Stephen
2008-09-01
Construction of a cone-beam computed tomography (CBCT) system for laboratory research usually requires integration of different software and hardware components. As a result, building and operating such a complex system require the expertise of researchers with significantly different backgrounds. Additionally, writing flexible code to control the hardware components of a CBCT system combined with designing a friendly graphical user interface (GUI) can be cumbersome and time consuming. An intuitive and flexible program structure, as well as the program GUI for CBCT acquisition, is presented in this note. The program was developed in National Instrument's Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) graphical language and is designed to control a custom-built CBCT system but has been also used in a standard angiographic suite. The hardware components are commercially available to researchers and are in general provided with software drivers which are LabVIEW compatible. The program structure was designed as a sequential chain. Each step in the chain takes care of one or two hardware commands at a time; the execution of the sequence can be modified according to the CBCT system design. We have scanned and reconstructed over 200 specimens using this interface and present three examples which cover different areas of interest encountered in laboratory research. The resulting 3D data are rendered using a commercial workstation. The program described in this paper is available for use or improvement by other researchers.
NASA Astrophysics Data System (ADS)
Mock, Jason R.; Beno, Joe; Rafferty, Tom H.; Cornell, Mark E.
2010-07-01
To enable the Hobby-Eberly Telescope Wide Field Upgrade, the University of Texas Center for Electromechanics and McDonald Observatory are developing a precision tracker system - a 15,000 kg robot to position a 3,100 kg payload within 10 microns of a desired dynamic track. Performance requirements to meet science needs and safety requirements that emerged from detailed Failure Modes and Effects Analysis resulted in a system of 14 precision controlled actuators and 100 additional analog and digital devices (primarily sensors and safety limit switches). This level of system complexity and emphasis on fail-safe operation is typical of large modern telescopes and numerous industrial applications. Due to this complexity, demanding accuracy requirements, and stringent safety requirements, a highly versatile and easily configurable centralized control system that easily links with modeling and simulation tools during the hardware and software design process was deemed essential. The Matlab/Simulink simulation environment, coupled with dSPACE controller hardware, was selected for controls development and realization. The dSPACE real-time operating system collects sensor information; motor commands are transmitted over a PROFIBUS network to servo amplifiers and drive motor status is received over the same network. Custom designed position feedback loops, supplemented by feed forward force commands for enhanced performance, and algorithms to accommodate self-locking gearboxes (for safety), reside in dSPACE. To interface the dSPACE controller directly to absolute Heidenhain sensors with EnDat 2.2 protocol, a custom communication board was developed. This paper covers details of software and hardware, design choices and analysis, and supporting simulations (primarily Simulink).
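A heavily simplified sketch of the control structure described (a discrete position feedback loop supplemented by a feed-forward force term, here a PD loop around a one-mass plant) is given below; the loop rate, mass, gains and sinusoidal track are illustrative assumptions, not the tracker's actual parameters:

    import math

    dt, mass = 0.001, 1000.0                    # 1 kHz loop; illustrative payload mass in kg
    kp, kd = 4.0e5, 2.8e4                       # illustrative feedback gains, not the tracker's
    amp, w = 0.001, 2 * math.pi * 0.5           # desired track: 1 mm amplitude, 0.5 Hz sinusoid

    pos, vel, prev_error = 0.0, 0.0, 0.0
    for k in range(10000):                      # 10 s of simulated tracking
        t = k * dt
        target = amp * math.sin(w * t)
        accel_ff = -amp * w * w * math.sin(w * t)         # known acceleration of the track
        error = target - pos
        derr = (error - prev_error) / dt
        prev_error = error
        force = kp * error + kd * derr + mass * accel_ff  # position feedback plus feed-forward force
        vel += (force / mass) * dt                        # integrate the toy one-mass plant
        pos += vel * dt

    print(f"final tracking error: {abs(error) * 1e6:.3f} microns")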
Design tools for complex dynamic security systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrne, Raymond Harry; Rigdon, James Brian; Rohrer, Brandon Robinson
2007-01-01
The development of tools for complex dynamic security systems is not a straightforward engineering task but, rather, a scientific task where discovery of new scientific principles and math is necessary. For years, scientists have observed complex behavior but have had difficulty understanding it. Prominent examples include: insect colony organization, the stock market, molecular interactions, fractals, and emergent behavior. Engineering such systems will be an even greater challenge. This report explores four tools for engineered complex dynamic security systems: Partially Observable Markov Decision Process, Percolation Theory, Graph Theory, and Exergy/Entropy Theory. Additionally, enabling hardware technologies for next generation security systems are described: a 100-node wireless sensor network, an unmanned ground vehicle, and an unmanned aerial vehicle.
Cellular logic array for computation of squares
NASA Technical Reports Server (NTRS)
Shamanna, M.; Whitaker, S.; Canaris, J.
1991-01-01
A cellular logic array is described for squaring binary numbers. This array offers a significant increase in speed, with a relatively small hardware overhead. This improvement is a result of a novel implementation of the formula (x + y)^2 = x^2 + y^2 + 2xy. These results can also be incorporated in the existing arrays achieving considerable hardware reduction.
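The identity can be checked directly, with x and y taken as the weighted upper half and the lower half of the operand (the cellular-array mapping itself is of course not captured by this sketch):

    def square_by_halves(value, half_bits=4):
        """Square an unsigned integer by splitting it into an upper (x) and lower (y) part."""
        x = (value >> half_bits) << half_bits      # upper half, already weighted by 2**half_bits
        y = value & ((1 << half_bits) - 1)         # lower half
        return x * x + y * y + 2 * x * y           # (x + y)**2 expanded

    assert all(square_by_halves(v) == v * v for v in range(256))
    print(square_by_halves(0b10110111), 0b10110111 ** 2)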
Neural Networks Based Approach to Enhance Space Hardware Reliability
NASA Technical Reports Server (NTRS)
Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.
2011-01-01
This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
Software Defined Radios - Architectures, Systems and Functions
NASA Technical Reports Server (NTRS)
Sims, Herb
2017-01-01
Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990s. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of the SDR technology for the modern communications market. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "One-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency (RF) components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down converting and demodulation. These innovations have reduced the cost of transceivers, decreased power requirements, and brought a commensurate reduction in volume. An additional pay-off is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering hardware logic (no change of analog hardware is required), all of which can ultimately be accomplished in orbit.
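As an illustrative sketch of the digital portion of such a receive chain (written in floating-point Python rather than hardware logic, with made-up sample rates and filter settings), a complex down-conversion followed by low-pass filtering and decimation looks like:

    import numpy as np
    from scipy import signal

    fs, f_carrier = 1.0e6, 200.0e3                 # sample rate and carrier, illustrative values
    t = np.arange(20000) / fs
    baseband_true = np.cos(2 * np.pi * 1.0e3 * t)  # a 1 kHz test message
    rf = baseband_true * np.cos(2 * np.pi * f_carrier * t)    # modulated "RF" input

    # Down-convert: mix with a complex local oscillator, low-pass filter, then decimate.
    lo = np.exp(-2j * np.pi * f_carrier * t)
    mixed = rf * lo
    taps = signal.firwin(numtaps=128, cutoff=10.0e3, fs=fs)   # low-pass FIR, 10 kHz cutoff
    filtered = signal.lfilter(taps, 1.0, mixed)
    baseband = 2.0 * filtered[::20]                # decimate by 20 -> 50 kS/s complex baseband

    print("recovered samples:", np.round(baseband[200:204].real, 2))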
ISS Microgravity Research Payload Training Methodology
NASA Technical Reports Server (NTRS)
Schlagheck, Ronald; Geveden, Rex (Technical Monitor)
2001-01-01
The NASA Microgravity Research Discipline has multiple categories of science payloads that are being planned and currently under development to operate on various ISS on-orbit increments. The current program includes six subdisciplines: Materials Science, Fluids Physics, Combustion Science, Fundamental Physics, Cellular Biology and Macromolecular Biotechnology. All of these experiment payloads will require various degrees of crew interaction and science observation. With the current programs planning to build various facility-class science racks, the crew will need to be trained on basic core operations as well as science background. In addition, many disciplines will use the Express Rack and the Microgravity Science Glovebox (MSG) to utilize the accommodations provided by these facilities for smaller and less complex hardware. The Microgravity disciplines will be responsible for a training program designed to maximize experiment and hardware throughput, as well as for preparing for contingencies, both anomalies and unexpected experiment observations. The crewmembers will need various levels of training, from simple tasks such as power-on and activation, to extensive training on hardware mode change-out, to observing the cell growth of various types of tissue cultures. Sample replacement will be required for furnaces and combustion-type modules. The Fundamental Physics program will need crew EVA support to provide module change-out of experiments. Training will take place at various research centers and hardware development locations. Onboard training is expected to be provided through various methods, including video/digital technology, as well as limited telecommunication interaction. Since hardware will be designed to operate from a few weeks to multiple research increments, flexibility must be planned in the training approach and procedure skills to optimize the output as well as the equipment maintainability. Early increment lessons learned will be addressed.
Acoustic Noise Prediction of the Amine Swingbed ISS ExPRESS Rack Payload
NASA Technical Reports Server (NTRS)
Welsh, David; Smith, Holly; Wang, Shuo
2010-01-01
Acoustics plays a vital role in maintaining the health, safety, and comfort of crew members aboard the International Space Station (ISS). In order to maintain this livable and workable environment, acoustic requirements have been established to ensure that ISS hardware and payload developers account for the acoustic emissions of their equipment and develop acoustic mitigations as necessary. These requirements are verified by an acoustic emissions test of the integrated hardware. The Amine Swingbed ExPRESS (Expedite the PRocessing of ExperimentS to Space) rack payload creates a unique challenge to the developers in that the payload hardware is transported to the ISS in phases, making an acoustic emissions test on the integrated flight hardware impossible. In addition, the payload incorporates a high back pressure fan and a diaphragm vacuum pump, which are recognized as significant and complex noise sources. In order to accurately predict the acoustic emissions of the integrated payload, the individual acoustic noise sources and paths are first characterized. These characterizations are conducted through a series of acoustic emissions tests on the individual payload components. Secondly, the individual acoustic noise sources and paths are incorporated into a virtual model of the integrated hardware. The virtual model is constructed using a hybrid method that combines Finite Element Acoustic (FEA) and Statistical Energy Analysis (SEA) techniques to predict the overall acoustic emissions. Finally, the acoustic model is validated through an acoustic characterization test performed on an acoustically similar mock-up of the flight unit. The results of the validated acoustic model are then used to assess the acoustic emissions of the flight unit and define further acoustic mitigation efforts.
Analytical Micromechanics Modeling Technique Developed for Ceramic Matrix Composites Analysis
NASA Technical Reports Server (NTRS)
Min, James B.
2005-01-01
Ceramic matrix composites (CMCs) promise many advantages for next-generation aerospace propulsion systems. Specifically, carbon-reinforced silicon carbide (C/SiC) CMCs enable higher operational temperatures and provide potential component weight savings by virtue of their high specific strength. These attributes may provide systemwide benefits. Higher operating temperatures lessen or eliminate the need for cooling, thereby reducing both fuel consumption and the complex hardware and plumbing required for heat management. This, in turn, lowers system weight, size, and complexity, while improving efficiency, reliability, and service life, resulting in overall lower operating costs.
NASA Astrophysics Data System (ADS)
Riggs, William R.
1994-05-01
SHARP is a Navy wide logistics technology development effort aimed at reducing the acquisition costs, support costs, and risks of military electronic weapon systems while increasing the performance capability, reliability, maintainability, and readiness of these systems. Lower life cycle costs for electronic hardware are achieved through technology transition, standardization, and reliability enhancement to improve system affordability and availability as well as enhancing fleet modernization. Advanced technology is transferred into the fleet through hardware specifications for weapon system building blocks of standard electronic modules, standard power systems, and standard electronic systems. The product lines are all defined with respect to their size, weight, I/O, environmental performance, and operational performance. This method of defining the standard is very conducive to inserting new technologies into systems using the standard hardware. This is the approach taken thus far in inserting photonic technologies into SHARP hardware. All of the efforts have been related to module packaging; i.e. interconnects, component packaging, and module developments. Fiber optic interconnects are discussed in this paper.
Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel
2016-10-01
The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementation of the Smith-Waterman algorithm on the corresponding hardware architecture, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
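The splitting strategy itself can be sketched independently of the device-specific Smith-Waterman engines. In the sketch below, run_on_device is a hypothetical stand-in for the CPU, GPU and coprocessor implementations (it fakes a score by shared k-mer counting purely so the example runs); the real gain comes from the chunks executing on non-overlapping hardware at the same time.

    from concurrent.futures import ThreadPoolExecutor

    def run_on_device(device, query, chunk):
        """Hypothetical stand-in for a device-specific Smith-Waterman search over one chunk."""
        # `device` would select the real back end; here a toy k-mer count keeps the sketch runnable.
        kmers = {query[i:i + 3] for i in range(len(query) - 2)}
        return [(name, sum(seq.count(k) for k in kmers)) for name, seq in chunk]

    def search(query, database, devices):
        chunks = [database[i::len(devices)] for i in range(len(devices))]   # round-robin split
        with ThreadPoolExecutor(max_workers=len(devices)) as pool:
            partial = pool.map(run_on_device, devices, [query] * len(devices), chunks)
        hits = [hit for chunk_hits in partial for hit in chunk_hits]        # merge per-chunk results
        return sorted(hits, key=lambda h: h[1], reverse=True)

    database = [("seq%d" % i, "ACGTGCA" * (i + 1)) for i in range(8)]
    print(search("TGCAACG", database, devices=["cpu", "gpu", "phi"])[:3])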
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configuration at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
Software engineering as an engineering discipline
NASA Technical Reports Server (NTRS)
Freedman, Glenn B.
1988-01-01
The purpose of this panel is to explore the emerging field of software engineering from a variety of perspectives: university programs; industry training and definition; government development; and technology transfer. In doing this, the panel will address the issues of distinctions among software engineering, computer science, and computer hardware engineering as they relate to the challenges of large, complex systems.
Can a biologist fix a smartphone?-Just hack it!
Kamoun, Sophien
2017-05-08
Biological systems integrate multiscale processes and networks and are, therefore, viewed as difficult to dissect. However, because of the clear-cut separation between the software code (the information encoded in the genome sequence) and hardware (organism), genome editors can operate as software engineers to hack biological systems without any particularly deep understanding of the complexity of the systems.
2015-02-03
requiring the least hardware investment is to localize by received signal strength [1, 4, 5]. Because our intended scenario of low-complexity...MHz were taken with a spectrum analyzer program on the USRP and a range finder was used to measure the distance between the emitter and sensor
Automation of checkout for the shuttle operations era
NASA Technical Reports Server (NTRS)
Anderson, J. A.; Hendrickson, K. O.
1985-01-01
The Space Shuttle checkout is different from its Apollo predecessor. The complexity of the hardware, the shortened turnaround time, and the software that performs ground checkout are outlined. New techniques and standards for software development, and the management structure to control them, are described. The utilization of computer systems for vehicle testing is highlighted.
2017-04-19
A display at the Kennedy Space Center Visitor Complex describes the purpose of Swarmies. Computer scientists are developing these robots, focusing not so much on the hardware as on the software. In the spaceport's annual Swarmathon, students from 12 colleges and universities across the nation were invited to develop software code to operate Swarmies to help find resources when astronauts explore distant planets, such as Mars.
Interactive Videodisc: An Emerging Technology for Educators. ERIC Digest.
ERIC Educational Resources Information Center
Grabowski, Barbara L.
Interactive video can be a very complex learning system, or it can be a simple tool for teachers to use to enhance their instruction. The term has been used broadly in the literature and includes three major aspects: (1) interactive video as storage; (2) interactive video as hardware; and (3) interactive video as learning concept. This digest…
Optical Computing Based on Neuronal Models
1988-05-01
walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled...collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is...with architectures, implementations, and programming; and material research is called for. Our future research in neurodynamics will continue to
Payload crew training complex simulation engineer's handbook
NASA Technical Reports Server (NTRS)
Shipman, D. L.
1984-01-01
The Simulation Engineer's Handbook is a guide for new engineers assigned to Experiment Simulation and a reference for engineers previously assigned. The experiment simulation process, development of experiment simulator requirements, development of experiment simulator hardware and software, and the verification of experiment simulators are discussed. The training required for experiment simulation is extensive and is only referenced in the handbook.
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek
2017-04-01
There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework, which provides understandable code for domain scientists while being runtime efficient, poses several challenges for its developers. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware with whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using the new Fern library (https://github.com/geoneric/fern/), an independent generic raster processing library. Fern is a highly generic software library and its algorithms can be configured according to the configuration of a modelling framework. With manageable programming effort (e.g. matching data types between programming and domain language) we created a binding between Fern and PCRaster. The resulting PCRaster Python multicore module can be used to execute existing PCRaster models without having to make any changes to the model code. We show initial results on synthetic and geoscientific models indicating significant runtime improvements provided by parallel local and focal operations. We further outline challenges in improving remaining algorithms such as flow operations over digital elevation maps and further potential improvements like enhancing disk I/O.
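As a toy illustration of the kind of building block being parallelised (not the Fern or PCRaster implementation, whose APIs are not reproduced here), a 3x3 focal mean over a raster split into row strips with halo rows can be dispatched to several processes as follows:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def focal_mean_block(padded_block):
        """3x3 focal mean of the interior of a block that already carries a 1-cell halo."""
        rows, cols = padded_block.shape
        out = np.empty((rows - 2, cols - 2))
        for i in range(rows - 2):
            for j in range(cols - 2):
                out[i, j] = padded_block[i:i + 3, j:j + 3].mean()
        return out

    def parallel_focal_mean(raster, n_workers=4):
        padded = np.pad(raster, 1, mode="edge")
        bounds = np.linspace(0, raster.shape[0], n_workers + 1, dtype=int)
        blocks = [padded[a:b + 2, :] for a, b in zip(bounds[:-1], bounds[1:])]  # row strips with halo
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            strips = list(pool.map(focal_mean_block, blocks))
        return np.vstack(strips)

    if __name__ == "__main__":
        dem = np.random.default_rng(2).random((200, 200))
        print(parallel_focal_mean(dem).shape)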
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
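The discretization step at the centre of the study can be illustrated in a few lines; the linear mapping of weights in [0, w_max] onto 16 evenly spaced levels below is an assumption made for the sketch and is not the FACETS hardware's own weight mapping:

    import numpy as np

    def discretize_weights(weights, bits=4, w_max=1.0):
        """Map continuous weights in [0, w_max] onto 2**bits evenly spaced hardware levels."""
        levels = 2 ** bits - 1
        codes = np.clip(np.rint(weights / w_max * levels), 0, levels)    # integer codes 0..15
        return codes.astype(int), codes / levels * w_max                 # codes and realised weights

    rng = np.random.default_rng(0)
    w = rng.random(5)
    codes, realised = discretize_weights(w)
    print(np.round(w, 3), codes, np.round(realised, 3))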
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies for modeling control surface deflections and material mapping are also addressed.
An Identification Procedure for Behavioral Analysis in a Multi-User environment.
Guerra, Claudio; Bianchi, Valentina; De Munari, Ilaria; Ciampolini, Paolo
2015-01-01
As the average age of the EU population increases, ICT solutions are going to play a key role in answering the new challenges that demographic change brings. At the University of Parma an AAL (Ambient Assisted Living) system named CARDEA has been developed during the last 10 years. Within CARDEA, behavioral analysis is carried out, based on environmental sensors. If multiple users live in the same environment, however, data coming from sensors need to be properly tagged: in this paper, a simple technique for such tagging is proposed, which exploits the same wireless transmission used for transmitting data, thus not requiring additional hardware components and avoiding more complex and expensive (radio)localization techniques. Preliminary results are shown, featuring satisfactory accuracy.
Software Reliability Issues Concerning Large and Safety Critical Software Systems
NASA Technical Reports Server (NTRS)
Kamel, Khaled; Brown, Barbara
1996-01-01
This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.
Solonin, Iu G; Markov, A L; Boĭko, E R; Potolitsyna, N N; Parshukova, O I
2014-01-01
Seventeen male northerners participating in the satellite experiments of the "Mars-500" project underwent morphological, physiometric, psychological and biochemical examinations. The prenosological health indices in different seasons were calculated using the hardware-software complex "Ecosan-2007". Seasonal sinusoidal fluctuations were detected for thermoregulation (body and skin temperature), lipid metabolism (cholesterol, HDL and LDL levels in the blood), and circulation regulation under physical exercise (the increase of the "double product" and its recovery time). In the majority of the participants, unfavorable deviations (in comparison with mid-latitude standards) were found in body mass index, the "power" and "life" indices, simple visual-motor reaction time, the Kerdo vegetative index, physical health level and the regulatory systems activity index.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analyzed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
A simulation framework for the CMS Track Trigger electronics
NASA Astrophysics Data System (ADS)
Amstutz, C.; Magazzù, G.; Weber, M.; Palla, F.
2015-03-01
A simulation framework has been developed to test and characterize algorithms, architectures and hardware implementations of the vastly complex CMS Track Trigger for the high luminosity upgrade of the CMS experiment at the Large Hadron Collider in Geneva. High-level SystemC models of all system components have been developed to simulate a portion of the track trigger. The simulation of the system components together with input data from physics simulations allows evaluating figures of merit, like delays or bandwidths, under realistic conditions. The use of SystemC for high-level modelling allows co-simulation with models developed in Hardware Description Languages, e.g. VHDL or Verilog. Therefore, the simulation framework can also be used as a test bench for digital modules developed for the final system.
Navier-Stokes flow field analysis of compressible flow in a high pressure safety relief valve
NASA Technical Reports Server (NTRS)
Vu, Bruce; Wang, Ten-See; Shih, Ming-Hsin; Soni, Bharat
1993-01-01
The objective of this study is to investigate the complex three-dimensional flowfield of an oxygen safety pressure relief valve during an incident, with a computational fluid dynamic (CFD) analysis. Specifically, the analysis will provide a flow pattern that would lead to the expansion of the eventual erosion pattern of the hardware, so as to combine it with other findings to piece together a most likely scenario for the investigation. The CFD model is a pressure-based solver. An adaptive upwind difference scheme is employed for the spatial discretization, and a predictor, multiple-corrector method is used for the velocity-pressure coupling. The computational result indicated vortex formation near the opening of the valve, which matched the erosion pattern of the damaged hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.
[Development of a portable ambulatory ECG monitor based on embedded microprocessor unit].
Wang, Da-xiong; Wang, Guo-jun
2005-06-01
To develop a new kind of portable ambulatory ECG monitor. The hardware and software were designed based on RCA-CDP1802. New methods of ECG data compression and feature extraction of QRS complexes were applied to software design. A model for automatic arrhythmia analysis was established for real-time ambulatory ECG Data analysis. Compact, low power consumption and low cost were emphasized in the hardware design. This compact and light-weight monitor with low power consumption and high intelligence was capable of real-time monitoring arrhythmia for more than 48 h. More than ten types of arrhythmia could be detected, only the compressed abnormal ECG data was recorded and could be transmitted to the host if required. The monitor meets the design requirements and can be used for ambulatory ECG monitoring.
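A sketch of the kind of QRS feature extraction such a monitor performs is given below, in floating-point Python rather than the monitor's embedded firmware; the differentiate-square-average-threshold chain and all constants are illustrative assumptions, not the method actually implemented in the device.

    import numpy as np

    def detect_qrs(ecg, fs=250, threshold_ratio=0.4, refractory_s=0.25):
        """Very simple QRS detector: differentiate, square, moving-average, threshold."""
        diff = np.diff(ecg)
        energy = np.convolve(diff ** 2, np.ones(int(0.08 * fs)) / (0.08 * fs), mode="same")
        threshold = threshold_ratio * energy.max()
        peaks, last = [], -fs
        for i, e in enumerate(energy):
            if e > threshold and (i - last) > refractory_s * fs:   # enforce a refractory period
                peaks.append(i)
                last = i
        return peaks

    fs = 250
    t = np.arange(0, 10, 1 / fs)
    ecg = np.zeros_like(t)
    ecg[(np.arange(12) * fs * 0.8).astype(int)] = 1.0      # impulse "R waves" every 0.8 s (75 bpm)
    print("beats found:", len(detect_qrs(ecg, fs)))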
Research on the adaptive optical control technology based on DSP
NASA Astrophysics Data System (ADS)
Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun
2018-02-01
Adaptive optics is a real-time compensation technique, supported by high-speed hardware, for wavefront errors caused by atmospheric turbulence. However, the randomness and rapidity of atmospheric change introduce great difficulties into the design of adaptive optical systems. A large number of complex real-time operations lead to large delays, which is a major obstacle. To address this problem, a hardware-based computation and parallel processing strategy is proposed, and a high-speed adaptive optical control system based on DSP is developed. A hardware counter is used to check the system timing. The results show that the system can complete one closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments were carried out with good results.
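A sketch of the closed-loop computation that such a controller must finish within each cycle (slope measurement, matrix-vector reconstruction, integrator update) is given below; the dimensions, loop gain and random interaction matrix are illustrative assumptions, not the DSP system's parameters:

    import numpy as np

    rng = np.random.default_rng(3)
    n_slopes, n_actuators = 64, 32
    interaction = rng.standard_normal((n_slopes, n_actuators))      # corrector-to-slope response
    reconstructor = np.linalg.pinv(interaction)                     # command (reconstruction) matrix
    gain = 0.4                                                      # integrator gain

    disturbance = interaction @ rng.standard_normal(n_actuators)    # a static aberration, in slope space
    commands = np.zeros(n_actuators)
    for step in range(20):                                          # closed-loop iterations
        residual_slopes = disturbance - interaction @ commands      # what the wavefront sensor would measure
        commands += gain * (reconstructor @ residual_slopes)        # integrate the reconstructed commands

    print("residual RMS:", np.linalg.norm(disturbance - interaction @ commands))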
Emerging Techniques for Field Device Security
Schwartz, Moses; Bechtel Corp.; Mulder, John; ...
2014-11-01
Critical infrastructure, such as electrical power plants and oil refineries, relies on embedded devices to control essential processes. State of the art security is unable to detect attacks on these devices at the hardware or firmware level. We provide an overview of the hardware used in industrial control system field devices, look at how these devices have been attacked, and discuss techniques and new technologies that may be used to secure them. We follow three themes: (1) Inspectability, the capability for an external arbiter to monitor the internal state of a device. (2) Trustworthiness, the degree to which a system will continue to function correctly despite disruption, error, or attack. (3) Diversity, the use of adaptive systems and complexity to make attacks more difficult by reducing the feasible attack surface.
Thermal control extravehicular life support system
NASA Technical Reports Server (NTRS)
1975-01-01
The results of a comprehensive study which defined an Extravehicular Life Support System Thermal Control System (TCS) are presented. The design of the prototype hardware and a detail summary of the prototype TCS fabrication and test effort are given. Several heat rejection subsystems, water management subsystems, humidity control subsystems, pressure control schemes and temperature control schemes were evaluated. Alternative integrated TCS systems were studied, and an optimum system was selected based on quantitative weighing of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, bubble expansion tank for water management, a slurper and rotary separator for humidity control, and a pump, a temperature control valve, a gas separator and a vehicle umbilical connector for water transport. The prototype hardware complied with program objectives.
Streamlining Payload Integration
NASA Technical Reports Server (NTRS)
Lufkin, Susan N.
2010-01-01
Payload integration onto space transport vehicles and the International Space Station (ISS) is a complex process. Yet, cargo transport is the sole reason for any space mission, be it for ferrying humans, science, or hardware. As the largest such effort in history, the ISS offers a wide variety of payload experience. However, for any payload to reach the Space Station under the current process, Payload Developers face a list of daunting tasks that go well beyond just designing the payload to the constraints of the transport vehicle and its stowage topology. Payload customers are required to prove their payload's functionality, structural integrity, and safe integration - including under less than nominal situations. They must also plan for or provide training, procedures, hardware labeling, ground support, and communications. In addition, they must deal with negotiating shared consumables, integrating software, obtaining video, and coordinating the return of data and hardware. All the while, they must meet export laws, launch schedules, budget limits, and the consensus of more than 12 panel and board reviews. Despite the cost and infrastructure overhead, payload proposals have increased. Just in the span from FY08 to FY09, the NASA Payload Space Station Support Office budget rose from $78M to $96M in an attempt to manage the growing manifest, but the potential number of payloads still exceeds available Payload Integration Management manpower. The growth has also increased management difficulties due to the fact that payloads are more frequently added to a flight schedule late in the flow. The current standard ISS template for payload integration from concept to payload turn-over is 36 months, or 18 months if the payload already has a preliminary design. Customers are increasingly requiring a turn-around of 3 to 6 months to meet market needs. The following paper suggests options for streamlining the current payload integration process in order to meet customer schedule needs and reduce costs for both the integration support teams and the developers, without reducing quality or compromising safety. Issues for the key integration areas of planning, training, verification, and safety are presented in a Root-Cause Analysis study, with plausible solutions provided that involve technology and tools already available to the ISS community. Although based upon the ISS process, the payload integration techniques outlined herein also offer an integration template for any space transport endeavor.
High-Gain Airborne Microphone Windscreen Characterization Method Using Modified Research Wind Tunnel
NASA Astrophysics Data System (ADS)
Banks, Joseph Andrew
In recent years, UAS (unmanned aerial systems) have gained improved functionality by integrating advanced cameras, sensors, and hardware systems; however, UAS still lack effective means to detect and record audio signals. This is partially due to the physical scale of hardware and complexity of that hardware's integration into UAS. The current study is part of a larger research effort to integrate a high-gain parabolic microphone into a UAV (unmanned aerial vehicle) for use in acoustic surveying. Due to the aerodynamic interaction between a flush mounted parabolic antenna and the free-stream grazing flow, it is necessary to fair the antenna into the aircraft using a windscreen. The current study develops a characterization method by which various windscreen designs and configurations can be optimized. This method measures a candidate windscreen's normal incidence sound transmission loss (STL) as well as the increase of hydrodynamic noise generated by its installation at a range of flow speeds. A test apparatus was designed and installed on the Low Speed Wind Tunnel at Oklahoma State University. The test apparatus utilizes a "quiet box" attached to the wind tunnel test section floor. A pass-through window between the wind tunnel test section and the quiet box allows candidate wind screens to be mounted between the two environments. Microphones mounted both in the wind tunnel test section, and within the quiet box record the acoustic spectrum at various flow speeds, ranging between 36 and 81 feet per second. A tensioned Kevlar® wind screen validation specimen was fabricated to validate system performance. The STL spectrum is measured based on comparing the signal from microphones on either side of the Kevlar® membrane. The results for normal incidence STL for the flow off scenario are compared to results presented in other studies for the same material under tension. Flow-on transmission loss spectral data along with the increase in flow noise caused by the membrane is also measured at several flow speeds. The system has been shown to produce STL data consistent with the reference data for flow-on and flow-off test configurations, as well as being able to detect the increase in flow-induced noise generated by the validation specimen windscreen.
NASA Technical Reports Server (NTRS)
Feher, Kamilo
1993-01-01
The performance and implementation complexity of coherent and of noncoherent QPSK and GMSK modulation/demodulation techniques in a complex mobile satellite systems environment, including large Doppler shift, delay spread, and low C/I, are compared. We demonstrate that for large f_d T_b products, where f_d is the Doppler shift and T_b is the bit duration, noncoherent (discriminator detector or differential demodulation) systems have a lower BER floor than their coherent counterparts. For significant delay spreads, e.g., tau_rms greater than 0.4 T_b, and low C/I, coherent systems outperform noncoherent systems. However, the synchronization time of coherent systems is longer than that of noncoherent systems. Spectral efficiency, overall capacity, and related hardware complexity issues of these systems are also analyzed. We demonstrate that coherent systems have a simpler overall architecture (IF filter implementation cost versus carrier recovery) and are more robust in an RF frequency drift environment. Additionally, the prediction tools, computer simulations, and analysis of coherent systems are simpler. The threshold or capture effect in a low C/I interference environment is critical for noncoherent discriminator-based systems. We conclude with a comparison of hardware architectures of coherent and of noncoherent systems, including recent trends in commercial VLSI technology and direct baseband-to-RF transmit, RF-to-baseband (0-IF) receiver implementation strategies.
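For orientation, the f_d T_b product that determines which detector family wins can be computed directly from the link geometry. The numbers below are illustrative assumptions (L-band carrier, highway vehicle speed, 4800 bit/s), not values taken from the paper.

```python
# Back-of-envelope f_d * T_b for a mobile satellite link.  All numbers are
# illustrative assumptions, not figures from the cited study.
c = 3.0e8          # speed of light, m/s
f_c = 1.6e9        # L-band carrier frequency, Hz
v = 30.0           # vehicle speed, m/s (about 108 km/h)
bit_rate = 4800.0  # bits per second

f_d = v * f_c / c          # Doppler shift, Hz
T_b = 1.0 / bit_rate       # bit duration, s
print(f"f_d = {f_d:.0f} Hz, f_d*T_b = {f_d * T_b:.3f}")   # 160 Hz, 0.033
```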
ERIC Educational Resources Information Center
Cohen, Linda M.
Significant advances in hardware and software have increased the possible applications for educating limited-English-proficient students. Two major studies have focused on technology and its relationship to the education of this population. There is a great diversity of hardware currently in use in schools, either stand-alone computers or computer…
Streamlining Simulation Development using a Commercial Game Engine
2009-10-01
few years. The realism is stunning and the Commercial Game Industry fuels the fire of cutting-edge advances in hardware and immersive experiences... Technology applies to Military training in more than just the obvious upgrades in game engines and hardware. The increased visual realism and performance... elaborate storytelling and cinematic effects provide a more immersive and compelling experience to the player. The underlying game engine technology
2005-12-01
Logistics Support Activity MIT Massachusetts Institute of Technology MRE Meals-Ready-to-Eat MRO Material Release Order MSC Military Sealift... increase once item tagging becomes mandated. Reader, middleware, specialized hardware, and physical infrastructure costs can add up too. Reader's... that system integration revenues could surpass hardware by 2007 (Asif & Mandviwalla, 2005, p. 26). (4) Training. Training is another challenge
Lasercom system architecture with reduced complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homayoon (Inventor)
1994-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention, a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides the means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
LaserCom System Architecture With Reduced Complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homa-Yoon (Inventor)
1996-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis on and benefits of distributed generation in smaller scale networks have prompted much attention and research in this field. Much of the research growth in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important, in which an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, a simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed by using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real, hardware-developed converters consistent with the Navy's All-Electric-Ship proposed power system, used to perform various tests on controls and stability under the expected non-linear load environment of the Navy weaponry. The test bed can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. In this thesis, the test bed is utilized to perform and validate this newly developed method of generator system emulation. The dynamics of a high speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed that emulates and reproduces these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed, which can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can also integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the drawbacks of needing physical generator units. Some of these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation and explains the process and the steps necessary to achieve such an objective. It also demonstrates real results and verification of numerical values in real time. The final goal of this thesis is to introduce this new idea and show that it is in fact achievable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
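The core idea behind the partitioned block algorithms is to split a long filter into equal partitions and apply each partition in the frequency domain, so that no single transform has to cover the full filter length. The sketch below shows only that underlying uniformly partitioned overlap-save convolution, not the adaptive FPBFXLMS update itself; the block size and signal lengths are arbitrary choices for the demonstration.

```python
import numpy as np

def partitioned_fft_convolve(x, h, block):
    """Uniformly partitioned overlap-save convolution: the long filter h is
    split into partitions of `block` taps, each applied in the frequency
    domain against a frequency-domain delay line of past input blocks."""
    P = int(np.ceil(len(h) / block))
    h = np.pad(h, (0, P * block - len(h)))
    H = np.array([np.fft.rfft(h[p * block:(p + 1) * block], 2 * block)
                  for p in range(P)])
    fdl = np.zeros((P, H.shape[1]), dtype=complex)   # frequency-domain delay line
    x = np.pad(x, (0, (-len(x)) % block))
    y = np.zeros(len(x))
    prev = np.zeros(block)
    for n in range(0, len(x), block):
        cur = x[n:n + block]
        fdl = np.roll(fdl, 1, axis=0)
        fdl[0] = np.fft.rfft(np.concatenate([prev, cur]))   # last 2*block samples
        y[n:n + block] = np.fft.irfft((fdl * H).sum(axis=0))[block:]  # keep last block
        prev = cur
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
h = rng.standard_normal(256)                  # "long" filter: 4 partitions of 64 taps
y = partitioned_fft_convolve(x, h, block=64)
print("max error vs direct convolution:",
      np.abs(y - np.convolve(x, h)[:len(y)]).max())
```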
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
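The combination the authors describe, feeding classifiers with abnormality information, can be illustrated with a toy example in which per-metric z-scores relative to statistics from known-good runs are appended as extra features before training. This is a hedged illustration of the general idea with fabricated data, not the paper's technique or its experimental setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-process metrics (e.g., CPU, memory, I/O rates).
# Label 0 = healthy run, 1 = faulty run.  Purely illustrative data.
X_normal = rng.normal(0.0, 1.0, size=(500, 6))
X_faulty = rng.normal(0.0, 1.0, size=(200, 6))
X_faulty[:, 2] += 4.0                      # one metric drifts under the fault
X = np.vstack([X_normal, X_faulty])
y = np.array([0] * 500 + [1] * 200)

# "Abnormality enhancement": z-scores of each metric against statistics from
# known-good runs are appended as extra features before classification.
mu, sigma = X_normal.mean(axis=0), X_normal.std(axis=0)
abnormality = np.abs((X - mu) / sigma)
X_enhanced = np.hstack([X, abnormality])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_enhanced, y)
print("training accuracy:", clf.score(X_enhanced, y))
```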
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPP) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgradeability attract developers to move video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then a number of high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup tables are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
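One concrete instance of the lookup-table idea is replacing per-pixel clamping arithmetic in the reconstruction path with a precomputed table. The sketch below is only an illustration of that generic trick; the table size and margin are assumptions, not details from the paper.

```python
import numpy as np

# Precomputed clipping table: avoids branching/min-max on every pixel.
MARGIN = 512
clip_table = np.clip(np.arange(-MARGIN, 256 + MARGIN), 0, 255).astype(np.uint8)

def reconstruct(prediction, residual):
    """Clamp prediction + residual to [0, 255] via table lookup."""
    return clip_table[prediction.astype(np.int32) + residual + MARGIN]

pred = np.array([[10, 250], [128, 0]], dtype=np.int32)
res = np.array([[-30, 20], [5, -7]], dtype=np.int32)
print(reconstruct(pred, res))   # values clamped to the valid 8-bit range
```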
Open-Source 3D-Printable Optics Equipment
Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.
2013-01-01
Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods. PMID:23544104
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.; Roche, Rigoberto
2017-01-01
The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS-compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.04 operating system. Figure 1 shows the RIACS platform hardware. The result of this development is a very low cost STRS-compliant platform that can be used for waveform development for multiple applications. The purpose of this document is to describe how to develop a new waveform using the RIACS platform, the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) FPGA wrapper code, and the STRS implementation on the Axiomtek processor.
Hardware implementation of fuzzy Petri net as a controller.
Gniewek, Lesław; Kluska, Jacek
2004-06-01
The paper presents a new approach to the fuzzy Petri net (FPN) and its hardware implementation. The authors' motivation is as follows. Complex industrial processes can often be decomposed into many subprocesses working in parallel, which can, in turn, be modeled using Petri nets. If all the process variables (or events) are assumed to be two-valued signals, then it is possible to obtain a hardware or software control device which works according to the algorithm described by a conventional Petri net. However, the values of real signals are contained in some bounded interval and can be interpreted as events which are not only true or false, but rather true to some degree from the interval [0, 1]. Such a natural interpretation from a multivalued logic (fuzzy logic) point of view concerns sensor outputs, control signals, time expiration, etc. It leads to the idea of the FPN as a controller, which one can obtain rather simply, and which would be able to process both analog and binary signals. In the paper both graphical and algebraic representations of the proposed FPN are given. The conditions under which transitions can be fired are described. The algebraic description of the net and a theorem which enables computation of a new marking in the net, based on the current marking, are formulated. A hardware implementation of the FPN, which uses fuzzy JK flip-flops and fuzzy gates, is proposed. An example illustrating the usefulness of the proposed FPN for control algorithm description and its synthesis as a controller device for a concrete production process is presented.
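A minimal sketch of the kind of marking update such a net performs is given below, using common fuzzy-logic conventions (min over input places for a transition's enabling degree, max when depositing into output places, inputs consumed on firing). These operators and the toy net are assumptions for illustration, not necessarily the exact algebra proposed in the paper.

```python
import numpy as np

def fire(marking, pre, post, threshold=0.0):
    """One marking-update step of a simple fuzzy Petri net.
    marking  : fuzzy marking of each place, values in [0, 1]
    pre/post : incidence matrices (transitions x places), 1 where an arc exists
    A transition is enabled to the min() of its input-place markings; enabled
    transitions deposit that degree into their output places via max()."""
    new = marking.copy()
    for t in range(pre.shape[0]):
        inputs = marking[pre[t] > 0]
        degree = inputs.min() if inputs.size else 0.0
        if degree > threshold:
            new[post[t] > 0] = np.maximum(new[post[t] > 0], degree)
            new[pre[t] > 0] = 0.0           # consume input tokens
    return new

# Two places feed one transition into a third place.
marking = np.array([0.8, 0.6, 0.0])
pre  = np.array([[1, 1, 0]])
post = np.array([[0, 0, 1]])
print(fire(marking, pre, post))   # -> [0.  0.  0.6]
```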
Vapor Compression Distillation Urine Processor Lessons Learned from Development and Life Testing
NASA Technical Reports Server (NTRS)
Hutchens, Cindy F.; Long, David A.
1999-01-01
Vapor Compression Distillation (VCD) is the chosen technology for urine processing aboard the International Space Station (ISS). Development and life testing over the past several years have brought to the forefront problems and solutions for the VCD technology. Testing between 1992 and 1998 has been instrumental in developing estimates of hardware life and reliability. It has also helped improve the hardware design in ways that either correct existing problems or enhance the existing design of the hardware. The testing has increased confidence in the VCD technology and reduced technical and programmatic risks. This paper summarizes the test results and the changes that have been made to the VCD design.
Using CLIPS to represent knowledge in a VR simulation
NASA Technical Reports Server (NTRS)
Engelberg, Mark L.
1994-01-01
Virtual reality (VR) is an exciting use of advanced hardware and software technologies to achieve an immersive simulation. Until recently, the majority of virtual environments were merely 'fly-throughs' in which a user could freely explore a 3-dimensional world or a visualized dataset. Now that the underlying technologies are reaching a level of maturity, programmers are seeking ways to increase the complexity and interactivity of immersive simulations. In most cases, interactivity in a virtual environment can be specified in the form 'whenever such-and-such happens to object X, it reacts in the following manner.' CLIPS and COOL provide a simple and elegant framework for representing this knowledge-base in an efficient manner that can be extended incrementally. The complexity of a detailed simulation becomes more manageable when the control flow is governed by CLIPS' rule-based inference engine as opposed to by traditional procedural mechanisms. Examples in this paper will illustrate an effective way to represent VR information in CLIPS, and to tie this knowledge base to the input and output C routines of a typical virtual environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; Greenwald, Martin; Kleese van Dam, Kerstin
Scientific user facilities—particle accelerators, telescopes, colliders, supercomputers, light sources, sequencing facilities, and more—operated by the U.S. Department of Energy (DOE) Office of Science (SC) generate ever increasing volumes of data at unprecedented rates from experiments, observations, and simulations. At the same time there is a growing community of experimentalists that require real-time data analysis feedback, to enable them to steer their complex experimental instruments to optimized scientific outcomes and new discoveries. Recent efforts in DOE-SC have focused on articulating the data-centric challenges and opportunities facing these science communities. Key challenges include difficulties coping with data size, rate, and complexity in the context of both real-time and post-experiment data analysis and interpretation. Solutions will require algorithmic and mathematical advances, as well as hardware and software infrastructures that adequately support data-intensive scientific workloads. This paper presents the summary findings of a workshop held by DOE-SC in September 2015, convened to identify the major challenges and the research that is needed to meet those challenges.
Ultrasonic phased array controller for hyperthermia applications.
Benkeser, P J; Pao, T L; Yoon, Y J
1991-01-01
Multiple and mechanically scanned ultrasound transducer systems have demonstrated the efficacy of using ultrasound to produce deep localized hyperthermia. The use of ultrasonic phased arrays has been proposed as an alternative to these systems. A phased array offers a more flexible approach to heating tumours in that the size, shape, and position of its focal region can be altered during the course of treatment in order to achieve the desired temperature distribution. This added flexibility comes at the cost of increased complexity of the hardware necessary to drive the transducer, because each element requires its own amplifier with both phase and amplitude control. In order for phased arrays with large numbers of elements to be feasible for hyperthermia applications, the complexity of this circuitry must be minimized. This paper describes a circuit design which simplifies the electronics required to control a phased array transducer system for hyperthermia applications. The design is capable of controlling virtually any type of phased array transducer operating at frequencies less than 2 MHz. The system performance was verified through beam profile measurements using a 48-element tapered phased array transducer.
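The per-element phase commands such a controller must produce follow from simple geometry: each element's path length to the desired focus sets its delay, and the delay maps to a phase at the drive frequency. The sketch below assumes a linear array, a 1 MHz drive, and a tissue sound speed of 1540 m/s purely for illustration; it is not the paper's circuit or control scheme.

```python
import numpy as np

def focusing_phases(n_elements, pitch, focus_xz, f0, c=1540.0):
    """Per-element phase offsets (radians) that focus a linear ultrasound
    array at a point.  pitch and focus coordinates in metres, f0 in Hz,
    c = assumed speed of sound in tissue."""
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch   # element positions
    fx, fz = focus_xz
    distances = np.hypot(fx - x, fz)                 # element-to-focus path lengths
    delays = (distances.max() - distances) / c       # fire the farthest elements first
    return 2 * np.pi * f0 * delays

phases = focusing_phases(n_elements=48, pitch=2.0e-3, focus_xz=(0.0, 0.08), f0=1.0e6)
print(np.degrees(phases[:5]).round(1))   # phase commands for the first 5 channels
```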
Serendipitous Offline Learning in a Neuromorphic Robot.
Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg
2016-01-01
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
Li, Y; Nielsen, P V
2011-12-01
There has been a rapid growth of scientific literature on the application of computational fluid dynamics (CFD) in the research of ventilation and indoor air science. With a 1000-10,000 times increase in computer hardware capability in the past 20 years, CFD has become an integral part of scientific research and engineering development of complex air distribution and ventilation systems in buildings. This review discusses the major and specific challenges of CFD in terms of turbulence modelling, numerical approximation, and boundary conditions relevant to building ventilation. We emphasize the growing need for CFD verification and validation, suggest ongoing needs for analytical and experimental methods to support the numerical solutions, and discuss the growing capacity of CFD in opening up new research areas. We suggest that CFD has not become a replacement for experiment and theoretical analysis in ventilation research, rather it has become an increasingly important partner. We believe that an effective scientific approach for ventilation studies is still to combine experiments, theory, and CFD. We argue that CFD verification and validation are becoming more crucial than ever as more complex ventilation problems are solved. It is anticipated that ventilation problems at the city scale will be tackled by CFD in the next 10 years. © 2011 John Wiley & Sons A/S.
NASA Technical Reports Server (NTRS)
Walls, Laurie K.; Kirk, Daniel; deLuis, Kavier; Haberbusch, Mark S.
2011-01-01
As space programs increasingly investigate various options for long-duration space missions, the accurate prediction of propellant behavior over long periods of time in a microgravity environment has become increasingly imperative. This has driven the development of a detailed, physics-based understanding of slosh behavior of cryogenic propellants over a range of conditions and environments that are relevant for rocket and space storage applications. Recent advancements in computational fluid dynamics (CFD) models and hardware capabilities have enabled the modeling of complex fluid behavior in a microgravity environment. Historically, launch vehicles with moderate-duration upper stage coast periods have contained very limited instrumentation to quantify propellant stratification and boil-off in these environments, thus the ability to benchmark these complex computational models is of great consequence. To benchmark enhanced CFD models, recent work focuses on establishing an extensive experimental database of liquid slosh under a wide range of relevant conditions. In addition, a mass gauging system specifically designed to provide high-fidelity measurements of both liquid stratification and liquid/ullage position in a microgravity environment has been developed. This publication will summarize the various experimental programs established to produce this comprehensive database and the unique flight measurement techniques.
FPGA in-the-loop simulations of cardiac excitation model under voltage clamp conditions
NASA Astrophysics Data System (ADS)
Othman, Norliza; Adon, Nur Atiqah; Mahmud, Farhanahani
2017-01-01
The voltage clamp technique allows the detection of single-channel currents in biological membranes, helping to identify a variety of electrophysiological problems at the cellular level. In this paper, a simulation study of the voltage clamp technique is presented to analyse the current-voltage (I-V) characteristics of ion currents based on the Luo-Rudy Phase-I (LR-I) cardiac model by using a Field Programmable Gate Array (FPGA). Cardiac models are becoming increasingly complex, which can make simulations very time-consuming. Thus, a real-time hardware implementation using an FPGA could be one of the best solutions for high-performance real-time systems, as it provides high configurability and performance and is able to execute operations in parallel. For a shorter development time while retaining high-confidence results, FPGA-based rapid prototyping through HDL Coder from MATLAB has been used to construct the algorithm for the simulation system. The HDL Coder is capable of converting the designed MATLAB Simulink blocks into hardware description language (HDL) for the FPGA implementation. As a result, the voltage-clamp fixed-point design of the LR-I model has been successfully conducted in MATLAB Simulink, and the simulation of the I-V characteristics of the ionic currents has been verified on a Xilinx FPGA Virtex-6 XC6VLX240T development board through an FPGA-in-the-loop (FIL) simulation.
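The I-V characterisation being verified can be pictured with a much simpler model than LR-I: hold the membrane at a fixed potential, step it to a test potential, let the gating variable settle, and record the resulting current for each step. The sketch below uses a generic Hodgkin-Huxley-style potassium current with textbook rate constants; it is only an illustration of the voltage-clamp protocol, not the Luo-Rudy Phase-I formulation used in the paper.

```python
import numpy as np

# Voltage-clamp I-V sketch for a Hodgkin-Huxley-style K+ current,
# i_K = g_K * n^4 * (V - E_K).  Parameters are textbook values, assumed here
# only to illustrate the protocol.
g_K, E_K = 36.0, -77.0          # mS/cm^2, mV

def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def clamp_current(v_hold, v_step, duration_ms=50.0, dt=0.01):
    """Hold at v_hold, step to v_step, integrate the gating variable n with
    forward Euler, and return the current at the end of the step."""
    n = alpha_n(v_hold) / (alpha_n(v_hold) + beta_n(v_hold))   # steady state at hold
    for _ in range(int(duration_ms / dt)):
        n += dt * (alpha_n(v_step) * (1.0 - n) - beta_n(v_step) * n)
    return g_K * n**4 * (v_step - E_K)

for v in np.arange(-60.0, 41.0, 20.0):
    print(f"step to {v:6.1f} mV -> i_K = {clamp_current(-80.0, v):8.2f} uA/cm^2")
```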
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Adapting Strategic Aircraft Assets to a Changing World: Technology Insertion to Provide Flexibility
1994-09-01
Distributed Processing... Unembedded Expansions... Concepts for, and Impacts on, Preflight... capabilities through the use of unembedded expansions to the existing avionics complex. But, before we explore these technology insertion concepts, we must... Unembedded Expansions: The final hardware technology insertion area focuses on the concept of expanding the aircraft's capabilities by inserting, or perhaps
Ways of achieving continuous service from computers
NASA Technical Reports Server (NTRS)
Quinn, M. J., Jr.
1974-01-01
This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.
The Paradigm of Utilizing Robots in the Teaching Process: A Comparative Study
ERIC Educational Resources Information Center
Bacivarov, Ioan C.; Ilian, Virgil L. M.
2012-01-01
This paper discusses a comparative study of the effects of using a humanoid robot for introducing students to personal robotics. Even if a humanoid robot is one of the more complicated types of robots, comprehension was not an issue. The study highlighted the importance of using real hardware for teaching such complex subjects as opposed to…
The design and construction of a cost-efficient confocal laser scanning microscope
NASA Astrophysics Data System (ADS)
Xi, Peng; Rajwa, Bartlomiej; Jones, James T.; Robinson, J. Paul
2007-03-01
The optical dissection ability of confocal microscopy makes it a powerful tool for biological materials. However, the cost and complexity of confocal scanning laser microscopy hinders its wide application in education. We describe the construction of a simplified confocal scanning laser microscope and demonstrate three-dimensional projection based on cost-efficient commercial hardware, together with available open source software.
Runtime verification of embedded real-time systems.
Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg
We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability, thus facilitating applications of the framework in both a prototyping and a post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time at which the operator is executed and in the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
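As a point of reference for what those observers compute, the bounded past-time Since operator has a straightforward (if inefficient) evaluation directly from its semantics. The sketch below is that naive O(b) reference check over Boolean traces, useful for testing; it is not the paper's logarithmic-time hardware observer, and the example traces are fabricated.

```python
def since_bounded(phi1, phi2, a, b):
    """Naive reference evaluation of the past-time bounded Since operator:
    (phi1 S[a,b] phi2) holds at step n iff phi2 held at some step m with
    a <= n - m <= b and phi1 held at every step strictly after m up to n."""
    out = []
    for n in range(len(phi1)):
        ok = False
        for d in range(a, b + 1):
            m = n - d
            if m < 0:
                break
            if phi2[m] and all(phi1[k] for k in range(m + 1, n + 1)):
                ok = True
                break
        out.append(ok)
    return out

phi1 = [1, 1, 1, 1, 0, 1, 1, 1]       # e.g. "system responsive"
phi2 = [0, 1, 0, 0, 0, 0, 1, 0]       # e.g. "request issued"
print(since_bounded(phi1, phi2, a=0, b=3))
```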
Kukushkin, V I; Satusheva, E V; Aleksandrov, M T; Morozova, O A; Pashkov, E P; Ambartsumyan, O A; Amosova, V A
2015-01-01
The aim was to determine the effect of microorganisms on the spoilage of meat products under various temperature regimes of storage, using integral indexes of luminescent lines in their spectra, and to develop an algorithm for microorganism indication by an express method using laser Raman-luminescent spectroscopy. Minced meat from beef and pork was used. Determination of the quantity of mesophilic aerobic and opportunistic-anaerobic microorganisms was carried out by serial 10-fold dilutions with subsequent parallel seeding onto Rida count total 24 plates and Petri dishes with 5% blood agar. Samples were studied in the luminescent software-hardware complex Enspectr L405 (a variant of the Enspectr M software-hardware complexes). Meat spoilage was established to be caused to a large degree by the growth of bacteria of the Pseudomonas genus (P. fluorescens, P. putida, P. fragi, et al.). Raman-luminescent spectra of the bacteria that compose the microflora characterizing and accompanying beef and pork spoilage were measured and recorded into a database. The results obtained will allow this technique to be used in the future both for express indication and differentiation of microorganisms and for express evaluation of the quality of meat products at all stages of their manufacturing, storage, transport and sale.
Real-time optimizations for integrated smart network camera
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois
2005-02-01
We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and some network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that can be shared with other cameras in a network. We describe the requirements of such a system and then show how the design of the system is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection have to be highly optimized and simplified to be used on this hardware. To achieve a good match between hardware and software in this light embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi Alliance. We can easily integrate software and hardware in complex environments thanks to the Java Real-Time specification for the virtual machine and some network and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies of such a camera, such as counter-flow detection.
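A minimal version of the background-differencing stage mentioned above maintains a running-average background image and flags pixels that deviate from it by more than a threshold, updating the model only where the scene looks static. Parameter values and frame sizes below are assumptions for the sketch, not those of the described camera.

```python
import numpy as np

class BackgroundDifferencer:
    """Running-average background model with per-pixel thresholding."""
    def __init__(self, alpha=0.02, threshold=25.0):
        self.bg = None
        self.alpha = alpha            # background adaptation rate
        self.threshold = threshold    # grey-level change flagged as foreground

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if self.bg is None:           # bootstrap the model from the first frame
            self.bg = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        foreground = np.abs(frame - self.bg) > self.threshold
        # Adapt only where the scene looks static, so moving objects are not
        # absorbed into the background.
        self.bg = np.where(foreground, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return foreground

detector = BackgroundDifferencer()
static = np.full((240, 320), 100, dtype=np.uint8)
for _ in range(50):                                    # empty scene
    detector.apply(static)
moving = static.copy()
moving[100:120, 150:180] = 200                         # a bright moving patch
print("foreground pixels:", int(detector.apply(moving).sum()))   # 600
```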
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparn, Bethany F; Ruth, Mark F; Krishnamurthy, Dheepak
Many have proposed that responsive load provided by distributed energy resources (DERs) and demand response (DR) is an option to provide flexibility to the grid and especially to distribution feeders. However, because responsive load involves a complex interplay between tariffs and DER and DR technologies, it is challenging to test and evaluate options without negatively impacting customers. This paper describes a hardware-in-the-loop (HIL) simulation system that has been developed to reduce the cost of evaluating the impact of advanced controllers (e.g., model predictive controllers) and technologies (e.g., responsive appliances). The HIL simulation system combines large-scale software simulation with a small set of representative building equipment hardware. It is used to perform HIL simulation of a distribution feeder and the loads on it under various tariff structures. In the reported HIL simulation, loads include many simulated air conditioners and one physical air conditioner. Independent model predictive controllers manage operations of all air conditioners under a time-of-use tariff. Results from this HIL simulation and a discussion of future development work of the system are presented.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using a UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using a Spartan-6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
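For context, the simplest local matcher against which such weighted schemes are usually compared picks, for each pixel, the disparity with the smallest window-summed absolute difference (SAD). The sketch below is that baseline on a synthetic shifted image; it is not the paper's weighted algorithm or its FPGA architecture, and the window size and disparity range are arbitrary choices.

```python
import numpy as np
from scipy.signal import convolve2d

def sad_disparity(left, right, max_disp=16, window=5):
    """Baseline local stereo matcher: winner-take-all over window-summed
    absolute differences between the left image and the shifted right image."""
    half = window // 2
    pad = lambda img: np.pad(img.astype(np.float32), half, mode='edge')
    L = pad(left)
    kernel = np.ones((window, window), dtype=np.float32)
    cost = np.empty((max_disp,) + left.shape, dtype=np.float32)
    for d in range(max_disp):
        R = pad(np.roll(right, d, axis=1))     # candidate disparity d
        cost[d] = convolve2d(np.abs(L - R), kernel, mode='valid')
    return cost.argmin(axis=0)

left = np.random.default_rng(0).integers(0, 255, (60, 80))
right = np.roll(left, -4, axis=1)              # synthetic scene: 4-pixel disparity
print("median recovered disparity:", int(np.median(sad_disparity(left, right))))
```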
A modular suite of hardware enabling spaceflight cell culture research
NASA Technical Reports Server (NTRS)
Hoehn, Alexander; Klaus, David M.; Stodieck, Louis S.
2004-01-01
BioServe Space Technologies, a NASA Research Partnership Center (RPC), has developed and operated various middeck payloads launched on 23 shuttle missions since 1991 in support of commercial space biotechnology projects. Modular cell culture systems are contained within the Commercial Generic Bioprocessing Apparatus (CGBA) suite of flight-qualified hardware, compatible with Space Shuttle, SPACEHAB, Spacelab and International Space Station (ISS) EXPRESS Rack interfaces. As part of the CGBA family, the Isothermal Containment Module (ICM) incubator provides thermal control, data acquisition and experiment manipulation capabilities, including accelerometer launch detection for automated activation and thermal profiling for culture incubation and sample preservation. The ICM can accommodate up to 8 individually controlled temperature zones. Command and telemetry capabilities allow real-time downlink of data and video permitting remote payload operation and ground control synchronization. Individual cell culture experiments can be accommodated in a variety of devices ranging from 'microgravity test tubes' or standard 100 mm Petri dishes, to complex, fed-batch bioreactors with automated culture feeding, waste removal and multiple sample draws. Up to 3 levels of containment can be achieved for chemical fixative addition, and passive gas exchange can be provided through hydrophobic membranes. Many additional options exist for designing customized hardware depending on specific science requirements.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
Time assignment system and its performance aboard the Hitomi satellite
NASA Astrophysics Data System (ADS)
Terada, Yukikatsu; Yamaguchi, Sunao; Sugimoto, Shigenobu; Inoue, Taku; Nakaya, Souhei; Murakami, Maika; Yabe, Seiya; Oshimizu, Kenya; Ogawa, Mina; Dotani, Tadayasu; Ishisaki, Yoshitaka; Mizushima, Kazuyo; Kominato, Takashi; Mine, Hiroaki; Hihara, Hiroki; Iwase, Kaori; Kouzu, Tomomi; Tashiro, Makoto S.; Natsukari, Chikara; Ozaki, Masanobu; Kokubun, Motohide; Takahashi, Tadayuki; Kawakami, Satoko; Kasahara, Masaru; Kumagai, Susumu; Angelini, Lorella; Witthoeft, Michael
2018-01-01
Fast timing capability in x-ray observation of astrophysical objects is one of the key properties for the ASTRO-H (Hitomi) mission. Absolute timing accuracies of 350 or 35 μs are required to achieve nominal scientific goals or to study fast variabilities of specific sources. The satellite carries a GPS receiver to obtain accurate time information, which is distributed from the central onboard computer through the large and complex SpaceWire network. The details of the hardware and software design of the time system are described. In the distribution of the time information, the propagation delays and jitters affect the timing accuracy. Six other items identified within the timing system will also contribute to absolute time error. These error items have been measured and checked on the ground to ensure the time error budgets meet the mission requirements. The overall timing performance, in combination with the hardware performance, the software algorithms, the orbital determination accuracy, etc., under nominal conditions satisfies the mission requirement of 35 μs. This work demonstrates key points for space-use instruments in hardware and software designs and calibration measurements for fine timing accuracy on the order of microseconds for midsized satellites using the SpaceWire (IEEE1355) network.
Adaptive Neuromorphic Circuit for Stereoscopic Disparity Using Ocular Dominance Map
Sharma, Sheena; Gupta, Priti; Markan, C. M.
2016-01-01
Stereopsis or depth perception is a critical aspect of information processing in the brain and is computed from the positional shift or disparity between the images seen by the two eyes. Various algorithms and their hardware implementation that compute disparity in real time have been proposed; however, most of them compute disparity through complex mathematical calculations that are difficult to realize in hardware and are biologically unrealistic. The brain presumably uses simpler methods to extract depth information from the environment and hence newer methodologies that could perform stereopsis with brain like elegance need to be explored. This paper proposes an innovative aVLSI design that leverages the columnar organization of ocular dominance in the brain and uses time-staggered Winner Take All (ts-WTA) to adaptively create disparity tuned cells. Physiological findings support the presence of disparity cells in the visual cortex and show that these cells surface as a result of binocular stimulation received after birth. Therefore, creating in hardware cells that can learn different disparities with experience not only is novel but also is biologically more realistic. These disparity cells, when allowed to interact diffusively on a larger scale, can be used to adaptively create stable topological disparity maps in silicon. PMID:27243029
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
Microgravity Manufacturing Via Fused Deposition
NASA Technical Reports Server (NTRS)
Cooper, K. G.; Griffin, M. R.
2003-01-01
Manufacturing polymer hardware during space flight is currently outside the state of the art. A process called fused deposition modeling (FDM) can make this approach a reality by producing net-shaped components of polymer materials directly from a CAE model. FDM is a rapid prototyping process developed by Stratasys, Inc., which deposits a fine line of semi-molten polymer onto a substrate while moving via computer control to form the cross-sectional shape of the part it is building. The build platen is then lowered and the process is repeated, building a component directly layer by layer. This method enables direct net-shaped production of polymer components directly from a computer file. The layered manufacturing process allows for the manufacture of complex shapes and internal cavities otherwise impossible to machine. This task demonstrated the benefits of the FDM technique to quickly and inexpensively produce replacement components or repair broken hardware in a Space Shuttle or Space Station environment. The intent of the task was to develop and fabricate an FDM system that was lightweight, compact, and required minimum power consumption to fabricate ABS plastic hardware in microgravity. The final product of the shortened task turned out to be a ground-based breadboard device, demonstrating miniaturization capability of the system.
NASA Technical Reports Server (NTRS)
Nelson, James H.; Callan, Daniel R.
1985-01-01
To establish consistency and visibility within the Orbital Transfer Vehicle (OTV) program, a preliminary work breakdown structure (WBS) and dictionary were developed. The dictionary contains definitions of terms to be used in conjunction with the WBS so that a clear understanding of the content of the hardware, function, and cost elements may be established. The OTV WBS matrix is a two-dimensional structure which shows the interrelationship of these dimensions: the hardware elements dimension and the phase and function dimension. The dimension of time cannot be shown graphically, but must be considered. Each cost entry varies with time so that it is necessary to know these cost values by year for budget planning and approval as well as for establishing cost streams for discounting purposes in the economic analysis. While a multiple dimensional approach may at first appear complex, it actually provides benefits which outweigh any concern. This structural interrelationship provides the capability to view and analyze the OTV costs from a number of different financial and management aspects. Cost may be summed by hardware groupings, phases, or functions. The WBS may be used in a number of dimensional or single listing format applications.
Multicore Architectures for Multiple Independent Levels of Security Applications
2012-09-01
to bolster the MILS effort. However, current MILS operating systems are not designed for multi-core platforms. They do not have the hardware support to ensure that the separation...the availability of information at different security classification levels while increasing the overall security of the computing system. Due to the
Environmental Control and Life Support Integration Strategy for 6-Crew Operations
NASA Technical Reports Server (NTRS)
Duchesne, Stephanie M.
2009-01-01
The International Space Station (ISS) crew complement has increased in size from 3 to 6 crew members. In order to support this increase in crew on ISS, the United States on-orbit Segment (USOS) has been outfitted with a suite of regenerative Environmental Control and Life Support (ECLS) hardware including an Oxygen Generation System (OGS), Waste and Hygiene Compartment (WHC), and a Water Recovery System (WRS). The WRS includes the Urine Processor Assembly (UPA) and the Water Processor Assembly (WPA). With this additional life support hardware, the ISS has achieved full redundancy in its on-orbit life support system between the USOS and Russian Segment (RS). The additional redundancy created by the Regenerative ECLS hardware creates the opportunity for independent support capabilities between segments, and for the first time since the start of ISS, the necessity to revise Life Support strategy agreements. Independent operating strategies coupled with the loss of the Space Shuttle supply and return capabilities in 2010 offer new and unique challenges. This paper will discuss the evolution of the ISS Life Support hardware strategy in support of 6-Crew on ISS, as well as the continued work that is necessary to ensure the support of crew and ISS Program objectives through the life of the station.
Environmental Control and Life Support Integration Strategy for 6-Crew Operations
NASA Technical Reports Server (NTRS)
Duchesne, Stephanie M.; Tressler, Chad H.
2010-01-01
The International Space Station (ISS) crew complement has increased in size from 3 to 6 crew members. In order to support this increase in crew on ISS, the United States on-orbit Segment (USOS) has been outfitted with a suite of regenerative Environmental Control and Life Support (ECLS) hardware including an Oxygen Generation System (OGS), Waste and Hygiene Compartment (WHC), and a Water Recovery System (WRS). The WRS includes the Urine Processor Assembly (UPA) and the Water Processor Assembly (WPA). With this additional life support hardware, the ISS has achieved full redundancy in its on-orbit life support system between the USOS and Russian Segment (RS). The additional redundancy created by the regenerative ECLS hardware creates the opportunity for independent support capabilities between segments, and for the first time since the start of ISS, the necessity to revise Life Support strategy agreements. Independent operating strategies coupled with the loss of the Space Shuttle supply and return capabilities in 2010 offer new and unique challenges. This paper will discuss the evolution of the ISS Life Support hardware strategy in support of 6-Crew on ISS, as well as the continued work that is necessary to ensure the support of crew and ISS Program objectives through the life of station.
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of the Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and an FPGA-based solution) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with levels of noise and feature clarity similar to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
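The parallelism the authors exploit comes from the structure of backprojection itself: every (projection angle, pixel) pair can be processed independently. The sketch below shows a plain, unfiltered parallel-beam backprojection in NumPy; it is a conceptual illustration only, not the paper's GPU or FPGA implementation, and a real reconstruction would ramp-filter each projection first.

```python
import numpy as np

def backproject(sinogram, angles_deg, n):
    """Simple (unfiltered) parallel-beam backprojection onto an n x n grid.

    sinogram: 2-D array with one row of detector samples per projection angle.
    Every (angle, pixel) contribution is independent, which is why the
    computation parallelizes so well on GPUs and FPGAs.
    """
    n_angles, n_det = sinogram.shape
    image = np.zeros((n, n))
    xs = np.arange(n) - (n - 1) / 2.0          # pixel coordinates centred on the image
    X, Y = np.meshgrid(xs, xs)
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        # Detector coordinate hit by each pixel for this projection angle
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += sinogram[k, idx]
    return image * np.pi / n_angles
```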
Novel algorithm implementations in DARC: the Durham AO real-time controller
NASA Astrophysics Data System (ADS)
Basden, Alastair; Bitenc, Urban; Jenkins, David
2016-07-01
The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.
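As a rough sketch of the correlation wavefront sensing mentioned above, the snippet below estimates the shift of one sub-aperture spot against a reference image from the peak of an FFT-based cross-correlation. It is a minimal illustration of the principle, not DARC's optimized implementation (which adds interpolation, windowing, and reference updating).

```python
import numpy as np

def correlation_shift(subap, reference):
    """Estimate the (dy, dx) shift of a sub-aperture image relative to a reference
    via the peak of an FFT cross-correlation. Illustrative only."""
    corr = np.fft.ifft2(np.fft.fft2(subap) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    # Wrap the peak indices into signed shifts about zero
    shift = (np.array(peak) + shape // 2) % shape - shape // 2
    return tuple(int(s) for s in shift)
```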
High throughput computing: a solution for scientific analysis
O'Donnell, M.
2011-01-01
handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).
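One requirement called out in the passage above is automatic handling of job failures so that nothing has to be resubmitted by hand. A minimal, stand-alone sketch of that idea is shown below; a real HTC scheduler such as HTCondor also matches jobs to idle CPUs across the LAN and tracks their state centrally, which this wrapper does not attempt.

```python
import subprocess
import time

def run_with_resubmission(cmd, max_attempts=5, backoff_s=60):
    """Run one analysis job, resubmitting automatically after failures.

    `cmd` is the command list for one job (e.g. ["python", "analysis.py", "chunk_07"]).
    Illustrative only: failure handling here is a simple retry loop, not a
    full HTC job-management system.
    """
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        print(f"attempt {attempt} failed (rc={result.returncode}); retrying")
        time.sleep(backoff_s)
    raise RuntimeError(f"job failed after {max_attempts} attempts: {cmd}")
```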
Computational high-resolution optical imaging of the living human retina
NASA Astrophysics Data System (ADS)
Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.
2015-07-01
High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.
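The core of the computational approach is that aberrations can be removed numerically from complex-valued interferometric data rather than physically with deformable mirrors. The sketch below applies a defocus-like phase correction to one en face plane in the spatial-frequency domain; it is a minimal illustration of the idea, assuming the aberration phase is already known, whereas the published method estimates it automatically and handles higher-order terms.

```python
import numpy as np

def correct_defocus(en_face_field, defocus_coeff):
    """Apply a computational defocus correction to a complex en face OCT plane.

    Minimal sketch: remove a quadratic (defocus-like) phase in the spatial-
    frequency domain. `defocus_coeff` is an assumed, already-estimated value.
    """
    ny, nx = en_face_field.shape
    fy = np.fft.fftfreq(ny)
    fx = np.fft.fftfreq(nx)
    FX, FY = np.meshgrid(fx, fy)
    phase = defocus_coeff * (FX ** 2 + FY ** 2)        # quadratic aberration phase
    spectrum = np.fft.fft2(en_face_field)
    return np.fft.ifft2(spectrum * np.exp(-1j * phase))  # corrected complex field
```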
FTAP: a Linux-based program for tapping and music experiments.
Finney, S A
2001-02-01
This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation; on this basis, a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertaining ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. The simplified method is recommended for national metrology institutes of developing countries and for industrial primary vibration calibration labs because of its simplified algorithm and low hardware requirements.
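For orientation, the quantity such a calibration ultimately evaluates is the ratio of the accelerometer output phasor to the acceleration phasor derived from the interferometrically measured displacement. The sketch below shows only that final ratio, not the TIA algorithm itself; all values and names are illustrative.

```python
import numpy as np

def complex_sensitivity(freq_hz, disp_amp_m, disp_phase_rad, out_amp_v, out_phase_rad):
    """Complex sensitivity of an accelerometer from interferometric displacement data.

    Illustrative relation only (not the TIA algorithm): for sinusoidal motion
    x(t) with phasor X, the acceleration phasor is a = -(2*pi*f)^2 * X, and the
    sensitivity is the accelerometer output phasor divided by a, in V/(m/s^2).
    """
    omega = 2.0 * np.pi * freq_hz
    accel = (omega ** 2) * disp_amp_m * np.exp(1j * (disp_phase_rad + np.pi))  # a = -(w^2) x
    output = out_amp_v * np.exp(1j * out_phase_rad)
    return output / accel

# e.g. 160 Hz excitation, 10 um displacement amplitude, 1 V output amplitude
S = complex_sensitivity(160.0, 10e-6, 0.0, 1.0, 0.05)
print(abs(S), np.angle(S))
```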
Autonomous Power System intelligent diagnosis and control
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony
1991-01-01
The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the abilities of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution hardware. Knowledge-based software provides a robust method of control for highly complex space-based power systems that conventional methods do not allow. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis and control, the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space based power system. The operation of the Autonomous Power System as a whole is described and the responsibilities of the three elements - APEX, AIPS, and Brassboard - are characterized. A discussion of the methodologies used in each element is provided. Future plans are discussed for the growth of the Autonomous Power System.
Design and specification of a centralized manufacturing data management and scheduling system
NASA Technical Reports Server (NTRS)
Farrington, Phillip A.
1993-01-01
As was revealed in a previous study, the Materials and Processes Laboratory's Productivity Enhancement Complex (PEC) has a number of automated production areas/cells that are not effectively integrated, limiting the ability of users to readily share data. The recent decision to utilize the PEC for the fabrication of flight hardware has focused new attention on the problem and brought to light the need for an integrated data management and scheduling system. This report addresses this need by developing preliminary design specifications for a centralized manufacturing data management and scheduling system for managing flight hardware fabrication in the PEC. This prototype system will be developed under the auspices of the Integrated Engineering Environment (IEE) Oversight Team and the IEE Committee. At their recommendation, the system specifications were based on the fabrication requirements of the AXAF-S Optical Bench.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven D.; Hatfield, Glen S.; Britton, Paul
2017-01-01
The complex electronic and avionic systems of today's launch vehicles heavily utilize the Field Programmable Gate Array (FPGA) integrated circuit (IC). FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B, and in control signal commands such as solenoid/servo valve actuations. This paper presents guidelines for estimating FPGA failure rates for a launch vehicle; the guidelines account for hardware, firmware, and radiation-induced failures. The hardware portion of the approach accounts for physical failures of the IC, FPGA memory, and clock. The firmware portion provides guidelines on the high-level FPGA programming language and ways to account for software/code reliability growth. The radiation portion provides guidelines on environmental susceptibility as well as on tailoring other launch vehicle programs' historical data to a specific launch vehicle.
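A minimal numerical illustration of the hardware/firmware/radiation breakdown is given below: the three contributions are combined into a total failure rate and converted to a mission failure probability under a constant-rate assumption. The numbers are placeholders, not values from the paper.

```python
import math

# Placeholder FPGA failure-rate contributions (per hour), not values from the paper
lambda_hw = 120e-9   # physical IC, FPGA memory, and clock failures
lambda_fw = 40e-9    # firmware/code contribution after reliability growth
lambda_rad = 25e-9   # radiation-induced (e.g. single-event upset) contribution

lambda_total = lambda_hw + lambda_fw + lambda_rad

mission_hours = 2.0  # e.g. a short launch-to-orbit phase
p_failure = 1.0 - math.exp(-lambda_total * mission_hours)  # constant-rate assumption

print(f"total failure rate: {lambda_total:.3e} /h")
print(f"probability of failure over {mission_hours} h: {p_failure:.2e}")
```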
Embedded System Implementation on FPGA System With μCLinux OS
NASA Astrophysics Data System (ADS)
Fairuz Muhd Amin, Ahmad; Aris, Ishak; Syamsul Azmir Raja Abdullah, Raja; Kalos Zakiah Sahbudin, Ratna
2011-02-01
Embedded systems are taking on more complicated tasks as the processors involved become more powerful. Embedded systems are widely used in many areas such as industry, automotive, medical imaging, communications, speech recognition, and computer vision. Today's hardware and software complexity requirements call for a flexible system that allows further enhancement of any design without adding new hardware; otherwise, any change to the design may require the processor itself to be changed. To overcome this problem, a System On Programmable Chip (SOPC) has been designed and developed using a Field Programmable Gate Array (FPGA). A soft-core processor, the NIOS II 32-bit RISC, was utilized as the microprocessor core in the FPGA system together with the embedded operating system (OS) μClinux. In this paper, an example of a web server is explained and demonstrated.
SpiderSpec: a low-cost compact colorimeter with IoT functionality
NASA Astrophysics Data System (ADS)
Mignani, Anna G.; Mencaglia, Andrea A.; Baldi, Massimo; Ciaccheri, Leonardo
2015-07-01
A miniaturized device for colorimetry is presented that utilizes an LED array for illumination and a compact spectrometer for detection. It can be battery-powered and operated locally as a stand-alone device, or connected via Wi-Fi to the internet, and it can potentially be operated remotely by means of a tablet or a smartphone. In practice, it consists of a low-cost hardware configuration that is adaptable via software to widely varying user requirements, serving as a spectroscopic platform appropriate for a variety of applications. The hardware and software modules can be designed with different performance, complexity, and cost, with the aim of making the colorimeter a device for Internet-of-Things use. It will be suitable for a selected range of consumer applications, as well as for targeted industrial, environmental, and food applications.
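The colorimetric step such a device performs in software essentially weights the measured spectrum by colour-matching functions and an illuminant. A generic sketch follows; the colour-matching-function and illuminant tables must come from the CIE standards and are not included here, and this is not the SpiderSpec firmware.

```python
import numpy as np

def spectrum_to_xyz(wavelengths_nm, reflectance, illuminant, cmf_x, cmf_y, cmf_z):
    """Convert a measured reflectance spectrum to CIE XYZ tristimulus values.

    All inputs are 1-D arrays sampled at `wavelengths_nm`. The colour-matching
    functions (cmf_x/y/z) and illuminant spectrum must be supplied from the
    standard CIE tables; they are not embedded here.
    """
    d_lambda = np.gradient(wavelengths_nm)                     # wavelength step
    k = 100.0 / np.sum(illuminant * cmf_y * d_lambda)          # normalisation so Y of a perfect white = 100
    X = k * np.sum(illuminant * reflectance * cmf_x * d_lambda)
    Y = k * np.sum(illuminant * reflectance * cmf_y * d_lambda)
    Z = k * np.sum(illuminant * reflectance * cmf_z * d_lambda)
    return X, Y, Z
```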
Protein-protein docking on hardware accelerators: comparison of GPU and MIC architectures
2015-01-01
Background: Hardware accelerators can provide solutions to computationally complex problems in bioinformatics. However, the effect of acceleration depends on the nature of the application, so selecting an appropriate accelerator requires some consideration. Results: In the present study, we compared the effects of acceleration using a graphics processing unit (GPU) and a many integrated core (MIC) coprocessor on the speed of fast Fourier transform (FFT)-based protein-protein docking calculations. The GPU implementation performed the protein-protein docking calculations approximately five times faster than the MIC offload mode implementation. The MIC native mode implementation has an advantage in implementation cost; however, its performance was worse with larger protein pairs because of memory limitations. Conclusion: The results suggest that the GPU is more suitable than the MIC for accelerating FFT-based protein-protein docking applications. PMID:25707855
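For context, the FFT-based scoring that these accelerators speed up evaluates every translation of one ligand orientation at once as a circular cross-correlation of two 3-D grids. A minimal NumPy sketch of that single step is shown below (grid construction and the rotation loop are omitted).

```python
import numpy as np

def docking_scores(receptor_grid, ligand_grid):
    """FFT-based rigid-body docking correlation for one ligand orientation.

    Both molecules are assumed to be discretised onto 3-D grids of the same
    shape; the score of every translation is obtained at once as a circular
    cross-correlation. Illustrative sketch of the Katchalski-Katzir-style step
    that GPU/MIC codes accelerate.
    """
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    scores = np.fft.ifftn(R * np.conj(L)).real
    return scores  # scores[dz, dy, dx] = correlation at that translation

# In a full docking run this is repeated for thousands of ligand rotations,
# which is why the FFTs dominate and map well onto hardware accelerators.
```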
Operating systems. [of computers
NASA Technical Reports Server (NTRS)
Denning, P. J.; Brown, R. L.
1984-01-01
A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
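A small sketch of the semaphore mechanism described above: two tasks standing in for 'primitive processes', with a counting semaphore ensuring the consumer never runs ahead of the producer. The scenario and names are illustrative, not from the article.

```python
import threading

items = []
filled = threading.Semaphore(0)   # counts items available to consume

def producer():
    for i in range(5):
        items.append(i)
        filled.release()          # signal: one more item is ready

def consumer():
    for _ in range(5):
        filled.acquire()          # block until the producer has signalled
        print("consumed", items.pop(0))

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```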
Ultrasonic frequency selection for aqueous fine cleaning
NASA Technical Reports Server (NTRS)
Becker, Joann F.
1995-01-01
A study was conducted to evaluate ultrasonic cleaning systems for precision cleaning effectiveness for oxygen service hardware. This evaluation was specific for Rocketdyne Division of Rockwell Aerospace alloys and machining soils. Machining lubricants and hydraulic fluid were applied as soils to standardized complex test specimens designed to simulate typical hardware. The study consisted of tests which included 20, 25, 30, 40, 50, and 65 kHz ultrasonic cleaning systems. Two size categories of cleaning systems were evaluated, 3- to 10-gal laboratory size tanks and 35- to 320-gal industrial size tanks. The system properties of cavitation, frequency vs. cleaning effectiveness, the two types of transducers, and the power level of the system vs. size of the cleaning tank were investigated. The data obtained from this study was used to select the ultrasonic tanks for the aqueous fine clean facility installed at Rocketdyne.
Ultrasonic frequency selection for aqueous fine cleaning
NASA Technical Reports Server (NTRS)
Becker, Joann F.
1994-01-01
A study was conducted to evaluate ultrasonic cleaning systems for precision cleaning effectiveness for oxygen service hardware. This evaluation was specific for Rocketdyne Div. of Rockwell Aerospace alloys and machining soils. Machining lubricants and hydraulic fluid were applied as soils to standardized complex test specimens designed to simulate typical hardware. The study consisted of tests which included 20, 25, 30, 40, 50, and 65 kHz ultrasonic cleaning systems. Two size categories of cleaning systems were evaluated, 3- to 10-gal laboratory size tanks and 35- to 320-gal industrial size tanks. The system properties of cavitation; frequency vs. cleaning effectiveness; the two types of transducers; and the power level of the system vs. size of the cleaning tank were investigated. The data obtained from this study was used to select the ultrasonic tanks for the aqueous fine clean facility installed at Rocketdyne.
From linear to nonlinear control means: a practical progression.
Gao, Zhiqiang
2002-04-01
With the rapid advance of digital control hardware, it is time to take the simple but effective proportional-integral-derivative (PID) control technology to the next level of performance and robustness. For this purpose, a nonlinear PID and active disturbance rejection framework is introduced in this paper. It complements the existing theory in that (1) it actively and systematically explores the use of nonlinear control mechanisms for better performance, even for linear plants; and (2) it represents a control strategy that is largely independent of mathematical models of the plants, thus achieving inherent robustness and reducing design complexity. Stability analysis is presented, together with software/hardware test results. It is evident that the proposed framework lends itself well to seeking innovative solutions to practical problems while maintaining the simplicity and intuitiveness of the existing technology.
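As a hedged sketch of what a nonlinear PID law can look like, the snippet below uses the 'fal' nonlinear gain commonly associated with this line of work (nearly linear for small errors, sub-linear for large ones) inside a discrete PID step. The gains and parameters are illustrative, not the tuning from the paper.

```python
import numpy as np

def fal(e, alpha, delta):
    """Nonlinear gain used in nonlinear-PID / ADRC designs: linear for small
    errors (|e| <= delta), sub-linear for large ones."""
    if abs(e) <= delta:
        return e / (delta ** (1.0 - alpha))
    return np.sign(e) * abs(e) ** alpha

def nonlinear_pid_step(e, e_prev, integral, dt, kp=2.0, ki=0.5, kd=0.1):
    """One discrete step of an illustrative nonlinear PID law.

    Returns the control signal and the updated integral state. Gains and the
    fal parameters are placeholders, not the paper's tuning.
    """
    integral += e * dt
    de = (e - e_prev) / dt
    u = (kp * fal(e, 0.5, 0.05)
         + ki * fal(integral, 0.25, 0.05)
         + kd * fal(de, 0.75, 0.05))
    return u, integral
```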
NASA Technical Reports Server (NTRS)
Lukash, James A.; Daley, Earl
2011-01-01
This work describes the design and development effort to adapt rapid-development space hardware by creating a ground system using solutions of low complexity, mass, and cost. The Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft is based on the modular common spacecraft bus architecture developed at NASA Ames Research Center. The challenge was building upon the existing modular common bus design and development work and improving the LADEE spacecraft design by adding an Equipotential Voltage Reference (EVeR) system, commonly referred to as a ground system. This would aid LADEE in meeting Electromagnetic Environmental Effects (E3) requirements, thereby making the spacecraft more compatible with itself and its space environment. The methods used to adapt existing hardware are presented, including provisions that may be used on future spacecraft.
Use of small stand-alone Internet nodes as a distributed control system
NASA Astrophysics Data System (ADS)
Goodwin, Robert W.; Kucera, Michael J.; Shea, Michael F.
1994-12-01
For several years, the standard model for accelerator control systems has been workstation consoles connected to VME local stations by a Local Area Network with analog and digital data being accessed via a field bus to custom I/O interface electronics. Commercially available hardware has now made it possible to implement a small stand-alone data acquisition station that combines the LAN connection, the computer, and the analog and digital I/O interface on a single board. This eliminates the complexity of a field bus and the associated proprietary I/O hardware. A minimum control system is one data acquisition station and a Macintosh or workstation console, both connected to the network; larger systems have more consoles and nodes. An implementation of this architecture is described along with performance and operational experience.
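To illustrate the console-to-station architecture, the sketch below shows a console asking a stand-alone data-acquisition station on the LAN for one analog reading over TCP. The text protocol ("READ <n>" answered by a numeric string), the port, and the function names are hypothetical, invented purely for illustration.

```python
import socket

def read_channel(station_host, channel, port=5000):
    """Console-side sketch: request one analog channel reading from a
    stand-alone data-acquisition station over the LAN (hypothetical protocol)."""
    with socket.create_connection((station_host, port), timeout=2.0) as s:
        s.sendall(f"READ {channel}\n".encode())
        reply = s.recv(64).decode().strip()
    return float(reply)

# Example: poll channel 3 of the station at 192.168.1.40 (illustrative address)
# value = read_channel("192.168.1.40", 3)
```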