Spares Management : Optimizing Hardware Usage for the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Gulbrandsen, K. A.
1999-01-01
The complexity of the Space Shuttle Main Engine (SSME), combined with mounting requirements to reduce operations costs have increased demands for accurate tracking, maintenance, and projections of SSME assets. The SSME Logistics Team is developing an integrated asset management process. This PC-based tool provides a user-friendly asset database for daily decision making, plus a variable-input hardware usage simulation with complex logic yielding output that addresses essential asset management issues. Cycle times on critical tasks are significantly reduced. Associated costs have decreased as asset data quality and decision-making capability has increased.
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on Xilinx Virtex™-4 XC4VLX60 field programmable gate arrays (FPGA) device in a modular approach which simplifies and eases hardware update, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations which indicate 2 dB improvement in terms of SNR compared to LS estimation. Moreover complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation technique.
High-resolution imaging using a wideband MIMO radar system with two distributed arrays.
Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi
2010-05-01
Imaging a fast maneuvering target has been an active research area in past decades. Usually, an array antenna with multiple elements is implemented to avoid the motion compensations involved in the inverse synthetic aperture radar (ISAR) imaging. Nevertheless, there is a price dilemma due to the high level of hardware complexity compared to complex algorithm implemented in the ISAR imaging system with only one antenna. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. Furthermore, the system model, the equivalent array production method and the imaging procedure are presented. As compared with the classical real aperture radar (RAR) imaging system, there is a very important contribution in our method that the lower hardware complexity can be involved in the imaging system since many additive virtual array elements can be obtained. Numerical simulations are provided for testing our system and imaging method.
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
2012-01-01
Background Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive, and requires complex numerical operations and large memory resources. Substantial hardware resources are therefore needed for hardware implementations of PCA. General Hebbian algorithm (GHA) has been proposed for calculating PCs of neuronal spikes in our previous work, which eliminates the needs of computationally expensive covariance analysis and eigenvalue decomposition in conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. The large size memory will consume large hardware resources and contribute significant power dissipation, which make GHA difficult to be implemented in portable or implantable multi-channel recording micro-systems. Method In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction of large hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resources and power consumption of hardware implementations, which is critical for the future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed for evaluating the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field programmable logic arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memories achieved by the streaming computing Results and discussion Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% logic resources, 95.1% power consumption and 86.8% computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% memory resources and 67% power consumption can be saved when compared with the direct implementation of GHA. Conclusion Stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
Area-delay trade-offs of texture decompressors for a graphics processing unit
NASA Astrophysics Data System (ADS)
Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa
2011-05-01
Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on implementation details of the hardware architecture that is behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study on the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of decompressors and thus determine suitability of algorithms for systems with limited hardware resources.
Lasercom system architecture with reduced complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homayoon (Inventor)
1994-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention, a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides the means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
LaserCom System Architecture With Reduced Complexity
NASA Technical Reports Server (NTRS)
Lesh, James R. (Inventor); Chen, Chien-Chung (Inventor); Ansari, Homa-Yoon (Inventor)
1996-01-01
Spatial acquisition and precision beam pointing functions are critical to spaceborne laser communication systems. In the present invention a single high bandwidth CCD detector is used to perform both spatial acquisition and tracking functions. Compared to previous lasercom hardware design, the array tracking concept offers reduced system complexity by reducing the number of optical elements in the design. Specifically, the design requires only one detector and one beam steering mechanism. It also provides means to optically close the point-ahead control loop. The technology required for high bandwidth array tracking was examined and shown to be consistent with current state of the art. The single detector design can lead to a significantly reduced system complexity and a lower system cost.
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications
Stoppe, Jannis; Drechsler, Rolf
2015-01-01
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC. PMID:25946632
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications.
Stoppe, Jannis; Drechsler, Rolf
2015-05-04
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as show that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing in addition to improving RFI detection performance.
Liu, Weisong; Huang, Zhitao; Wang, Xiang; Sun, Weichao
2017-01-01
In a cognitive radio sensor network (CRSN), wideband spectrum sensing devices which aims to effectively exploit temporarily vacant spectrum intervals as soon as possible are of great importance. However, the challenge of increasingly high signal frequency and wide bandwidth requires an extremely high sampling rate which may exceed today’s best analog-to-digital converters (ADCs) front-end bandwidth. Recently, the newly proposed architecture called modulated wideband converter (MWC), is an attractive analog compressed sensing technique that can highly reduce the sampling rate. However, the MWC has high hardware complexity owing to its parallel channel structure especially when the number of signals increases. In this paper, we propose a single channel modulated wideband converter (SCMWC) scheme for spectrum sensing of band-limited wide-sense stationary (WSS) signals. With one antenna or sensor, this scheme can save not only sampling rate but also hardware complexity. We then present a new, SCMWC based, single node CR prototype System, on which the spectrum sensing algorithm was tested. Experiments on our hardware prototype show that the proposed architecture leads to successful spectrum sensing. And the total sampling rate as well as hardware size is only one channel’s consumption of MWC. PMID:28471410
Hardware implementation of CMAC neural network with reduced storage requirement.
Ker, J S; Kuo, Y H; Wen, R C; Liu, B D
1997-01-01
The cerebellar model articulation controller (CMAC) neural network has the advantages of fast convergence speed and low computation complexity. However, it suffers from a low storage space utilization rate on weight memory. In this paper, we propose a direct weight address mapping approach, which can reduce the required weight memory size with a utilization rate near 100%. Based on such an address mapping approach, we developed a pipeline architecture to efficiently perform the addressing operations. The proposed direct weight address mapping approach also speeds up the computation for the generation of weight addresses. Besides, a CMAC hardware prototype used for color calibration has been implemented to confirm the proposed approach and architecture.
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance are evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and it is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard decision multistage decoding.
NASA Astrophysics Data System (ADS)
Vikhlyantsev, O. P.; Generalov, L. N.; Kuryakin, A. V.; Karpov, I. A.; Gurin, N. E.; Tumkin, A. D.; Fil'chagin, S. V.
2017-12-01
A hardware-software complex for measurement of energy and angular distributions of charged particles formed in nuclear reactions is presented. Hardware and software structures of the complex, the basic set of the modular nuclear-physical apparatus of a multichannel detecting system on the basis of Δ E- E telescopes of silicon detectors, and the hardware of experimental data collection, storage, and processing are presented and described.
The Impact of Flight Hardware Scavenging on Space Logistics
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.
2011-01-01
For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew s capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Kathleen E.; Humble, Travis S.
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. We introduce the minor set cover (MSC) of a known graph GG : a subset of graph minors which contain any remaining minor of the graph as a subgraph, in an effort to reduce the complexity of the minor embedding problem. Any graph that can be embedded into GG will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, whichmore » is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K N,N has a MSC of N minors, from which K N+1 is identified as the largest clique minor of K N,N. In the case of determining the largest clique minor of hardware with faults we briefly discussed this open question.« less
Replicating Human Hand Synergies Onto Robotic Hands: A Review on Software and Hardware Strategies.
Salvietti, Gionata
2018-01-01
This review reports the principal solutions proposed in the literature to reduce the complexity of the control and of the design of robotic hands taking inspiration from the organization of the human brain. Several studies in neuroscience concerning the sensorimotor organization of the human hand proved that, despite the complexity of the hand, a few parameters can describe most of the variance in the patterns of configurations and movements. In other words, humans exploit a reduced set of parameters, known in the literature as synergies, to control their hands. In robotics, this dimensionality reduction can be achieved by coupling some of the degrees of freedom (DoFs) of the robotic hand, that results in a reduction of the needed inputs. Such coupling can be obtained at the software level, exploiting mapping algorithm to reproduce human hand organization, and at the hardware level, through either rigid or compliant physical couplings between the joints of the robotic hand. This paper reviews the main solutions proposed for both the approaches.
BigDataScript: a scripting language for data pipelines.
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.
The dynamical analysis of modified two-compartment neuron model and FPGA implementation
NASA Astrophysics Data System (ADS)
Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao
2017-10-01
The complexity of neural models is increasing with the investigation of larger biological neural network, more various ionic channels and more detailed morphologies, and the implementation of biological neural network is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on field programmable gate array (FPGA), to succinctly implement the reduced two-compartment model which retains essential features of more complicated models. The design proposes an approximate neuron model which is composed of a set of piecewise linear equations, and it can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results including varied ion channel characteristics coincide with the biological neuron model with a high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource compared with the original two-compartment model. These investigations are conducive to scalability of biological neural network in reconfigurable large-scale neuromorphic system.
BigDataScript: a scripting language for data pipelines
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778
Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew
2009-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
New Single Piece Blast Hardware design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulrich, Andri; Steinzig, Michael Louis; Aragon, Daniel Adrian
W, Q and PF engineers and machinists designed and fabricated, on the new Mazak i300, the first Single Piece Blast Hardware (unclassified design shown) reducing fabrication and inspection time by over 50%. The first DU Single Piece is completed and will be used for Hydro Test 3680. Past hydro tests used a twopiece assembly due to a lack of equipment capable of machining the complex saddle shape in a single piece. The i300 provides turning and milling 5-axis machining on one machine. The milling head on the i300 can machine past 90 relative to the spindle axis. This makes itmore » possible to machine the complex saddle surface on a single piece. Going to a single piece eliminates tolerance problems, such as tilting and eccentricity, that typically occurred when assembling the two pieces together« less
Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen E.; Humble, Travis S.
2017-04-01
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. We show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets
Hamilton, Kathleen E.; Humble, Travis S.
2017-02-23
Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. We introduce the minor set cover (MSC) of a known graph GG : a subset of graph minors which contain any remaining minor of the graph as a subgraph, in an effort to reduce the complexity of the minor embedding problem. Any graph that can be embedded into GG will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, whichmore » is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K N,N has a MSC of N minors, from which K N+1 is identified as the largest clique minor of K N,N. In the case of determining the largest clique minor of hardware with faults we briefly discussed this open question.« less
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
In the article the process of verification (calibration) of oil metering units secondary equipment is considered. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex carries out the commutation of the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the introduced algorithm. The developed software allows controlling the commutation of channels, setting values on the calibrator, reading the measured data from the controller, calculating errors and compiling protocols. This system can be used for checking the controllers of the secondary equipment of the oil metering units in the automatic verification mode (with the open communication protocol) or in the semi-automatic verification mode (without it). The peculiar feature of the approach used is the development of a universal signal switch operating under software control, which can be configured for various verification methods (calibration), which allows to cover the entire range of controllers of metering units secondary equipment. The use of automatic verification with the help of a hardware and software system allows to shorten the verification time by 5-10 times and to increase the reliability of measurements, excluding the influence of the human factor.
NASA Astrophysics Data System (ADS)
Bazdrov, I. I.; Bortkevich, V. S.; Khokhlov, V. N.
2004-10-01
This paper describes a software-hardware complex for the input into a personal computer of telemetric information obtained by means of telemetry stations TRAL KR28, RTS-8, and TRAL K2N. Structural and functional diagrams are given of the input device and the hardware complex. Results that characterize the features of the input process and selective data of optical measurements of atmospheric radiation are given. © 2004
Third Conference on Fibrous Composites in Flight Vehicle Design, part 1
NASA Technical Reports Server (NTRS)
1976-01-01
The use of fibrous composite materials in the design of aircraft and space vehicle structures and their impact on future vehicle systems are discussed. The topics covered include: flight test work on composite components, design concepts and hardware, specialized applications, operational experience, certification and design criteria. Contributions to the design technology base include data concerning material properties, design procedures, environmental exposure effects, manufacturing procedures, and flight service reliability. By including composites as baseline design materials, significant payoffs are expected in terms of reduced structural weight fractions, longer structural life, reduced fuel consumption, reduced structural complexity, and reduced manufacturing cost.
A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor
González, Diego; Botella, Guillermo; Meyer-Baese, Uwe; García, Carlos; Sanz, Concepción; Prieto-Matías, Manuel; Tirado, Francisco
2012-01-01
This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA) and NIOS II microprocessor applying a C to Hardware (C2H) acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI) technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system is practical for real-time throughput and reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system. PMID:23201989
Event-driven processing for hardware-efficient neural spike sorting
NASA Astrophysics Data System (ADS)
Liu, Yan; Pereira, João L.; Constandinou, Timothy G.
2018-02-01
Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide here a new efficient means for hardware implementation that is completely activity dependant. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting accuracies can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
NASA Technical Reports Server (NTRS)
Butler, Roy
2013-01-01
The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry.The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.
Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications.
Cabezas, Javier; Gelado, Isaac; Stone, John E; Navarro, Nacho; Kirk, David B; Hwu, Wen-Mei
2015-05-01
Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models force programmers to resort to multiple code versions, complex data copy steps and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems as well as adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that support key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in a exemplar heterogeneous system, peer DMA and double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware. The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice.
Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications
Cabezas, Javier; Gelado, Isaac; Stone, John E.; Navarro, Nacho; Kirk, David B.; Hwu, Wen-mei
2014-01-01
Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models force programmers to resort to multiple code versions, complex data copy steps and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems as well as adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that support key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in a exemplar heterogeneous system, peer DMA and double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware. The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice. PMID:26180487
Analytical Micromechanics Modeling Technique Developed for Ceramic Matrix Composites Analysis
NASA Technical Reports Server (NTRS)
Min, James B.
2005-01-01
Ceramic matrix composites (CMCs) promise many advantages for next-generation aerospace propulsion systems. Specifically, carbon-reinforced silicon carbide (C/SiC) CMCs enable higher operational temperatures and provide potential component weight savings by virtue of their high specific strength. These attributes may provide systemwide benefits. Higher operating temperatures lessen or eliminate the need for cooling, thereby reducing both fuel consumption and the complex hardware and plumbing required for heat management. This, in turn, lowers system weight, size, and complexity, while improving efficiency, reliability, and service life, resulting in overall lower operating costs.
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches the good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement it in the hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
Translations on Eastern Europe Political, Sociological, and Military Affairs No. 1361
1977-03-07
focusing on theories and methods of late bourgeois, logical positivistic trends of a structuralistic mold reducing the complex and muUi...and articles on military and civil defense, organization, theory , budgets, and hardware. 17. Key Words and Document Analysis. 17a. Descriptors...unanimously emphasized its support for consolidation of administration for non-proliferation of nuclear arms. The political advisory committee, however
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2013-05-23
Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of new approach wasmore » evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.« less
Applications of artificial intelligence to mission planning
NASA Technical Reports Server (NTRS)
Ford, Donnie R.; Rogers, John S.; Floyd, Stephen A.
1990-01-01
The scheduling problem facing NASA-Marshall mission planning is extremely difficult for several reasons. The most critical factor is the computational complexity involved in developing a schedule. The size of the search space is large along some dimensions and infinite along others. It is because of this and other difficulties that many of the conventional operation research techniques are not feasible or inadequate to solve the problems by themselves. Therefore, the purpose is to examine various artificial intelligence (AI) techniques to assist conventional techniques or to replace them. The specific tasks performed were as follows: (1) to identify mission planning applications for object oriented and rule based programming; (2) to investigate interfacing AI dedicated hardware (Lisp machines) to VAX hardware; (3) to demonstrate how Lisp may be called from within FORTRAN programs; (4) to investigate and report on programming techniques used in some commercial AI shells, such as Knowledge Engineering Environment (KEE); and (5) to study and report on algorithmic methods to reduce complexity as related to AI techniques.
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time direction of arrival (DOA) estimation.
KSC ground operations planning for Space Station
NASA Technical Reports Server (NTRS)
Lyon, J. R.; Revesz, W., Jr.
1993-01-01
At the Kennedy Space Center (KSC) in Florida, processing facilities are being built and activated to support the processing, checkout, and launch of Space Station elements. The generic capability of these facilities will be utilized to support resupply missions for payloads, life support services, and propellants for the 30-year life of the program. Special Ground Support Equipment (GSE) is being designed for Space Station hardware special handling requirements, and a Test, Checkout, and Monitoring System (TCMS) is under development to verify that the flight elements are ready for launch. The facilities and equipment used at KSC, along with the testing required to accomplish the mission, are described in detail to provide an understanding of the complexity of operations at the launch site. Assessments of hardware processing flows through KSC are being conducted to minimize the processing flow times for each hardware element. Baseline operations plans and the changes made to improve operations and reduce costs are described, recognizing that efficient ground operations are a major key to success of the Space Station.
Sensitivity study and parameter optimization of OCD tool for 14nm finFET process
NASA Astrophysics Data System (ADS)
Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping
2016-03-01
Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC process in the technology node of 90 nm and beyond. However, the rapidly shrunk critical dimensions of the semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology highly relies on the optical hardware configuration, spectral types, and inherently interactions between the incidence of light and various materials with various topological structures, therefore sensitivity analysis and parameter optimization are very critical in the OCD applications. This paper presents a method for seeking the optimum sensitive measurement configuration to enhance the metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra with a series of hardware configurations of incidence angles and azimuth angles were investigated. The optimum hardware measurement configuration and spectrum parameter can be identified. The FinFET structures in the technology node of 14 nm were constructed to validate the algorithm. This method provides guidance to estimate the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.
Using Innovative Technologies for Manufacturing and Evaluating Rocket Engine Hardware
NASA Technical Reports Server (NTRS)
Betts, Erin M.; Hardin, Andy
2011-01-01
Many of the manufacturing and evaluation techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing and evaluating hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) and white light scanning are being adopted and evaluated for their use on J-2X, with hopes of employing both technologies on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powdered metal manufacturing process in order to produce complex part geometries. The white light technique is a non-invasive method that can be used to inspect for geometric feature alignment. Both the DMLS manufacturing method and the white light scanning technique have proven to be viable options for manufacturing and evaluating rocket engine hardware, and further development and use of these techniques is recommended.
NASA Astrophysics Data System (ADS)
Karmazikov, Y. V.; Fainberg, E. M.
2005-06-01
Work with DICOM compatible equipment integrated into hardware and software systems for medical purposes has been considered. Structures of process of reception and translormation of the data are resulted by the example of digital rentgenography and angiography systems, included in hardware-software complex DIMOL-IK. Algorithms of reception and the analysis of the data are offered. Questions of the further processing and storage of the received data are considered.
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces novel model for design, visualization and management of complex, highly adaptive hardware systems. The model settles component oriented environment for both hardware modules and software application. It is developed on parameterized hardware description research. Establishment of stable link between hardware and software, as a purpose of designed and realized work, is presented. Novel programming framework model for the environment, named Graphic-Functional-Components is presented. The purpose of the paper is to present object oriented hardware modeling with mentioned features. Possible model implementation in FPGA chips and its management by object oriented software in Java is described.
Ali, Imran; Rikhan, Behnam Samadpoor; Kim, Dong-Gyu; Lee, Dong-Soo; Rehman, Muhammad Riaz Ur; Abbasizadeh, Hamed; Asif, Muhammad; Lee, Minjae; Hwang, Keum Cheol; Yang, Youngoo; Lee, Kang-Yoon
2018-05-14
In this paper, a low-power and small-area Single Edge Nibble Transmission (SENT) transmitter design is proposed for automotive pressure and temperature complex sensor applications. To reduce the cost and size of the hardware, the pressure and temperature information is processed with a single integrated circuit (IC) and transmitted at the same time to the electronic control unit (ECU) through SENT. Due to its digital nature, it is immune to noise, has reduced sensitivity to electromagnetic interference (EMI), and generates low EMI. It requires only one PAD for its connectivity with ECU, and thus reduces the pin requirements, simplifies the connectivity, and minimizes the printed circuit board (PCB) complexity. The design is fully synthesizable, and independent of technology. The finite state machine-based approach is employed for area efficient implementation, and to translate the proposed architecture into hardware. The IC is fabricated in 1P6M 180 nm CMOS process with an area of (116 μm × 116 μm) and 4.314 K gates. The current consumption is 50 μA from a 1.8 V supply with a total 90 μW power. For compliance with AEC-Q100 for automotive reliability, a reverse and over voltage protection circuit is also implemented with human body model (HBM) electro-static discharge (ESD) of +6 kV, reverse voltage of -16 V to 0 V, over voltage of 8.2 V to 16 V, and fabricated area of 330 μm × 680 μm. The extensive testing, measurement, and simulation results prove that the design is fully compliant with SAE J2716 standard.
Emerging Techniques for Field Device Security
Schwartz, Moses; Bechtel Corp.; Mulder, John; ...
2014-11-01
Critical infrastructure, such as electrical power plants and oil refineries, rely on embedded devices to control essential processes. State of the art security is unable to detect attacks on these devices at the hardware or firmware level. We provide an overview of the hardware used in industrial control system field devices, look at how these devices have been attacked, and discuss techniques and new technologies that may be used to secure them. We follow three themes: (1) Inspectability, the capability for an external arbiter to monitor the internal state of a device. (2) Trustworthiness, the degree to which a system will continue to function correctly despite disruption, error, or attack. (3) Diversity, the use of adaptive systems and complexity to make attacks more difficult by reducing the feasible attack surface.
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.
1990-01-01
The Environmental Control and Life Support System (ECLSS) is a Freedom Station distributed system with inherent applicability to advanced automation primarily due to the comparatively large reaction times of its subsystem processes. This allows longer contemplation times in which to form a more intelligent control strategy and to detect or prevent faults. The objective of the ECLSS Advanced Automation Project is to reduce the flight and ground manpower needed to support the initial and evolutionary ECLS system. The approach is to search out and make apparent those processes in the baseline system which are in need of more automatic control and fault detection strategies, to influence the ECLSS design by suggesting software hooks and hardware scars which will allow easy adaptation to advanced algorithms, and to develop complex software prototypes which fit into the ECLSS software architecture and will be shown in an ECLSS hardware testbed to increase the autonomy of the system. Covered here are the preliminary investigation and evaluation process, aimed at searching the ECLSS for candidate functions for automation and providing a software hooks and hardware scars analysis. This analysis shows changes needed in the baselined system for easy accommodation of knowledge-based or other complex implementations which, when integrated in flight or ground sustaining engineering architectures, will produce a more autonomous and fault tolerant Environmental Control and Life Support System.
Open-Source Wax RepRap 3-D Printer for Rapid Prototyping Paper-Based Microfluidics.
Pearce, J M; Anzalone, N C; Heldt, C L
2016-08-01
The open-source release of self-replicating rapid prototypers (RepRaps) has created a rich opportunity for low-cost distributed digital fabrication of complex 3-D objects such as scientific equipment. For example, 3-D printable reactionware devices offer the opportunity to combine open hardware microfluidic handling with lab-on-a-chip reactionware to radically reduce costs and increase the number and complexity of microfluidic applications. To further drive down the cost while improving the performance of lab-on-a-chip paper-based microfluidic prototyping, this study reports on the development of a RepRap upgrade capable of converting a Prusa Mendel RepRap into a wax 3-D printer for paper-based microfluidic applications. An open-source hardware approach is used to demonstrate a 3-D printable upgrade for the 3-D printer, which combines a heated syringe pump with the RepRap/Arduino 3-D control. The bill of materials, designs, basic assembly, and use instructions are provided, along with a completely free and open-source software tool chain. The open-source hardware device described here accelerates the potential of the nascent field of electrochemical detection combined with paper-based microfluidics by dropping the marginal cost of prototyping to nearly zero while accelerating the turnover between paper-based microfluidic designs. © 2016 Society for Laboratory Automation and Screening.
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPP) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgrade attract developers' interest in transferring video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods, such as lookup tables, are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
In the field of microwave radiometry, Radio Frequency Interference (RFI) consistently degrades the value of scientific results. Through the use of digital receivers and signal processing, the effects of RFI on scientific measurements can be reduced depending on certain circumstances. As technology allows us to implement wider band digital receivers for radiometry, the problem of RFI mitigation changes. Our work focuses on finding a detector that outperforms real kurtosis in wide band scenarios. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The performance of both complex and real signal kurtosis is evaluated for continuous wave, pulsed continuous wave, and wide band quadrature phase shift keying (QPSK) modulations. The use of complex signal kurtosis increased the detectability of interference.
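As an illustration of the detection statistic, the following is a minimal sketch (assuming complex baseband samples) of a complex-kurtosis RFI test: for circularly symmetric Gaussian noise E|x|⁴/(E|x|²)² = 2, so deviation from 2 flags interference. The threshold and the 0 dB continuous-wave example are illustrative, not the values used in the paper.

```python
import numpy as np

def complex_kurtosis(x):
    p2 = np.mean(np.abs(x) ** 2)
    return np.mean(np.abs(x) ** 4) / p2 ** 2

def rfi_flag(x, tol=0.1):
    # deviation from the Gaussian value of 2 indicates interference
    return abs(complex_kurtosis(x) - 2.0) > tol

rng = np.random.default_rng(0)
N = 16384
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
cw = np.exp(2j * np.pi * 0.1 * np.arange(N))      # 0 dB continuous-wave RFI
print(rfi_flag(noise), rfi_flag(noise + cw))      # -> False, True
```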
High-performance reconfigurable hardware architecture for restricted Boltzmann machines.
Ly, Daniel Le; Chow, Paul
2010-11-01
Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
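The core computation such hardware engines parallelize is the RBM's alternating Gibbs sampling. A minimal NumPy sketch of one contrastive-divergence (CD-1) weight update follows; the sizes and learning rate are illustrative, not the paper's 256 × 256 configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 16, 16, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    # up-pass: sample hidden units given the visible data
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hid) < ph0).astype(float)
    # down-pass: reconstruct visibles, then recompute hidden probabilities
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # contrastive-divergence weight gradient
    return lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

v = (rng.random(n_vis) < 0.5).astype(float)   # one binary training vector
W += cd1_update(v)
```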
Using Modern Design Tools for Digital Avionics Development
NASA Technical Reports Server (NTRS)
Hyde, David W.; Lakin, David R., II; Asquith, Thomas E.
2000-01-01
Using Modem Design Tools for Digital Avionics Development Shrinking development time and increased complexity of new avionics forces the designer to use modem tools and methods during hardware development. Engineers at the Marshall Space Flight Center have successfully upgraded their design flow and used it to develop a Mongoose V based radiation tolerant processor board for the International Space Station's Water Recovery System. The design flow, based on hardware description languages, simulation, synthesis, hardware models, and full functional software model libraries, allowed designers to fully simulate the processor board from reset, through initialization before any boards were built. The fidelity of a digital simulation is limited to the accuracy of the models used and how realistically the designer drives the circuit's inputs during simulation. By using the actual silicon during simulation, device modeling errors are reduced. Numerous design flaws were discovered early in the design phase when they could be easily fixed. The use of hardware models and actual MIPS software loaded into full functional memory models also provided checkout of the software development environment. This paper will describe the design flow used to develop the processor board and give examples of errors that were found using the tools. An overview of the processor board firmware will also be covered.
NASA Astrophysics Data System (ADS)
Ismail, K.; Muharam, A.; Amin; Widodo Budi, S.
2015-12-01
Inverters are widely used for industrial, office, and residential purposes. The inverter supports the development of alternative energy sources such as solar cells, wind turbines and fuel cells by converting dc voltage to ac voltage. Inverters have been made with a variety of hardware and software combinations, such as pure analog circuits and various types of microcontroller as controller. When using a pure analog circuit, modification is difficult because it requires changing the hardware components. In an inverter with a microcontroller-based design (with software), the calculations that generate the AC modulation are done in the microcontroller. This increases programming complexity and the amount of code downloaded to the microcontroller chip (the flash memory capacity of the microcontroller is limited). This paper discusses the design of a single-phase inverter using unipolar modulation of a sine wave and a triangular wave, with the modulation computed outside the microcontroller using a data processing application (Microsoft Excel). Results show that programming complexity is reduced and that the sampling resolution strongly influences THD; samples must be taken every 1/2 degree to obtain the best THD (15.8%).
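A minimal sketch of that offline table computation (done in a spreadsheet in the paper) might look as follows: the reference sine is sampled every 1/2 degree and converted to timer compare values the microcontroller can replay. The timer period and modulation index are assumed for illustration.

```python
import math

STEP_DEG = 0.5            # sampling resolution reported to give the best THD
TIMER_PERIOD = 1000       # PWM timer counts per carrier period (assumed)
M = 0.9                   # modulation index (assumed)

def spwm_table():
    n = int(360 / STEP_DEG)               # 720 entries per fundamental cycle
    table = []
    for i in range(n):
        ref = M * math.sin(math.radians(i * STEP_DEG))
        # unipolar modulation: each half-bridge leg gets a duty proportional
        # to +ref / -ref; store one leg's compare value, the other is mirrored
        duty = (1.0 + ref) / 2.0
        table.append(round(duty * TIMER_PERIOD))
    return table

tbl = spwm_table()
print(len(tbl), tbl[:5])   # 720 entries, first few compare values
```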
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
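A minimal sketch of the reduced-resolution DCT feature extraction idea: decimate the signal window before the transform and keep only the low-order coefficients, shrinking both the arithmetic and the table sizes. The decimation factor and feature count are assumptions, and scipy's reference DCT stands in for the paper's multiplierless dual look-up-table design.

```python
import numpy as np
from scipy.fft import dct

def dct_features(window, decimate=4, n_keep=8):
    x = window[::decimate]                  # reduced-resolution input
    c = dct(x, type=2, norm='ortho')        # DCT-II of the decimated window
    return c[:n_keep]                       # low-order coefficients as features

rng = np.random.default_rng(0)
ecog_window = rng.standard_normal(256)      # one channel, one analysis window
print(dct_features(ecog_window))
```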
An intelligent maximum permissible exposure meter for safety assessments of laser radiation
NASA Astrophysics Data System (ADS)
Corder, D. A.; Evans, D. R.; Tyrer, J. R.
1996-09-01
There is frequently a need to make laser power or energy density measurements when determining whether radiation from a laser system exceeds the Maximum Permissible Exposure (MPE) as defined in BS EN 60825. This can be achieved using standard commercially available laser power or energy measurement equipment, but some of these have shortcomings when used in this application. Calculations must be performed by the user to compare the measured value to the MPE. The measurement and calculation procedure appears complex to the nonexpert who may be performing the assessment. A novel approach is described which uses purpose designed hardware and software to simplify the process. The hardware is optimized for measuring the relatively low powers associated with MPEs. The software runs on a Psion Series 3a palmtop computer. This reduces the cost and size of the system yet allows graphical and numerical presentation of data. Data output to other software running on PCs is also possible, enabling the instrument to be used as part of a quality system. Throughout the measurement process the opportunity for user error has been minimized by the hardware and software design.
Laser Peening Effects on Friction Stir Welding
NASA Technical Reports Server (NTRS)
Hatameleh, Omar
2009-01-01
The laser peening process can result in considerable improvement in crack initiation and propagation resistance and in the mechanical properties of FSW, which equates to longer hardware service life. Processed hardware safety is improved by producing more failure-tolerant hardware and reducing risk. Lower hardware maintenance cost, longer hardware service life, and lower hardware down time result. Application of this proposed technology will result in substantial benefits and savings throughout the life of the treated components.
From linear to nonlinear control means: a practical progression.
Gao, Zhiqiang
2002-04-01
With the rapid advance of digital control hardware, it is time to take the simple but effective proportional-integral-derivative (PID) control technology to the next level of performance and robustness. For this purpose, a nonlinear PID and active disturbance rejection framework are introduced in this paper. It complements the existing theory in that (1) it actively and systematically explores the use of nonlinear control mechanisms for better performance, even for linear plants; and (2) it represents a control strategy that is largely independent of mathematical models of the plants, thus achieving inherent robustness and reducing design complexity. Stability analysis, as well as software/hardware test results, are presented. It is evident that the proposed framework lends itself well to seeking innovative solutions to practical problems while maintaining the simplicity and the intuitiveness of the existing technology.
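One nonlinear mechanism common in this literature (a sketch under assumed gains, not necessarily the paper's exact law) replaces the fixed linear gain of classical PID with a power-law error gain fal(e, α, δ): high gain for small errors, saturating gain for large ones.

```python
import math

def fal(e, alpha, delta):
    # linear near the origin (|e| <= delta) to avoid chattering, |e|^alpha beyond
    if abs(e) <= delta:
        return e / (delta ** (1.0 - alpha))
    return math.copysign(abs(e) ** alpha, e)

def nonlinear_pid(e, de, ie, kp=2.0, ki=0.5, kd=0.2):
    return (kp * fal(e, 0.5, 0.1)       # strong action on small errors
            + ki * fal(ie, 1.0, 0.1)    # alpha = 1 reduces to the linear term
            + kd * fal(de, 1.5, 0.1))   # soft action on small derivatives

print(nonlinear_pid(e=0.05, de=-0.2, ie=0.01))
```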
New technologies for supporting real-time on-board software development
NASA Astrophysics Data System (ADS)
Kerridge, D.
1995-03-01
The next generation of on-board data management systems will be significantly more complex than current designs, and will be required to perform more complex and demanding tasks in software. Improved hardware technology, in the form of the MA31750 radiation hard processor, is one key component in addressing the needs of future embedded systems. However, to complement these hardware advances, improved support for the design and implementation of real-time data management software is now needed. This will help to control the cost and risk associated with developing data management software as it becomes an increasingly significant element within embedded systems. One particular problem with developing embedded software is managing the non-functional requirements in a systematic way. This paper identifies how Logica has exploited recent developments in hard real-time theory to address this problem through the use of new hard real-time analysis and design methods which can be supported by specialized tools. The first stage in transferring this technology from the research domain to industrial application has already been completed. The MA31750 Hard Real-Time Embedded Software Support Environment (HESSE) is a loosely integrated set of hardware and software tools which directly supports the process of hard real-time analysis for software targeting the MA31750 processor. With further development, this HESSE promises to provide embedded system developers with software tools which can reduce the risks associated with developing complex hard real-time software. Supported in this way by more sophisticated software methods and tools, it is foreseen that MA31750 based embedded systems can meet the processing needs of the next generation of on-board data management systems.
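The kind of analysis such tools automate can be illustrated with classical fixed-priority response-time analysis: each task i with worst-case execution time C_i and period T_i gets a response-time bound from a fixed-point iteration over the interference of higher-priority tasks. The task set below is illustrative.

```python
import math

def response_times(tasks):
    """tasks = [(C, T), ...] in decreasing priority order; deadline = period."""
    out = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # interference from all higher-priority tasks released within R
            R_next = C_i + sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
            if R_next == R or R_next > T_i:
                break
            R = R_next
        out.append(R_next if R_next <= T_i else None)   # None = not schedulable
    return out

print(response_times([(1, 4), (2, 8), (3, 16)]))        # -> [1, 3, 7]
```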
Self-Reacting Friction Stir Welding for Aluminum Complex Curvature Applications
NASA Technical Reports Server (NTRS)
Brown, Randy J.; Martin, W.; Schneider, J.; Hartley, P. J.; Russell, Carolyn; Lawless, Kirby; Jones, Chip
2003-01-01
This viewgraph presentation provides an overview of successful research conducted by Lockheed Martin and NASA to develop an advanced self-reacting friction stir technology for complex-curvature aluminum alloys. The research included weld process development for 0.320 inch Al 2219, successful transfer from the 'lab' scale to the production scale tool, and weld quality exceeding strength goals. This process will enable development and implementation of large scale complex geometry hardware fabrication. Topics covered include: weld process development, weld process transfer, and intermediate hardware fabrication.
Hardware problems encountered in solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Cash, M.
1978-01-01
Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. Described are hardware problems, which range from simple to obscure and complex, and their resolution.
Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew
2009-01-01
Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets the current state of the art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for Space applications.
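A generic sign-sign LMS adaptive predictor illustrates the adaptive-filtering idea behind this style of low-complexity predictive compression: track each sample from its recent context and entropy-code only the residuals. This is a sketch of the approach, not the JPL 'Fast Lossless' filter itself.

```python
import numpy as np

def residuals(band, order=3, mu=0.01):
    w = np.zeros(order)                      # adaptive predictor weights
    res = np.empty_like(band, dtype=float)
    for t in range(len(band)):
        ctx = band[max(0, t - order):t][::-1]          # most recent sample first
        ctx = np.pad(ctx, (0, order - len(ctx)))
        e = band[t] - w @ ctx                # prediction residual (to be coded)
        res[t] = e
        w += mu * np.sign(e) * np.sign(ctx)  # sign-sign LMS: shift/add friendly
    return res

rng = np.random.default_rng(0)
band = np.cumsum(rng.integers(-2, 3, 512)).astype(float)   # smooth test signal
r = residuals(band)
print(np.var(band), np.var(r))    # residual variance is typically much smaller
```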
Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware
Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose
2015-01-01
Neuromorphic hardware is designed by drawing inspiration from biology to overcome the limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839
Movable Ground Based Recovery System for Reuseable Space Flight Hardware
NASA Technical Reports Server (NTRS)
Sarver, George L. (Inventor)
2013-01-01
A reusable space flight launch system is configured to eliminate complex descent and landing systems from the space flight hardware and move them to maneuverable ground based systems. Precision landing of the reusable space flight hardware is enabled using a simple, light weight aerodynamic device on board the flight hardware such as a parachute, and one or more translating ground based vehicles such as a hovercraft that include active speed, orientation and directional control. The ground based vehicle maneuvers itself into position beneath the descending flight hardware, matching its speed and direction and captures the flight hardware. The ground based vehicle will contain propulsion, command and GN&C functionality as well as space flight hardware landing cushioning and retaining hardware. The ground based vehicle propulsion system enables longitudinal and transverse maneuverability independent of its physical heading.
Engine dynamic analysis with general nonlinear finite element codes
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1991-01-01
A general engine dynamic analysis as a standard design study computational tool is described for the prediction and understanding of complex engine dynamic behavior. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs on existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial and error process done with engine hardware development.
Using a graphical programming language to write CAMAC/GPIB instrument drivers
NASA Technical Reports Server (NTRS)
Zambrana, Horacio; Johanson, William
1991-01-01
To reduce the complexities of conventional programming, graphical software was used in the development of instrumentation drivers. The graphical software provides a standard set of tools (graphical subroutines) which are sufficient to program the most sophisticated CAMAC/GPIB drivers. These tools were used and instrumentation drivers were successfully developed for operating CAMAC/GPIB hardware from two different manufacturers: LeCroy and DSP. The use of these tools is presented for programming a LeCroy A/D Waveform Analyzer.
Porting of EPICS to Real Time UNIX, and Usage Ported EPICS for FEL Automation
NASA Astrophysics Data System (ADS)
Salikova, Tatiana
This article describes the concepts and mechanisms used in porting EPICS (Experimental Physics and Industrial Control System) code to the UNIX operating system platform. Without disturbing the EPICS architecture, the new features of EPICS provide support for the real-time operating system LynxOS/x86 and equipment produced by INP (Budker Institute of Nuclear Physics). Application of the ported EPICS reduces the cost of the software and hardware used for automation of the FEL (Free Electron Laser) complex.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see, including those with and without RFI. The testing environment permits quick evaluation of RFI mitigation algorithms as well as showing that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation and experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves.
Wen, Gen; Wang, Chun-Yang; Chai, Yi-Min; Cheng, Liang; Chen, Ming; Yi-Min, L V
2013-11-01
The complex wound with exposed hardware and infection is one of the common complications after internal fixation of a tibia fracture. The salvage of hardware and reconstruction of the soft tissue defect remain challenging. In this report, we present our experience with the use of the distally based saphenous neurocutaneous perforator flap combined with vacuum-assisted closure (VAC) therapy for coverage of the soft tissue defect and the exposed hardware in the lower extremity with fracture. Between January 2008 and July 2010, seven patients underwent VAC therapy followed by transfer of a reversed saphenous neurocutaneous perforator flap for reconstruction of the wound with exposed hardware around the distal tibia. The sizes of the flaps ranged from 6 × 3 cm to 15 × 6 cm. Six flaps survived completely. Partial necrosis occurred in one patient. There were no other complications at the repair and donor sites. Bone healing was achieved in all patients. In conclusion, reversed saphenous neurocutaneous perforator flaps combined with VAC therapy might be one of the options to cover complex wounds with exposed hardware in the lower extremities. © 2013 Wiley Periodicals, Inc.
Innovative Contamination Certification of Multi-Mission Flight Hardware
NASA Technical Reports Server (NTRS)
Hansen, Patricia A.; Hughes, David W.; Montt, Kristina M.; Triolo, Jack J.
1998-01-01
Maintaining contamination certification of multi-mission flight hardware is an innovative approach to controlling mission costs. Methods for assessing ground induced degradation between missions have been employed by the Hubble Space Telescope (HST) Project for the multi-mission (servicing) hardware. By maintaining the cleanliness of the hardware between missions, and by controlling the materials added to the hardware during modification and refurbishment both project funding for contamination recertification and schedule have been significantly reduced. These methods will be discussed and HST hardware data will be presented.
Development of simulation computer complex specification
NASA Technical Reports Server (NTRS)
1973-01-01
The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included, (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.
NASA Astrophysics Data System (ADS)
Tian, Shudong; Han, Jun; Yang, Jianwei; Zeng, Xiaoyang
2017-10-01
The electrocardiogram (ECG) can be used as a valid way of diagnosing heart disease. To perform ECG processing in wearable devices while reducing computation complexity and hardware cost, two kinds of adaptive filters are designed to perform QRS complex detection and motion artifact removal, respectively. The proposed design achieves a sensitivity of 99.49% and a positive predictivity of 99.72%, tested on the MIT-BIH ECG database. The proposed design is synthesized in the SMIC 65-nm CMOS technology and verified by post-synthesis simulation. Experimental results show that the power consumption and area cost of this design are 160 μW and 1.09 × 10⁵ μm², respectively. Project supported by the National Natural Science Foundation of China (Nos. 61574040, 61234002, 61525401).
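As an illustration of the adaptive-filter approach to artifact removal, here is a minimal LMS noise-cancellation sketch: a reference channel (assumed here to be an accelerometer-like motion signal) predicts the artifact, which is subtracted from the ECG. The filter settings and signals are illustrative assumptions, not the paper's design.

```python
import numpy as np

def lms_cancel(ecg, ref, taps=8, mu=0.01):
    w = np.zeros(taps)
    clean = np.empty_like(ecg)
    for t in range(len(ecg)):
        x = ref[max(0, t - taps + 1):t + 1][::-1]   # recent reference samples
        x = np.pad(x, (0, taps - len(x)))
        e = ecg[t] - w @ x           # error = artifact-free ECG estimate
        clean[t] = e
        w += mu * e * x              # LMS weight update
    return clean

rng = np.random.default_rng(0)
t = np.arange(2000) / 360.0          # 360 Hz, as in the MIT-BIH database
motion = np.sin(2 * np.pi * 0.3 * t)                     # motion reference
ecg = rng.standard_normal(2000) * 0.05 + 2.0 * motion    # ECG stand-in + artifact
print(np.std(ecg), np.std(lms_cancel(ecg, motion)))      # artifact power drops
```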
NASA Astrophysics Data System (ADS)
Jaworski, Allan
1993-08-01
The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds that of any previous scientific data system in its breadth and its operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many gigaflops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optics communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems which have several orders of magnitude greater complexity and performance than those built in the 1960's, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capability to achieve economically the types of highly interoperable, responsive, integrated, and productive environments which are needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering. Also included are discussions of applicable standards in the areas of operating systems interfaces, user interfaces, communications interfaces, data transport, and science algorithm support, and their role in supporting the software development process.
The evolution of image-guided lumbosacral spine surgery.
Bourgeois, Austin C; Faulkner, Austin R; Pasciak, Alexander S; Bradley, Yong C
2015-04-01
Techniques and approaches of spinal fusion have considerably evolved since their first description in the early 1900s. The incorporation of pedicle screw constructs into lumbosacral spine surgery is among the most significant advances in the field, offering immediate stability and decreased rates of pseudarthrosis compared to previously described methods. However, early studies describing pedicle screw fixation and numerous studies thereafter have demonstrated clinically significant sequelae of inaccurate surgical fusion hardware placement. A number of image guidance systems have been developed to reduce morbidity from hardware malposition in increasingly complex spine surgeries. Advanced image guidance systems such as intraoperative stereotaxis improve the accuracy of pedicle screw placement using a variety of surgical approaches, however their clinical indications and clinical impact remain debated. Beginning with intraoperative fluoroscopy, this article describes the evolution of image guided lumbosacral spinal fusion, emphasizing two-dimensional (2D) and three-dimensional (3D) navigational methods.
TES: A modular systems approach to expert system development for real-time space applications
NASA Technical Reports Server (NTRS)
Cacace, Ralph; England, Brenda
1988-01-01
A major goal of the Space Station era is to reduce reliance on support from ground based experts. The development of software programs using expert systems technology is one means of reaching this goal without requiring crew members to become intimately familiar with the many complex spacecraft subsystems. Development of an expert systems program requires a validation of the software with actual flight hardware. By combining accurate hardware and software modelling techniques with a modular systems approach to expert systems development, the validation of these software programs can be successfully completed with minimum risk and effort. The TIMES Expert System (TES) is an application that monitors and evaluates real time data to perform fault detection and fault isolation tasks as they would otherwise be carried out by a knowledgeable designer. The development process and primary features of TES, a modular systems approach, and the lessons learned are discussed.
Design of transnational mobile e-payment application based on SIM card
NASA Astrophysics Data System (ADS)
Qian, Tang; Zhen, Li
2018-05-01
Facing stronger demand for transnational mobile communications and internet-based mobile wireless value-added services, the interconnection and interworking of multiple communication operators and their win-win cooperation have become a crucial target in the new round of mobile economic development. Previous research showed that mobile communications and value-added services are not only technical problems, but even more economic problems. We design a general on-card operating system based on the SIM card that is responsible for coordinating and distributing card hardware and software resources. Applications such as transnational mobile payment, consumption management and many other supplementary functions share the API interfaces and the hardware and software resources provided by the operating system, although they are independent of each other. The layered structure of the SIM card design not only greatly reduces the complexity of COS development, but also conserves scarce card resources and extends SIM card applications.
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.
2015-05-01
This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the use of complex techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed method of saliency map detection is based on both image and frequency space analysis. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
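The abstract does not fully specify the method, but a well-known frequency-domain formulation in the same spirit is spectral-residual saliency, sketched below as a stand-in; the filter sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def saliency_map(gray):
    f = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(f))             # log amplitude spectrum
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)   # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)        # post-smoothing
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # toy bright square
print(saliency_map(img).round(2).max())
```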
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, about various operating scenarios, and for identification of the scenarios with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples applying DoE to analysis and verification of the ISS power system are provided.
Safe to Fly: Certifying COTS Hardware for Spaceflight
NASA Technical Reports Server (NTRS)
Fichuk, Jessica L.
2011-01-01
Providing hardware for the astronauts to use on board the Space Shuttle or International Space Station (ISS) involves a certification process that entails evaluating hardware safety, weighing risks, providing mitigation, and verifying requirements. Upon completion of this certification process, the hardware is deemed safe to fly. This process can be completed in as little as 1 week or can take several years, depending on the complexity of the hardware and whether the item is a unique custom design. One area of cost and schedule savings that NASA implements is buying Commercial Off the Shelf (COTS) hardware and certifying it for human spaceflight as safe to fly. By utilizing commercial hardware, NASA saves time by not having to develop, design and build the hardware from scratch, as well as time in the certification process. By utilizing COTS hardware, the current detailed certification process can be simplified, which results in schedule savings. Cost savings is another important benefit of flying COTS hardware. Procuring COTS hardware for space use can be more economical than custom building the hardware. This paper will investigate the cost savings associated with certifying COTS hardware to NASA's standards rather than performing a custom build.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from some disadvantages, such as large block delay, quantization error due to the computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
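For reference, a minimal time-domain FXLMS sketch appears below; the paper's contribution is a partitioned frequency-domain realization of this same update, which keeps the adaptation equivalent while cutting multiplications for long filters. The paths, lengths, and step size here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal(32) * 0.1      # primary path (unknown in practice)
S = np.array([0.0, 0.5, 0.25])         # secondary path
S_hat = S.copy()                       # assume a good secondary-path estimate
L, mu, N = 64, 0.005, 20000
w = np.zeros(L)                        # ANC control filter
x = rng.standard_normal(N)             # reference noise
xf = np.convolve(x, S_hat)[:N]         # filtered-x signal
d = np.convolve(x, P)[:N]              # disturbance at the error sensor

err = np.zeros(N)
y_hist = np.zeros(len(S))
for n in range(N):
    xw = x[max(0, n - L + 1):n + 1][::-1]
    xw = np.pad(xw, (0, L - len(xw)))
    y = w @ xw                          # anti-noise sample
    y_hist = np.roll(y_hist, 1); y_hist[0] = y
    e = d[n] - S @ y_hist               # residual at the error sensor
    err[n] = e
    xfw = xf[max(0, n - L + 1):n + 1][::-1]
    xfw = np.pad(xfw, (0, L - len(xfw)))
    w += mu * e * xfw                   # FXLMS update

print(np.mean(err[:1000]**2), np.mean(err[-1000:]**2))   # error power drops
```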
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
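A minimal sketch of intra-field DPCM with a non-uniform quantizer conveys the core loop: each pixel is predicted from the previous reconstructed pixel on the same line, and only the quantized prediction error is transmitted. The quantizer levels below are illustrative, not the flight design's.

```python
import numpy as np

LEVELS = np.array([-48, -24, -10, -3, 0, 3, 10, 24, 48], dtype=float)

def quantize(e):
    return LEVELS[np.argmin(np.abs(LEVELS - e))]   # nearest non-uniform level

def dpcm_line(line):
    pred, out = 128.0, []
    for pix in line:
        q = quantize(pix - pred)          # transmit (entropy-code) this symbol
        pred = np.clip(pred + q, 0, 255)  # decoder-matched reconstruction
        out.append(pred)
    return np.array(out)

rng = np.random.default_rng(0)
line = np.clip(128 + np.cumsum(rng.integers(-4, 5, 64)), 0, 255).astype(float)
recon = dpcm_line(line)
print(np.max(np.abs(recon - line)))       # peak reconstruction error
```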
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. This study is directed towards monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight applications are: picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements summarized. Simulations of the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.
Hardware friendly probabilistic spiking neural network with long-term and short-term plasticity.
Hsieh, Hung-Yi; Tang, Kea-Tiong
2013-12-01
This paper proposes a probabilistic spiking neural network (PSNN) with unimodal weight distribution, possessing long- and short-term plasticity. The proposed algorithm is derived from both arithmetic gradient descent calculation and bioinspired algorithms. The algorithm is benchmarked on the Iris and Wisconsin breast cancer (WBC) data sets. The network features fast convergence speed and high accuracy. In the experiment, the PSNN took no more than 40 epochs to converge. The average testing accuracy for the Iris and WBC data is 96.7% and 97.2%, respectively. To test the usefulness of the PSNN for real-world applications, the PSNN was also tested with odor data collected by our self-developed electronic nose (e-nose). Compared with the algorithm (K-nearest neighbor) that has the highest classification accuracy in the e-nose for the same odor data, the classification accuracy of the PSNN is only 1.3% less, but the memory requirement can be reduced by at least 40%. All the experiments suggest that the PSNN is hardware friendly. First, it requires only nine-bit weight resolution for training and testing. Second, the PSNN can learn complex data sets with a small number of neurons, which in turn reduces the cost of VLSI implementation. In addition, the algorithm is insensitive to synaptic noise and to the parameter variation induced by VLSI fabrication. Therefore, the algorithm can be implemented in either software or hardware, making it suitable for wider application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparn, Bethany F; Ruth, Mark F; Krishnamurthy, Dheepak
Many have proposed that responsive load provided by distributed energy resources (DERs) and demand response (DR) are an option to provide flexibility to the grid and especially to distribution feeders. However, because responsive load involves a complex interplay between tariffs and DER and DR technologies, it is challenging to test and evaluate options without negatively impacting customers. This paper describes a hardware-in-the-loop (HIL) simulation system that has been developed to reduce the cost of evaluating the impact of advanced controllers (e.g., model predictive controllers) and technologies (e.g., responsive appliances). The HIL simulation system combines large-scale software simulation with a small set of representative building equipment hardware. It is used to perform HIL simulation of a distribution feeder and the loads on it under various tariff structures. In the reported HIL simulation, loads include many simulated air conditioners and one physical air conditioner. Independent model predictive controllers manage operations of all air conditioners under a time-of-use tariff. Results from this HIL simulation and a discussion of future development work of the system are presented.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms were implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
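The weighted local matching builds on basic block matching, which a minimal sketch makes concrete: for each pixel, choose the disparity minimizing a window-summed absolute difference (SAD). The window size and disparity range are illustrative; the paper's preprocessing and weighting are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_map(left, right, max_d=16, win=5):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf)
    for d in range(max_d):
        cost = np.abs(left - np.roll(right, d, axis=1))   # candidate shift d
        sad = uniform_filter(cost, size=win)              # window-summed cost
        better = sad < best
        disp[better] = d
        best[better] = sad[better]
    return disp

rng = np.random.default_rng(0)
right = rng.random((48, 64))
left = np.roll(right, 4, axis=1)      # ground-truth disparity of 4 everywhere
print(np.bincount(disparity_map(left, right).ravel()).argmax())   # -> 4
```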
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
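A minimal sketch of such bit-accurate pipeline simulation (written in Python here for brevity, whereas the framework itself is C++) latches each stage's output in a register, so the result for input n emerges one clock per stage later, exactly as the generated firmware would behave. The stage functions and bit widths are toy choices.

```python
MASK16 = 0xFFFF

stages = [
    lambda a, b: ((a + b) & MASK16, b),     # stage 0: 16-bit adder
    lambda s, b: ((s * 3) & MASK16, b),     # stage 1: constant multiply
    lambda p, b: ((p ^ b) & MASK16, b),     # stage 2: xor with operand b
]

def clocked_run(inputs):
    regs = [None] * len(stages)             # pipeline registers
    outputs = []
    for cycle in range(len(inputs) + len(stages)):
        # evaluate from the last stage backwards so each stage reads the
        # value latched on the previous clock edge
        for k in reversed(range(len(stages))):
            src = regs[k - 1] if k else (inputs[cycle] if cycle < len(inputs) else None)
            regs[k] = stages[k](*src) if src is not None else None
            if k == len(stages) - 1 and regs[k] is not None:
                outputs.append(regs[k][0])
    return outputs

print(clocked_run([(1, 2), (10, 20), (7, 7)]))  # results emerge after 3 clocks
```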
Semiannual Technical Summary, 1 April-30 September 1993
1993-12-01
Hardware failure: 11 Jul 2200 - 12 Jul 0531
Hardware failure: 12 Jul 0744 - 1307
Hardware service: 10 Aug 0821 - 1514
Line failure: 29 Aug 1000 - 30 Aug 1211
Line failure: 08 Sep 1518 - 09 Sep 0428
Line failure: 10 Sep 0821 - 1030
Hardware failure: 18 Sep 0817 - ... repair. Between 8 September 1306 hrs and 9 September 0428 hrs all communications systems were affected (13.5 hrs). Reduced 01B performance started 10
Open-source hardware for medical devices.
Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold
2016-04-01
Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 of open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, and a large FLASH, the system permits loading and performing complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware interfaces easily with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
Development of structural model of adaptive training complex in ergatic systems for professional use
NASA Astrophysics Data System (ADS)
Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.
2018-03-01
The article considers the structural model of an adaptive training complex (ATC), which reflects the interrelations between the hardware, software and mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction and their functioning within the common system is given. The article also gives a brief description of the mathematical models of personnel activity, the technical system and the influences, whose interactions formalize the regularities of ATC functioning. Studies of the main objects of training complexes and the connections between them will make practical implementation of ATCs in ergatic systems for professional use possible.
HEVC optimizations for medical environments
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; García, Carlos; Meyer-Baese, Uwe; Meyer-Baese, Anke
2016-05-01
HEVC/H.265 is the most interesting and cutting-edge topic in the world of digital video compression, halving the required bandwidth in comparison with the previous H.264 standard. Telemedicine services, and in general any medical video application, can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper a method for reducing HEVC complexity in the medical environment is proposed. The sequences typically processed in this context contain several homogeneous regions. By leveraging these regions, it is possible to simplify the HEVC flow while maintaining high quality. In comparison with the HM16.2 reference software, the encoding time is reduced by up to 75% with a negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
Use of heat pipes in electronic hardware
NASA Technical Reports Server (NTRS)
Graves, J. R.
1977-01-01
A modular, multiple output power converter was developed in order to reduce costs of space hardware in future missions. The converter is of reduced size and weight, and utilizes advanced heat removal techniques, in the form of heat pipes which remove internally generated heat more effectively than conventional methods.
Instrumentation complex for Langley Research Center's National Transonic Facility
NASA Technical Reports Server (NTRS)
Russell, C. H.; Bryant, C. S.
1977-01-01
The instrumentation discussed in the present paper was developed to ensure reliable operation for a 2.5-meter cryogenic high-Reynolds-number fan-driven transonic wind tunnel. It will incorporate four CPU's and associated analog and digital input/output equipment, necessary for acquiring research data, controlling the tunnel parameters, and monitoring the process conditions. Connected in a multipoint distributed network, the CPU's will support data base management and processing; research measurement data acquisition and display; process monitoring; and communication control. The design will allow essential processes to continue, in the case of major hardware failures, by switching input/output equipment to alternate CPU's and by eliminating nonessential functions. It will also permit software modularization by CPU activity and thereby reduce complexity and development time.
Open-Source 3D-Printable Optics Equipment
Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.
2013-01-01
Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated as the control for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods. PMID:23544104
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.; Roche, Rigoberto
2017-01-01
The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS-compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.04 operating system. The result of this development is a very low cost, STRS-compliant platform that can be used for waveform development for multiple applications. The purpose of this document is to describe how to develop a new waveform using the RIACS platform, the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) FPGA wrapper code, and the STRS implementation on the Axiomtek processor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Chakraborty, Sudipta; Wang, Dexin
This paper presents a cyber-physical testbed, developed to investigate the complex interactions between emerging microgrid technologies such as grid-interactive power sources, control systems, and a wide variety of communication platforms and bandwidths. The cyber-physical testbed consists of three major components for testing and validation: real-time models of a distribution feeder with microgrid assets that are integrated into the National Renewable Energy Laboratory's (NREL) power hardware-in-the-loop (PHIL) platform; real-time-capable network-simulator-in-the-loop (NSIL) models; and physical hardware including inverters and a simple system controller. Several load profiles and microgrid configurations were tested to examine the effect on system performance with increasing channel delays and router processing delays in the network simulator. Testing demonstrated that the controller's ability to maintain a target grid import power band was severely diminished with increasing network delays and laid the foundation for future testing of more complex cyber-physical systems.
SafeConnect Solar - Final Scientific/Technical Report (Updated)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNish, Zachary
2016-02-03
Final Scientific/Technical Report from a Tier 0 SunShot Incubator award for a hardware-based solution to reducing the soft costs of installed solar. The primary objective of this project was for SafeConnect Solar (“SafeConnect”) to create working proof-of-concept hardware prototypes from its proprietary intellectual property and business concepts for a plug-and-play, safety-oriented hardware solution for photovoltaic solar systems. Specifically, SafeConnect sought to build prototypes of its “SmartBox” and related cabling and connectors, as well as the firmware needed to run the hardware. This hardware is designed to ensure that a residential PV system installed with it can address all safety concerns that currently form the basis of AHJ electrical permitting and licensing requirements, thereby reducing the amount of permitting and specialized labor required on a residential PV system, and also opening up new sales channels and customer acquisition opportunities.
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-approaching property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed at reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3SD3400A device from the Spartan-3A DSP family.
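For readers unfamiliar with the min-sum approximation mentioned above, the check-node update it simplifies can be sketched in a few lines. The following is a minimal NumPy sketch of a scaled min-sum check-node update; the function name, the 0.75 scaling factor, and the message layout are our assumptions, not details from the article:

```python
import numpy as np

def min_sum_check_update(llrs, scale=0.75):
    """Scaled min-sum update for one check node: the outgoing message
    on edge i is the product of the other edges' signs times the
    minimum of the other edges' magnitudes. Assumes no exactly-zero
    input LLRs."""
    signs = np.sign(llrs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    others_min = np.where(np.arange(llrs.size) == order[0], min2, min1)
    others_sign = np.prod(signs) * signs        # sign of "all others"
    return scale * others_sign * others_min

print(min_sum_check_update(np.array([1.2, -0.4, 2.5])))  # [-0.3  0.9 -0.3]
```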
Human Systems Engineering for Launch processing at Kennedy Space Center (KSC)
NASA Technical Reports Server (NTRS)
Henderson, Gena; Stambolian, Damon B.; Stelges, Katrine
2012-01-01
Launch processing at Kennedy Space Center (KSC) is primarily accomplished by human users of expensive and specialized equipment. In order to reduce the likelihood of human error, personal injuries, damage to hardware, and loss of mission, the design process for the hardware needs to include the human's relationship with the hardware. Just as the electrical, mechanical, and fluid aspects matter, so does the human aspect. The focus of this presentation is to illustrate how KSC includes the human aspect in design using human-centered hardware modeling and engineering. The presentation also explains current and future plans for research and development to improve our human factors analysis tools and processes.
Formal hardware verification of digital circuits
NASA Technical Reports Server (NTRS)
Joyce, J.; Seger, C.-J.
1991-01-01
The use of formal methods to verify the correctness of digital circuits is less constrained by the growing complexity of digital circuits than conventional methods based on exhaustive simulation. This paper briefly outlines three main approaches to formal hardware verification: symbolic simulation, state machine analysis, and theorem-proving.
FPGA-accelerated adaptive optics wavefront control
NASA Astrophysics Data System (ADS)
Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.
2014-03-01
The speed of real-time adaptive optical systems is primarily restricted by the data-processing hardware and computational aspects. Furthermore, the application of mirror layouts with increasing numbers of actuators reduces the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcoming this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. Employing a Xilinx Kintex-7 FPGA, a custom-developed PCIe card is outlined that accelerates the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA), as well as by a novel hardware implementation of the SHWFS algorithm. Further benefits come from the Streaming SIMD Extensions (SSE), which intensively use the parallelization capability of the processor to further reduce latency and increase the bandwidth of the closed loop. Due to this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
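For context, the per-subaperture spot evaluation that such an SHWFS pipeline accelerates reduces, in its simplest form, to a center-of-mass computation per subaperture. A minimal NumPy sketch follows; the grid size and the plain center-of-mass estimator are assumptions, and the paper's FPGA spot-detection algorithm is more elaborate:

```python
import numpy as np

def subaperture_centroids(frame, grid=(8, 8)):
    """Center-of-mass spot position for each SHWFS subaperture: split
    the camera frame into a grid of windows and compute a per-window
    centroid (y, x)."""
    gy, gx = grid
    sy, sx = frame.shape[0] // gy, frame.shape[1] // gx
    ys, xs = np.mgrid[0:sy, 0:sx]
    cents = np.zeros((gy, gx, 2))
    for i in range(gy):
        for j in range(gx):
            sub = frame[i * sy:(i + 1) * sy, j * sx:(j + 1) * sx].astype(float)
            m = sub.sum() or 1.0                     # avoid divide-by-zero
            cents[i, j] = (ys * sub).sum() / m, (xs * sub).sum() / m
    return cents

frame = np.zeros((64, 64)); frame[10, 12] = 1.0      # one bright spot
print(subaperture_centroids(frame)[1, 1])            # -> [2. 4.]
```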
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
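The core operation being tabulated is the pairwise "box-plus" of two LLRs, whose exact form splits into a min() main term plus two logarithmic correction factors. A small Python sketch of merging both into one precomputed 2D table over quantized inputs follows; the quantization step and range are our assumptions:

```python
import numpy as np

def boxplus_exact(a, b):
    """Exact pairwise core operation (box-plus) of the sum-product
    algorithm: min() main term plus two logarithmic correction factors."""
    return (np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
            + np.log1p(np.exp(-np.abs(a + b)))
            - np.log1p(np.exp(-np.abs(a - b))))

# Merge main term and correction factor into a single table over
# quantized inputs, so hardware does one lookup per message pair.
step, lim = 0.25, 8.0
grid = np.arange(-lim, lim + step, step)
LUT2D = boxplus_exact(grid[:, None], grid[None, :])

def boxplus_lut(a, b):
    ia = np.clip(np.round((a + lim) / step).astype(int), 0, grid.size - 1)
    ib = np.clip(np.round((b + lim) / step).astype(int), 0, grid.size - 1)
    return LUT2D[ia, ib]

print(boxplus_lut(2.0, -3.5), boxplus_exact(2.0, -3.5))  # nearly equal
```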
Small Private Key PKS on an Embedded Microprocessor
Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon
2014-01-01
Multivariate quadratic ( ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key scheme, was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19% than previous results in CHES2012. PMID:24651722
Small private key MQPKS on an embedded microprocessor.
Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon
2014-03-19
Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme, was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19% than previous results in CHES2012.
Linear precoding based on polynomial expansion: reducing complexity in massive MIMO.
Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane
Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively "antenna-efficient" regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximizes this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
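As a rough illustration of the idea, the matrix inversion inside RZF can be replaced by a low-degree matrix polynomial evaluated with matrix-vector products only. The sketch below uses plain Neumann-series coefficients for clarity; the paper instead derives SINR-optimal coefficients, so the names, the convergence scaling, and the degree J are all assumptions:

```python
import numpy as np

def tpe_precode(H, s, J=4, delta=0.1):
    """Polynomial stand-in for RZF precoding: approximate
    (H^H H + delta*I)^{-1} H^H s with a degree-(J-1) matrix polynomial
    evaluated via matrix-vector products only (no explicit inversion)."""
    G = H.conj().T @ H + delta * np.eye(H.shape[1])
    b = H.conj().T @ s
    alpha = 2.0 / (1.01 * np.linalg.norm(G, 2))  # guarantees convergence
    x = np.zeros_like(b)
    r = b.copy()
    for _ in range(J):        # x accumulates alpha * (I - alpha*G)^k b
        x = x + alpha * r
        r = r - alpha * (G @ r)
    return x

rng = np.random.default_rng(0)
H = rng.standard_normal((64, 16)) + 1j * rng.standard_normal((64, 16))
s = rng.standard_normal(16) + 1j * rng.standard_normal(16)
exact = np.linalg.solve(H.conj().T @ H + 0.1 * np.eye(16), H.conj().T @ s)
print(np.linalg.norm(tpe_precode(H, s, J=8) - exact))  # shrinks as J grows
```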
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
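A toy numerical sketch of the code-modulation and demultiplexing idea follows, using length-8 Walsh codes, whose pairwise products are again Walsh codes; the code length, code indices, and per-chip-constant signal model are our simplifying assumptions:

```python
import numpy as np
from scipy.linalg import hadamard

# Two antenna signals are tagged with orthogonal Walsh codes, combined
# in one shared hardware path, square-law detected, and the cross term
# (the visibility) is recovered by correlating against the *product*
# code, which is itself another Walsh code.
N = 8
W = hadamard(N)                 # rows: mutually orthogonal +/-1 codes
c1, c2 = W[1], W[2]
c3 = c1 * c2                    # product of two Walsh rows = a third row

s1, s2 = 0.7, -1.3              # per-chip-constant antenna amplitudes
combined = s1 * c1 + s2 * c2    # single combined/digitized signal
detected = combined ** 2        # squaring: cross term rides on code c3

visibility = detected @ c3 / (2 * N)
print(visibility, s1 * s2)      # both ~ -0.91
```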
Reducing the PAPR in FBMC-OQAM systems with low-latency trellis-based SLM technique
NASA Astrophysics Data System (ADS)
Bulusu, S. S. Krishna Chaitanya; Shaiek, Hmaied; Roviras, Daniel
2016-12-01
Filter-bank multi-carrier (FBMC) modulations, and more specifically FBMC-offset quadrature amplitude modulation (OQAM), are seen as an interesting alternative to orthogonal frequency division multiplexing (OFDM) for the 5th generation radio access technology. In this paper, we investigate the problem of peak-to-average power ratio (PAPR) reduction for FBMC-OQAM signals. Recently, it has been shown that FBMC-OQAM with a trellis-based selected mapping (TSLM) scheme is not only superior to any scheme based on a symbol-by-symbol approach but also outperforms OFDM with the classical SLM scheme. This paper is an extension of that work, in which we analyze TSLM in terms of computational complexity, required hardware memory, and latency. We propose an improvement to TSLM that requires much less hardware memory than the originally proposed scheme and also has lower latency. Additionally, the impact of the time duration of the partial PAPR on the performance of TSLM is studied, and a lower bound on it is identified by proposing a suitable time duration. A thorough and fair performance comparison is also carried out against an existing trellis-based scheme from the literature. The simulation results show that the proposed low-latency TSLM yields better PAPR reduction performance with relatively low hardware memory requirements.
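For readers unfamiliar with SLM, the sketch below shows the underlying select-minimum-PAPR principle on a single OFDM-style block; the proposed TSLM instead runs a trellis across overlapping FBMC-OQAM symbols, so the candidate count, phase alphabet, and block structure here are illustrative assumptions only:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_select(symbols, n_candidates=8, seed=0):
    """Classic SLM: try several random phase rotations of the
    frequency-domain symbols, keep the candidate whose time-domain
    signal has the lowest PAPR."""
    rng = np.random.default_rng(seed)
    best, best_papr = None, np.inf
    for _ in range(n_candidates):
        phases = rng.choice([1, 1j, -1, -1j], size=symbols.size)
        x = np.fft.ifft(symbols * phases)
        if papr_db(x) < best_papr:
            best, best_papr = x, papr_db(x)
    return best, best_papr

rng = np.random.default_rng(1)
qpsk = (rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)) / np.sqrt(2)
print(papr_db(np.fft.ifft(qpsk)), slm_select(qpsk)[1])  # SLM lowers PAPR
```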
GPU Lossless Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled
2012-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA(Trademark). The GPU implementation on a NVIDIA(Trademark) GeForce(Trademark) GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec), an acceleration of at least 6 times over a software implementation running on a 3.47 GHz single-core Intel(Trademark) Xeon(Trademark) processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will in the future provide a fast and practical real-time solution for airborne and space applications.
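As a loose illustration of adaptive predictive lossless compression of this kind (explicitly not the actual FL/CCSDS predictor), the sketch below predicts each sample from the co-located sample in the previous spectral band with an adaptively updated weight, leaving small residuals for an entropy coder:

```python
import numpy as np

def predictive_residuals(cube, mu=0.05):
    """Toy adaptive inter-band predictor: each sample is predicted from
    the co-located sample in the previous band with a normalized-LMS
    weight; only the (small) integer residuals would be entropy-coded.
    The decoder can invert this exactly by mirroring the updates."""
    bands, n = cube.shape
    res = np.empty_like(cube)
    res[0] = cube[0]                            # first band sent raw
    w = 1.0
    for b in range(1, bands):
        for i in range(n):
            x = float(cube[b - 1, i])
            e = int(cube[b, i]) - int(round(w * x))
            res[b, i] = e
            w += mu * e * x / (1.0 + x * x)     # normalized LMS update
    return res

cube = np.cumsum(np.random.default_rng(0).integers(0, 64, (8, 200)), axis=0)
print(np.abs(predictive_residuals(cube)[1:]).mean(), cube[1:].mean())
```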
Sublimator Driven Coldplate Engineering Development Unit Test Results
NASA Technical Reports Server (NTRS)
Sheth, Rubik B.; Stephan, Ryan A.; Leimkuehler, Thomas O.
2010-01-01
The Sublimator Driven Coldplate (SDC) is a unique piece of thermal control hardware that has several advantages over a traditional thermal control scheme. The principal advantage is the possible elimination of a pumped fluid loop, potentially increasing reliability and reducing complexity while saving both mass and power. Because the SDC requires consumable feedwater, it can only be used for short mission durations. Additionally, the SDC is ideal for a vehicle with small transport distances and low heat rejection requirements. An SDC Engineering Development Unit was designed and fabricated. Performance tests were performed in a vacuum chamber to quantify and assess the performance of the SDC, and the test data were then used to develop correlated thermal math models. An Integrated Sublimator Driven Coldplate (ISDC) concept is also being developed: the ISDC couples a coolant loop with the previously described SDC hardware. This combination allows the SDC to be used as a traditional coldplate during long mission phases and provides dissimilar system redundancy.
Neighbour lists for smoothed particle hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang
2018-04-01
The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
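A compact Python sketch of the two ingredients compared in the paper, a uniform grid of cell size h followed by per-particle Verlet-list construction, is given below; this is plain Python rather than a GPU kernel, and the names are ours:

```python
import numpy as np
from collections import defaultdict

def build_verlet_lists(pos, h):
    """Uniform-grid neighbour search for a constant smoothing length h:
    hash each particle into a cell of edge h, then gather, per particle,
    all particles in the 27 surrounding cells closer than h (its Verlet
    list)."""
    cells = defaultdict(list)
    keys = [tuple(k) for k in np.floor(pos / h).astype(int)]
    for i, k in enumerate(keys):
        cells[k].append(i)
    verlet = []
    for i, (kx, ky, kz) in enumerate(keys):
        neigh = [j
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                 for j in cells.get((kx + dx, ky + dy, kz + dz), ())
                 if j != i and np.sum((pos[i] - pos[j]) ** 2) < h * h]
        verlet.append(neigh)
    return verlet

pos = np.random.default_rng(0).random((500, 3))
lists = build_verlet_lists(pos, h=0.1)
print(sum(map(len, lists)) / len(lists))   # average neighbour count
```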
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, integrating both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture reaches a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
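For background, the SAO edge-offset classification that such architectures parallelize assigns each pixel to one of four categories by comparing it with two neighbours along a chosen direction. A scalar NumPy sketch for the horizontal direction follows; the 1-D framing and the example offsets are illustrative assumptions:

```python
import numpy as np

def sao_edge_offset_horizontal(row, offsets):
    """SAO edge offset, horizontal direction: classify each pixel
    against its left/right neighbours into categories 1..4 (local min,
    concave edge, convex edge, local max; mixed signs give category 0)
    and add the encoder-chosen per-category offset."""
    a, c, b = row[:-2], row[1:-1].copy(), row[2:]
    sgn = np.sign(c - a) + np.sign(c - b)        # in {-2,-1,0,1,2}
    cat = np.select([sgn == -2, sgn == -1, sgn == 1, sgn == 2],
                    [1, 2, 3, 4], default=0)
    for k in range(1, 5):
        c[cat == k] += offsets[k - 1]
    out = row.copy()
    out[1:-1] = c
    return out

row = np.array([10, 7, 12, 12, 9, 15, 15, 11], dtype=int)
print(sao_edge_offset_horizontal(row, offsets=[2, 1, -1, -2]))
```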
Software and hardware complex for research and management of the separation process
NASA Astrophysics Data System (ADS)
Borisov, A. P.
2018-01-01
The article is devoted to the development of a program for studying the operation of an asynchronous electric drive with vector-algorithmic switching of the windings, and of a hardware-software complex for monitoring parameters and controlling the rotation speed of the drive in order to investigate the operation of a cyclone. To study the drive, a method was used in which the average value of the flux linkage is found; a method for the vector-algorithmic calculation of the power and electromagnetic torque of an asynchronous drive fed from a single-phase network with vector-algorithmic commutation was developed, together with software for calculating the parameters. The software part of the complex regulates the motor speed by vector-algorithmic switching of the transistors or, using pulse-width modulation (PWM), sets any desired speed. Sensors at the inlet and outlet of the cyclone are also connected to the hardware-software complex. The developed cyclone with the embedded complex achieves high product-separation efficiency over a range of inlet speeds; maximum efficiency is reached at an inlet air speed of 18 m/s, which requires running the asynchronous electric drive at a frequency of 45 Hz.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the log-likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large for high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, each of which defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with the hardware implementation of the demapper is achieving very high throughput in doubly iterative systems, for instance MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms in order to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best-performing low-complexity algorithm, that delivers a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs, and 2 DSP48Es.
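As a baseline for what a soft demapper computes, the sketch below implements the generic max-log LLR rule for an arbitrary constellation; the paper's selected low-complexity algorithm avoids this exhaustive search by exploiting the Gray-mapped constellation structure, so treat this as a reference point, not their method, and the 16-QAM labelling as one common choice:

```python
import numpy as np

def maxlog_llrs(y, points, labels, noise_var):
    """Generic max-log soft demapper: per bit, the LLR is the scaled
    difference between the squared distance to the nearest point with
    that bit = 1 and the nearest with that bit = 0."""
    d2 = np.abs(y - points) ** 2
    return np.array([(d2[labels[:, b] == 1].min()
                      - d2[labels[:, b] == 0].min()) / noise_var
                     for b in range(labels.shape[1])])

# Gray-mapped 16-QAM demo constellation.
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)
gray = [0b00, 0b01, 0b11, 0b10]
points = np.array([levels[i] + 1j * levels[q]
                   for i in range(4) for q in range(4)])
labels = np.array([[gray[i] >> 1 & 1, gray[i] & 1,
                    gray[q] >> 1 & 1, gray[q] & 1]
                   for i in range(4) for q in range(4)])
print(maxlog_llrs(0.9 + 0.2j, points, labels, noise_var=0.1))
```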
PyGirl: Generating Whole-System VMs from High-Level Prototypes Using PyPy
NASA Astrophysics Data System (ADS)
Bruni, Camillo; Verwaest, Toon
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
Ambulatory REACT: real-time seizure detection with a DSP microprocessor.
McEvoy, Robert P; Faul, Stephen; Marnane, William P
2010-01-01
REACT (Real-Time EEG Analysis for event deteCTion) is a Support Vector Machine based technology which, in recent years, has been successfully applied to the problem of automated seizure detection in both adults and neonates. This paper describes the implementation of REACT on a commercial DSP microprocessor, the Analog Devices Blackfin®. The primary aim of this work is to develop a prototype system for use in ambulatory or in-ward automated EEG analysis. Furthermore, the complexity of the various stages of the REACT algorithm on the Blackfin processor is analysed, in particular the EEG feature extraction stages. This hardware profile is used to select a reduced, platform-aware feature set in order to evaluate the seizure classification accuracy of a lower-complexity, lower-power REACT system.
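A minimal sketch of the platform-aware idea, cheap time-domain EEG features feeding an SVM, is shown below with synthetic data; the actual REACT feature set and training pipeline differ, so the feature choices and names here are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def eeg_features(epoch):
    """Three cheap time-domain features (line length, RMS energy, zero
    crossings) chosen as DSP-friendly stand-ins."""
    return [np.abs(np.diff(epoch)).sum(),
            float(np.sqrt(np.mean(epoch ** 2))),
            int((np.diff(np.sign(epoch)) != 0).sum())]

# Train offline on labelled epochs (synthetic here: "seizure" epochs are
# higher-amplitude); the trained SVM then runs per-epoch on the target.
rng = np.random.default_rng(0)
X = np.array([eeg_features(a * rng.standard_normal(256))
              for a in [1.0] * 50 + [4.0] * 50])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([eeg_features(4.0 * rng.standard_normal(256))]))  # -> [1]
```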
Planetary Suit Hip Bearing Model for Predicting Design vs. Performance
NASA Technical Reports Server (NTRS)
Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar
2011-01-01
Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. In order to verify that new suit designs meet requirements, full prototypes must eventually be built and tested with human subjects. Using computer models early in the design phase of new hardware development can be advantageous, allowing virtual prototyping to take place. Having easily modifiable models of the suit hard sections may reduce the time it takes to make changes to the hardware designs and then to understand their impact on suit and human performance. A virtual design environment gives designers the ability to think outside the box and exhaust design possibilities before building and testing physical prototypes with human subjects. Reductions in prototyping and testing may eventually reduce development costs. This study is an attempt to develop computer models of the hard components of the suit with known physical characteristics, supplemented with human subject performance data. Objectives: The primary objective was to develop an articulating solid model of the Mark III hip bearings to be used for evaluating suit design performance of the hip joint. Methods: Solid models of a planetary prototype (Mark III) suit's hip bearings and brief section were reverse-engineered from the prototype. The performance of the models was then compared by evaluating the mobility performance differences between the nominal hardware configuration and hardware modifications. This was accomplished by gathering data from specific suited tasks. Subjects performed maximum flexion and abduction tasks while in a nominal suit bearing configuration and in three off-nominal configurations. Performance data for the hip were recorded using state-of-the-art motion capture technology. Results: The results demonstrate that using solid models of planetary suit hard segments as a performance design tool is feasible. From a general trend perspective, the suited performance trends were comparable between the model and the suited subjects. With the three off-nominal bearing configurations compared to the nominal bearing configuration, human subjects showed decreases in hip flexion of 64%, 6%, and 13% and in hip abduction of 59%, 2%, and 20%. Likewise, the solid model showed decreases in hip flexion of 58%, 1%, and 25% and in hip abduction of 56%, 0%, and 30% under the same condition changes from the nominal configuration. Differences seen between the model predictions and the human subject performance data could be attributed to the model lacking dynamic elements and performing kinematic analysis only, the level of fit of the subjects within the suit, and the subjects' levels of suit experience.
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can resolve the competing frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation technology for a visual prosthesis, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex for hardware implementation, an improved phase soft demodulation algorithm is put forward to reduce the hardware complexity. Based on this new algorithm, an improved RS soft-decoding method is then proposed, in which the Chase algorithm is combined with hard-decision decoding. To meet the requirements of an implantable visual prosthesis, a method is derived for calculating symbol-level reliability from the product of bit reliabilities, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a model of the attenuation properties of the biological channel is added to the ECC circuit; the data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that the system can correct data demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER. A bit-error-rate analyzer was then used to measure the BER of the demodulation circuit and of the RS ECC circuit at different coil separations, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil distance. The RS ECC circuit therefore provides more reliable communication in the system. The improved phase soft demodulation and soft-decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems and provide a significant reference for further study of visual prosthesis systems.
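The symbol-level reliability rule described above can be sketched compactly: in the log domain the product of bit reliabilities becomes a sum of |LLR| values, and the Chase search is then restricted to the few least reliable symbols. The following is a minimal sketch under those assumptions; parameter names are ours:

```python
import numpy as np

def symbol_reliabilities(bit_llrs, bits_per_symbol):
    """Symbol-level reliability from bit reliabilities: the product of
    per-bit reliabilities becomes, in the log domain, a sum of |LLR|s
    over the bits of each symbol."""
    return np.abs(bit_llrs).reshape(-1, bits_per_symbol).sum(axis=1)

def chase_test_symbols(bit_llrs, bits_per_symbol, t=3):
    """Restrict Chase-style perturbation to the t least reliable
    *symbols*, shrinking the test-vector set compared with flipping the
    least reliable bits independently (t is an assumed parameter)."""
    rel = symbol_reliabilities(bit_llrs, bits_per_symbol)
    return np.argsort(rel)[:t]          # symbol indices to perturb

llrs = np.array([2.1, -0.3, 0.2, 1.7, -2.5, 3.0, -0.1, 0.4])
print(chase_test_symbols(llrs, bits_per_symbol=2, t=2))   # -> [3 1]
```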
Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification
NASA Technical Reports Server (NTRS)
Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum-likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.
Improved Skin Friction Interferometer
NASA Technical Reports Server (NTRS)
Westphal, R. V.; Bachalo, W. D.; Houser, M. H.
1986-01-01
An improved system for measuring aerodynamic skin friction which uses a dual-laser-beam oil-film interferometer was developed. Improvements in the optical hardware provided equal signal characteristics for each beam and reduced the cost and complexity of the system by replacing polarization rotation by a mirrored prism for separation of the two signals. An automated, objective, data-reduction procedure was implemented to eliminate tedious manual manipulation of the interferometry data records. The present system was intended for use in two-dimensional, incompressible flows over a smooth, level surface without pressure gradient, but the improvements discussed are not limited to this application.
Programming Language Software For Graphics Applications
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1993-01-01
New approach reduces repetitive development of features common to different applications. High-level programming language and interactive environment with access to graphical hardware and software created by adding graphical commands and other constructs to standardized, general-purpose programming language, "Scheme". Designed for use in developing other software incorporating interactive computer-graphics capabilities into application programs. Provides alternative to programming entire applications in C or FORTRAN, specifically ameliorating design and implementation of complex control and data structures typifying applications with interactive graphics. Enables experimental programming and rapid development of prototype software, and yields high-level programs serving as executable versions of software-design documentation.
EMI / EMC Design for Class D Payloads (Resource Prospector / NIRVSS)
NASA Technical Reports Server (NTRS)
Forgione, Josh; Benton, Joshua Eric; Thompson, Sarah; Colaprete, Anthony
2015-01-01
EMI/EMC techniques are applied to a Class D instrument (NIRVSS) to achieve low-noise performance and reduce the risk of EMI/EMC test failures and/or issues during system integration and test. The basic techniques are not terribly expensive or complex, but they do require close coordination between electrical and mechanical staff early in the design process. Low-cost methods to test subsystems on the bench without renting an EMI chamber are discussed. This approach was applied to the NIRVSS instrument and achieved improvements of up to 59 dB in conducted emissions measurements between hardware revisions.
Using Innovative Technologies for Manufacturing Rocket Engine Hardware
NASA Technical Reports Server (NTRS)
Betts, E. M.; Eddleman, D. E.; Reynolds, D. C.; Hardin, N. A.
2011-01-01
Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As the United States enters the next space age, where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, rapid manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for use on NASA's Space Launch System (SLS) upper stage engine, J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties, by using a layered powder-metal manufacturing process to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator (GG) discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using a workhorse gas generator (WHGG) test fixture at MSFC's East Test Area, the duct was subjected to extreme J-2X hot gas environments during 7 tests for a total of 537 seconds of hot-fire time. The duct underwent extensive post-test evaluation and showed no signs of degradation. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.
Using Innovative Techniques for Manufacturing Rocket Engine Hardware
NASA Technical Reports Server (NTRS)
Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy
2011-01-01
Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties, by using a layered powder-metal manufacturing process to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subjected to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.
FPGA Based Reconfigurable ATM Switch Test Bed
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Jones, Robert E.
1998-01-01
Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: advances in FPGAs make hardware emulation feasible for performance evaluation; hardware emulation can provide several orders of magnitude speed-up over software simulation; and, due to the complexity of the hardware synthesis process, development in emulation is much more difficult than simulation and requires knowledge of both networks and digital design.
Recycling Flight Hardware Components and Systems to Reduce Next Generation Research Costs
NASA Technical Reports Server (NTRS)
Turner, Wlat
2011-01-01
With the recent 'new direction' put forth by President Obama identifying NASA's new focus on research rather than continuing on a path to return to the Moon and Mars, the focus of work at Kennedy Space Center (KSC) may be changing dramatically. Research opportunities within the micro-gravity community potentially stand at the threshold of resurgence when the new direction of the agency takes hold for the next generation of experimenters. This presentation defines a strategy for recycling flight experiment components or part numbers in order to reduce research project costs, not just in component selection and fabrication, but in expediting qualification of hardware for flight. A key component of the strategy is effective communication of relevant flight hardware information and available flight hardware components to researchers, with the goal of 'short circuiting' the design process for flight experiments.
Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix
2017-01-01
Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989
NASA Astrophysics Data System (ADS)
Nicolaeva, B. K.; Borisov, A. P.; Zlochevskiy, V. L.
2017-08-01
The article is devoted to the development of a hardware-software complex for monitoring and controlling the process of air purification by means of a cyclone-separator. The hardware of the complex is based on the Arduino platform, to which pressure sensors, air velocity sensors, and dust meters are connected, allowing the main parameters of the cyclone-separator to be monitored. A frequency converter was also developed to regulate the rotation speed of the asynchronous motor in order to correct the flow rate; its control signals are generated by the Arduino. The software part of the complex is written as a web application in JavaScript, with CSS and HTML for the user interface. This program receives data from the sensors, plots dependencies in real time, and controls the rotation speed of the asynchronous electric drive. The conducted experiment shows that the cleaning efficiency is 95-99.9% when the airflow at the cyclone inlet is 16-18 m/s and at the exit 50-70 m/s.
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential of low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first- and second-derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points in each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well suited for hardware-efficient implementation.
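A minimal sketch of derivative-based feature extraction of this kind is shown below; the 2N-3 count matches the cost of forming the two difference signals, while the particular summary statistics returned are our illustrative choice:

```python
import numpy as np

def fsde_features(spike):
    """First- and second-derivative features of a spike waveform,
    needing no calibration. With N samples, the first difference costs
    N-1 subtractions and the second N-2, i.e. 2N-3 operations total."""
    d1 = np.diff(spike)            # first derivative  (N-1 values)
    d2 = np.diff(d1)               # second derivative (N-2 values)
    return np.array([d1.max(), d1.min(), d1.sum(),
                     d2.max(), d2.min(), d2.sum()])

spike = np.sin(np.linspace(0, 2 * np.pi, 32))   # stand-in waveform
print(fsde_features(spike))
```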
Kohlberg, Gavriel D; Mancuso, Dean M; Chari, Divya A; Lalwani, Anil K
2015-01-01
Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Compared to the original song, modified versions containing only 1-3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.
Assessment Environment for Complex Systems Software Guide
NASA Technical Reports Server (NTRS)
2013-01-01
This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise... Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This... thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods
Classified one-step high-radix signed-digit arithmetic units
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1998-08-01
High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to represent both the operands and the computation rules. This technique increases the utilization of the space-bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
Reducing the Requirements and Cost of Astronomical Telescopes
NASA Technical Reports Server (NTRS)
Smith, W. Scott; Whitakter, Ann F. (Technical Monitor)
2002-01-01
Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In historical perspective, great strides have been made in the areas of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory hold the promise of yielding more information from observational data through significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibilities of increasing the yield of scientific data through the migration of computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware, making the next generation of instruments more affordable.
FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine
NASA Astrophysics Data System (ADS)
Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo
Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the capacity that can be deployed in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52% of TCAM's, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
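FPS-RAM's internal architecture is not reproduced in this abstract, but the function any such engine implements is longest-prefix match over a routing table. A hedged software reference in Python (a binary trie; the routes and next-hop names are invented for illustration):

```python
def ip_bits(prefix):
    """'10.1.0.0/16' -> the first 16 address bits as a '0'/'1' string."""
    addr, plen = prefix.split('/')
    n = 0
    for octet in addr.split('.'):
        n = (n << 8) | int(octet)
    return format(n, '032b')[:int(plen)]

class PrefixTrie:
    def __init__(self):
        self.root = {}

    def insert(self, prefix, next_hop):
        node = self.root
        for b in ip_bits(prefix):
            node = node.setdefault(b, {})
        node['hop'] = next_hop

    def lookup(self, addr):
        """Longest-prefix match: walk the trie, remember the last next hop seen."""
        node, best = self.root, None
        for b in ip_bits(addr + '/32'):
            if 'hop' in node:
                best = node['hop']
            if b not in node:
                return best
            node = node[b]
        return node.get('hop', best)

table = PrefixTrie()
table.insert('10.0.0.0/8', 'A')
table.insert('10.1.0.0/16', 'B')
print(table.lookup('10.1.2.3'))   # 'B': the more specific route wins
```

A TCAM answers this query in one parallel compare across all entries; a RAM-based design must reach the same single-lookup latency with ordinary memory reads, which is the engineering problem the paper addresses.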
ESTL tracking and data relay satellite /TDRSS/ simulation system
NASA Technical Reports Server (NTRS)
Kapell, M. H.
1980-01-01
The Tracking Data Relay Satellite System (TDRSS) provides single access forward and return communication links with the Shuttle/Orbiter via S-band and Ku-band frequency bands. The ESTL (Electronic Systems Test Laboratory) at Lyndon B. Johnson Space Center (JSC) utilizes a TDRS satellite simulator and critical TDRS ground hardware for test operations. To accomplish Orbiter/TDRSS relay communications performance testing in the ESTL, a satellite simulator was developed which met the specification requirements of the TDRSS channels utilized by the Orbiter. Actual TDRSS ground hardware unique to the Orbiter communication interfaces was procured from individual vendors, integrated in the ESTL, and interfaced via a data bus for control and status monitoring. This paper discusses the satellite simulation hardware in terms of early development and subsequent modifications. The TDRS ground hardware configuration and the complex computer interface requirements are reviewed. Also, special test hardware such as a radio frequency interference test generator is discussed.
Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow in favor of a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce hardware development time, as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.
Environmental Conditions for Space Flight Hardware: A Survey
NASA Technical Reports Server (NTRS)
Plante, Jeannette; Lee, Brandon
2005-01-01
Interest in generalizing the physical environment experienced by NASA hardware, from the natural Earth environment (on the launch pad), the man-made environment on Earth (storage, acceptance and qualification testing), the launch environment, and the space environment, is intended to find commonality among our hardware in an effort to reduce cost and complexity. NASA is entering a period of increase in its number of planetary missions, and it is important to understand how our qualification requirements will evolve with and track these new environments. Environmental conditions are described for NASA projects in several ways for the different periods of the mission life cycle. At the beginning, the mission manager defines survivability requirements based on the mission length, orbit, launch date, launch vehicle, and other factors, such as the use of reactor engines. Margins are then applied to these values (temperature extremes, vibration extremes, radiation tolerances, etc.) and a new set of conditions is generalized for design requirements. Mission assurance documents will then assign an additional margin for reliability, and a third set of values is provided for use during testing. A fourth set of environmental condition values may evolve intermittently from heritage hardware that has been tested to a level beyond the actual mission requirement. These various sets of environment figures can make it quite confusing and difficult to capture common hardware environmental requirements. Environmental requirement information can be found in a wide variety of places. The most obvious is with the individual projects. We can easily get answers to questions about the temperature extremes being used and radiation tolerance goals, but it is more difficult to map the answers to the process that created these requirements: for design, for qualification, and for the actual environment with no margin applied. Not everyone assigned to a NASA project may have that kind of insight, as many have only the environmental requirement numbers needed to do their jobs but do not necessarily have a programmatic-level understanding of how all of the environmental requirements fit together.
An Alternative Approach to Human Servicing of Crewed Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John R.; Alpert, Brian K.
2017-01-01
As crewed spacecraft have grown larger and more complex, they have come to rely on spacewalks, or Extravehicular Activities (EVA), for mission success and crew safety. Typically, these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires volume and up-mass for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology, utilizing launch-on-need hardware and crew to provide EVA capability for space stations in Earth orbit after assembly complete, in the same way that one would call a repairman to fix something at their home. This approach would reduce ground training requirements, save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, and lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to the failure as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current unplanned EVAs are conducted, including the time required for preparation, and offer alternatives for future spacecraft. As this methodology relies on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the space station would be an important consideration in how much time was available for EVA preparation after the failure. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft.
An Alternative Approach to Human Servicing of Manned Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John; Alpert, Brian
2011-01-01
As manned spacecraft have grown larger and more complex, they have come to rely on spacewalks or Extravehicular Activities (EVA) for both mission success and crew safety. Typically these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires volume and up-mass for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology to utilize launch-on-need hardware and crew to provide EVA capability for space stations in Earth orbit after assembly complete, in the same way that most people would call a repairman to fix something at their home. This approach would not only reduce ground training requirements and save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, but would also lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to the failure as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current ISS unplanned EVAs are conducted, including the time required for preparation, and offer alternatives for future spacecraft utilizing lessons learned from ISS. As this methodology relies entirely on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the space station would be an important consideration in how much time was available for EVA preparation after the failure. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft.
Field experience with remote monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desrosiers, A.E.
1995-03-01
The Remote Monitoring System (RMS) is a combination of Merlin Gerin detection hardware, digital data communications hardware, and computer software from Bartlett Services, Inc. (BSI) that can improve the conduct of reactor plant operations in several areas. Using the RMS can reduce radiation exposures to radiation protection technicians (RPTs), reduce radiation exposures to plant maintenance and operations personnel, and reduce the time required to complete maintenance and inspections during outages. The number of temporary RPTs required during refueling outages can also be reduced. Data from use of the RMS at two power plants are presented to illustrate these points.
Lossless data compression for improving the performance of a GPU-based beamformer.
Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi
2015-04-01
The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. Data compression methods (e.g., Joint Photographic Experts Group (JPEG)) are available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables compression and decompression of data in parallel. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.
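The abstract does not specify the codec, so as a hedged illustration of the lossless principle involved, here is a minimal delta transform of the kind often applied to correlated RF samples: it is exactly invertible, and the small residuals it produces pack into fewer bits than the raw values:

```python
def delta_encode(samples):
    """Replace each sample with its difference from the previous one.
    Neighboring RF samples are correlated, so the deltas cluster near
    zero and compress well; the transform loses no information."""
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Exact inverse: running sum restores the original samples."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

rf = [1024, 1030, 1029, 1031, 1040]
assert delta_decode(delta_encode(rf)) == rf   # lossless round trip
```

Both directions process elements independently of all but one neighbor state, which is what makes blockwise parallel encoding and decoding, as the paper exploits on the GPU, straightforward.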
Demonstration of spectral calibration for stellar interferometry
NASA Technical Reports Server (NTRS)
Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun
2006-01-01
A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.
NASA Astrophysics Data System (ADS)
Sun, Wenhao; Cai, Xudong; Meng, Qiao
2016-04-01
Complex automatic protection functions are being added to the onboard software of the Alpha Magnetic Spectrometer. A hardware-in-the-loop simulation method has been introduced to overcome the difficulties of ground testing that are brought by hardware and environmental limitations. We invented a time-saving approach by reusing the flight data as the data source of the simulation system instead of mathematical models. This is easy to implement and it works efficiently. This paper presents the system framework, implementation details and some application examples.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
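As a hedged toy sketch of the idea, the Python simulator below models a computation as actors that fire whenever tokens are present on all inputs, the defining property of dataflow execution; the two-actor graph computing (a + b) * c is an invented example, not taken from the paper:

```python
from collections import deque

class Actor:
    """A dataflow actor fires when every input port holds a token; there is
    no program counter, so independent actors may fire in any order."""
    def __init__(self, name, fn, n_inputs):
        self.name, self.fn = name, fn
        self.inputs = [deque() for _ in range(n_inputs)]
        self.wires = []   # (downstream actor, port) pairs fed by this output

    def ready(self):
        return len(self.inputs) > 0 and all(self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]
        out = self.fn(*args)
        for actor, port in self.wires:
            actor.inputs[port].append(out)
        return out

# y = (a + b) * c, expressed as a two-actor graph
add = Actor('add', lambda a, b: a + b, 2)
mul = Actor('mul', lambda s, c: s * c, 2)
add.wires.append((mul, 0))

add.inputs[0].append(3)
add.inputs[1].append(4)
mul.inputs[1].append(10)

result = None
while any(n.ready() for n in (add, mul)):
    for n in (add, mul):
        if n.ready():
            result = n.fire()
print(result)   # 70
```

The same graph could describe either a software pipeline or a hardware netlist, which is the uniformity the paper argues for.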
Using SysML to model complex systems for security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, Lester Arturo
2010-08-01
As security systems integrate more Information Technology, the design of these systems has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are capturing requirements, defining hardware interfaces, defining software interfaces, and integrating technologies such as radio systems, Voice over IP systems, and situational awareness systems.
Modular System to Enable Extravehicular Activity
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam J.
2011-01-01
The ability to perform extravehicular activity (EVA), both human and robotic, has been identified as a key component of space missions, supporting such operations as assembly and maintenance of space systems (e.g., construction and maintenance of the International Space Station) and unscheduled activities to repair an element of the transportation and habitation systems that can only be accessed externally and via unpressurized areas. In order to make human transportation beyond low Earth orbit (BLEO) practical, efficiencies must be incorporated into the integrated transportation systems to reduce system mass and operational complexity. Affordability is also a key aspect to be considered in space system development; this could be achieved through commonality, modularity and component reuse. Another key aspect identified for the EVA system was the ability to produce flightworthy hardware quickly to support early missions and near-Earth technology demonstrations. This paper details a conceptual architecture for a modular extravehicular activity system (MEVAS) that would meet these stated needs for EVA capability that is affordable and could be produced relatively quickly. Operational concepts were developed to elaborate on the defined needs and define the key capabilities, operational and design constraints, and general timelines. The operational concept led to a high-level design concept for a module that interfaces with various space transportation elements and contains the hardware and systems required to support human and telerobotic EVA; the module would not be self-propelled and would rely on an interfacing element for consumable resources. The conceptual architecture was then compared to the EVA systems used on the Shuttle Orbiter and the International Space Station to develop high-level design concepts that incorporate opportunities for cost savings through hardware reuse, and quick production through the use of existing technologies and hardware designs. An upgrade option was included to make use of developing suitport technologies.
Case for Deploying Complex Systems Utilizing Commodity Components
NASA Technical Reports Server (NTRS)
Bryant, Barry S.; Pitts, R. Lee
2003-01-01
When the International Space Station (ISS) finally reached an operational state, many of the Payload Operations and Integration Facility (POIF) hardware components were reaching end of life, COTS product costs were soaring, and the ISS budget was becoming severely constrained. However, most requirement development was complete. In addition, the ISS program is a fully functioning program with at least fifteen years of operational life remaining. Therefore it is critical that any upgrades, refurbishments, or enhancements be accomplished in real time with minimal disruptions to service. For these and other reasons, it was necessary to ensure the viability of the POIF. Due to the breadth of capability of the POIF (a NASA ground station), it is believed that the lessons learned are relevant to other complex systems, and that any solutions garnered by the POIF are applicable to other complex systems as well. With that in mind, a number of new approaches have been investigated to increase the portability of the POIF and reduce the cost of refurbishment, operations, and maintenance. These new approaches were directed at the Total Cost of Ownership (TCO): not only the refurbishment but also current operational difficulties, licensing, and anticipation of the next refurbishment. Our basic premise is that technology has evolved dramatically since the POIF ground system was first conceived and that we should leverage our experience on this new technological landscape. Fortunately, Moore's law and market forces have changed the landscape considerably. These changes are manifest in five ways that are particularly relevant to the POIF: (1) Complex Instruction Set Computing (CISC) processors have advanced to unprecedented levels of compute capacity with a dramatic cost break; (2) Linux has become a major operating system supported by most vendors on a broad range of platforms; (3) Windows-based desktops are pervasive in the office environment; (4) stable and affordable Windows development environments and tools are available and offer a rich set of capabilities; and (5) Windows 2000 provides a stable client platform. Therefore, five studies were proposed, developed, and are in the process of deployment, which dramatically reduce the cost of operations, maintenance, refurbishment, and deployment of a ground system. Restating and refining the basic premise stated earlier, it is possible to enhance operations through the replacement of hardware and software components with commodity-based items wherever applicable. This will dramatically reduce the overall lifecycle cost of the project. The first study leveraged the POIF's secure, three-tier web architecture to replace the client workstations with lower cost PC platforms. A second study initiated a review of COTS products to examine the level of added value of each product. This study included replacement of some COTS products with custom code, deletions, substitutions, and consolidation of COTS products. Studies three and four reviewed the server architectures of the data distribution systems and the Enhanced HOSC System (EHS) command and telemetry system to propose migration to new platforms, both software and hardware. The final study reviewed current IP communication technologies, developed an operational model for flight operations, and demonstrated that voice over IP was practical and could be integrated into operations.
Hardware architecture design of image restoration based on time-frequency domain computation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. Firstly, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Eventually, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The result proves that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability and high efficiency.
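To make the computation pattern concrete, here is a hedged NumPy sketch of one classic frequency-domain restoration step, Wiener deconvolution (named plainly; not necessarily the algorithm targeted by the paper). It exercises exactly the modules the architecture singles out: a forward 2-D FFT, per-pixel complex-number arithmetic, and an inverse FFT. The regularization constant k is an assumed parameter:

```python
import numpy as np

def wiener_restore(blurred, psf, k=0.01):
    """Frequency-domain image restoration: FFT -> per-pixel complex
    arithmetic -> inverse FFT, the pipeline a TFDC design accelerates."""
    H = np.fft.fft2(psf, s=blurred.shape)     # blur transfer function
    G = np.fft.fft2(blurred)                  # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + k)     # Wiener filter, k = assumed NSR
    return np.real(np.fft.ifft2(W * G))       # restored image
```

Every pixel's complex multiply/divide is independent, so the middle step maps naturally onto parallel hardware, while the FFT/IFFT blocks dominate cost, which is why the paper optimizes them separately.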
System architectures for telerobotic research
NASA Technical Reports Server (NTRS)
Harrison, F. Wallace
1989-01-01
Several activities are performed related to the definition and creation of telerobotic systems. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities which are designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
The high technology content and complexity of medical equipment, together with its safety and effectiveness requirements, place high demands on medical equipment maintenance work. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method and the exclusion method, three important methods in medical equipment maintenance; by using these three methods, hardware breakdown maintenance can be done easily for instruments that have circuit drawings. The paper also introduces the processing methods for some special fault conditions, in order to avoid detours when the same problems are encountered. Learning is very important for staff newly engaged in this area.
Read buffer optimizations to support compiler-assisted multiple instruction retry
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.
1993-01-01
Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient, with the best choice differing depending on whether split-cycle-saves are assumed. Up to a 55 percent read buffer size reduction is achievable, with an average reduction of 39 percent, given the most efficient read buffer configuration and a variety of applications.
Blade Vibration Measurement System
NASA Technical Reports Server (NTRS)
Platt, Michael J.
2014-01-01
The Phase I project successfully demonstrated that an advanced noncontacting stress measurement system (NSMS) could improve classification of blade vibration response in terms of mistuning and closely spaced modes. The Phase II work confirmed the microwave sensor design process, modified the sensor so it is compatible as an upgrade to existing NSMS, and improved and finalized the NSMS software. The result will be stand-alone radar/tip-timing signal conditioning for current conventional NSMS users (as an upgrade) and new users. The hybrid system will use frequency data and relative mode vibration levels from the radar sensor to provide substantially superior capabilities over current blade-vibration measurement technology. This frequency data, coupled with a reduced number of tip timing probes, will result in a system capable of detecting complex blade vibrations that would confound traditional NSMS systems. The hardware and software package was validated on a compressor rig at Mechanical Solutions, Inc. (MSI). Finally, the hybrid radar/tip timing NSMS software package and associated sensor hardware will be installed for use in the NASA Glenn spin pit test facility.
Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.
2016-01-01
Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.
NASA Information Technology Implementation Plan
NASA Technical Reports Server (NTRS)
2000-01-01
NASA's Information Technology (IT) resources and IT support continue to be a growing and integral part of all NASA missions. Furthermore, the growing IT support requirements are becoming more complex and diverse. The following are a few examples of the growing complexity and diversity of NASA's IT environment. NASA is conducting basic IT research in the Intelligent Synthesis Environment (ISE) and Intelligent Systems (IS) Initiatives. IT security, infrastructure protection, and privacy of data are requiring more and more management attention and an increasing share of the NASA IT budget. Outsourcing of IT support is becoming a key element of NASA's IT strategy as exemplified by Outsourcing Desktop Initiative for NASA (ODIN) and the outsourcing of NASA Integrated Services Network (NISN) support. Finally, technology refresh is helping to provide improved support at lower cost. Recently the NASA Automated Data Processing (ADP) Consolidation Center (NACC) upgraded its bipolar technology computer systems with Complementary Metal Oxide Semiconductor (CMOS) technology systems. This NACC upgrade substantially reduced the hardware maintenance and software licensing costs, significantly increased system speed and capacity, and reduced customer processing costs by 11 percent.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
Resolution-independent surface rendering using programmable graphics hardware
Loop, Charles T.; Blinn, James Frederick
2008-12-16
Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
The complex electronic and avionics systems of today's launch vehicles heavily utilize Field Programmable Gate Array (FPGA) integrated circuits (ICs) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid valve actuations. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
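Combining the three contributions reduces to summing per-class failure rates, the standard series assumption in reliability engineering. A hedged arithmetic sketch; all three rates below are invented for illustration, as the abstract supplies no figures:

```python
# Hypothetical contributions in FIT (failures per 10^9 device-hours).
lambda_hw  = 20.0   # assumed physical (die/package) failure rate
lambda_hdl = 5.0    # assumed residual HDL/design-error rate after growth
lambda_rad = 35.0   # assumed radiation-induced (SEU/SEFI) rate in flight

# Independent failure classes in series simply add.
lambda_total = lambda_hw + lambda_hdl + lambda_rad   # 60 FIT
mtbf_hours = 1e9 / lambda_total                      # ~1.7e7 hours
print(f"{lambda_total:.0f} FIT -> MTBF {mtbf_hours:.2e} h")
```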
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Bradish, Martin A.
2011-01-01
The role of synthetic instruments (SIs) for Component-Level Electronic-Assembly Repair (CLEAR) is to provide an external lower-level diagnostic and functional test capability beyond the built-in-test capabilities of spacecraft electronics. Built-in diagnostics can report faults and symptoms, but isolating the root cause and performing corrective action requires specialized instruments. Often a fault can be revealed by emulating the operation of external hardware. This implies complex hardware that is too massive to be accommodated in spacecraft. The SI strategy is aimed at minimizing complexity and mass by employing highly reconfigurable instruments that perform diagnostics and emulate external functions. In effect, SI can synthesize an instrument on demand. The SI architecture section of this document summarizes the result of a recent program diagnostic and test needs assessment based on the International Space Station. The SI architecture addresses operational issues such as minimizing crew time and crew skill level, and the SI data transactions between the crew and supporting ground engineering searching for the root cause and formulating corrective actions. SI technology is described within a teleoperations framework. The remaining sections describe a lab demonstration intended to show that a single SI circuit could synthesize an instrument in hardware and subsequently clear the hardware and synthesize a completely different instrument on demand. An analysis of the capabilities and limitations of commercially available SI hardware and programming tools is included. Future work in SI technology is also described.
NASA ERA Integrated CFD for Wind Tunnel Testing of Hybrid Wing-Body Configuration
NASA Technical Reports Server (NTRS)
Garcia, Joseph A.; Melton, John E.; Schuh, Michael; James, Kevin D.; Long, Kurtis R.; Vicroy, Dan D.; Deere, Karen A.; Luckring, James M.; Carter, Melissa B.; Flamm, Jeffrey D.;
2016-01-01
The NASA Environmentally Responsible Aviation (ERA) Project explored enabling technologies to reduce the impact of aviation on the environment. One project research challenge area was the study of advanced airframe and engine integration concepts to reduce community noise and fuel burn. To address this challenge, complex wind tunnel experiments at both the NASA Langley Research Center's (LaRC) 14'x22' and the Ames Research Center's 40'x80' low-speed wind tunnel facilities were conducted on a Boeing Hybrid Wing Body (HWB) configuration. These wind tunnel tests entailed various entries to evaluate the propulsion-airframe interference effects, including aerodynamic performance and aeroacoustics. In order to assist these tests in producing high quality data with minimal hardware interference, extensive Computational Fluid Dynamic (CFD) simulations were performed for everything from sting design and placement for both the wing body and powered ejector nacelle systems to the placement of aeroacoustic arrays to minimize their impact on vehicle aerodynamics. This paper presents a high-level summary of the CFD simulations that NASA performed in support of the model integration hardware design as well as the development of some CFD simulation guidelines based on post-test aerodynamic data. In addition, the paper includes details on how multiple CFD codes (OVERFLOW, STAR-CCM+, USM3D, and FUN3D) were efficiently used to provide timely insight into the wind tunnel experimental setup and execution.
NASA ERA Integrated CFD for Wind Tunnel Testing of Hybrid Wing-Body Configuration
NASA Technical Reports Server (NTRS)
Garcia, Joseph A.; Melton, John E.; Schuh, Michael; James, Kevin D.; Long, Kurt R.; Vicroy, Dan D.; Deere, Karen A.; Luckring, James M.; Carter, Melissa B.; Flamm, Jeffrey D.;
2016-01-01
NASA's Environmentally Responsible Aviation (ERA) Project explores enabling technologies to reduce aviation's impact on the environment. One research challenge area for the project has been to study advanced airframe and engine integration concepts to reduce community noise and fuel burn. In order to achieve this, complex wind tunnel experiments at both the NASA Langley Research Center's (LaRC) 14'x22' and the Ames Research Center's 40'x80' low-speed wind tunnel facilities were conducted on a Boeing Hybrid Wing Body (HWB) configuration. These wind tunnel tests entailed various entries to evaluate the propulsion-airframe interference effects, including aerodynamic performance and aeroacoustics. In order to assist these tests in producing high quality data with minimal hardware interference, extensive Computational Fluid Dynamic (CFD) simulations were performed for everything from sting design and placement for both the wing body and powered ejector nacelle systems to the placement of aeroacoustic arrays to minimize their impact on the vehicle's aerodynamics. This paper will provide a high-level summary of the CFD simulations that NASA performed in support of the model integration hardware design as well as some simulation guideline development based on post-test aerodynamic data. In addition, the paper includes details on how multiple CFD codes (OVERFLOW, STAR-CCM+, USM3D, and FUN3D) were efficiently used to provide timely insight into the wind tunnel experimental setup and execution.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the price of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions might appear as a candidate technology, since although power use is higher compared with lower power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
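The energy argument is just power multiplied by time. A back-of-the-envelope sketch in Python; all four numbers are invented for illustration, not measurements from the paper:

```python
# Hypothetical figures: an FPGA node may draw more power than an MCU yet
# finish so much sooner that the energy per task drops.
mcu_power_mw,  mcu_time_s  = 30.0, 2.0    # assumed low-power microcontroller
fpga_power_mw, fpga_time_s = 120.0, 0.2   # assumed FPGA accelerator

mcu_energy_mj  = mcu_power_mw * mcu_time_s     # 60 mJ per task
fpga_energy_mj = fpga_power_mw * fpga_time_s   # 24 mJ per task
print(fpga_energy_mj < mcu_energy_mj)          # True: higher power, lower energy
```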
Using SRAM based FPGAs for power-aware high performance wireless sensor networks.
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the price of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions might appear as a candidate technology, since although power use is higher compared with lower power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements.
Space operations and the human factor
NASA Astrophysics Data System (ADS)
Brody, Adam R.
1993-10-01
Although space flight does not put the public at high risk, billions of dollars in hardware are destroyed and the space program halted when an accident occurs. Researchers are therefore applying human-factors techniques similar to those used in the aircraft industry, albeit at a greatly reduced level, to the spacecraft environment. The intent is to reduce the likelihood of catastrophic failure. To increase safety and efficiency, space human factors researchers have simulated spacecraft docking and extravehicular activity rescue. Engineers have also studied EVA suit mobility and aids. Other basic human-factors issues that have been applied to the space environment include anthropometry, biomechanics, and ergonomics. Workstation design, workload, and task analysis currently receive much attention, as do habitability and other aspects of confined environments. Much work also focuses on individual payloads, as each presents its own complexities.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.
Revathy, M; Saravanan, R
2015-01-01
Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, and can be incorporated between the check node and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization when compared with different conventional architectures.
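The abstract does not spell out the node equations, so as a hedged illustration here is the standard min-sum check-node update that check node units of this kind typically implement in hardware (chosen because it replaces costly hyperbolic-tangent arithmetic with compares, which is what enables low-power logic):

```python
def check_node_update(llrs):
    """Min-sum check-node update: the extrinsic message on each edge is the
    product of the signs and the minimum magnitude of all *other* inputs."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out

print(check_node_update([2.0, -1.0, 3.0]))   # [-1.0, 2.0, -1.0]
```

In hardware, the per-edge minimum is shared by tracking only the two smallest magnitudes across all inputs, so the unit reduces to comparators and sign logic.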
Tri-state delta modulation system for Space Shuttle digital TV downlink
NASA Technical Reports Server (NTRS)
Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.
1981-01-01
Future requirements for Shuttle Orbiter downlink communication may include transmission of digital video which, in addition to black and white, may also be in either field-sequential or NTSC color format. The use of digitized video could provide picture privacy at the expense of additional onboard hardware, together with an increased bandwidth due to the digitization process. A general objective for the Space Shuttle application is to develop a digitization technique that is compatible with data rates in the 20-30 Mbps range but still provides good quality pictures. This paper describes a tri-state delta modulation/demodulation (TSDM) technique which is a good compromise between implementation complexity and performance. The unique feature of TSDM is that it provides for efficient run-length encoding of constant-intensity segments of a TV picture. Axiomatix has developed a hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV and field-sequential color. The hardware complexity of this TSDM implementation is summarized in the paper.
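The run-length payoff comes from the third state: over constant-intensity picture segments the estimate tracks the input and the modulator emits long runs of the zero symbol. A hedged functional sketch (the Axiomatix hardware details are not given here; the step and deadband values are assumptions):

```python
def tsdm_encode(samples, step=4, deadband=2):
    """Tri-state delta modulation: emit +1 or -1 when the input pulls away
    from the running estimate by more than the deadband, else emit 0."""
    est, symbols = 0, []
    for s in samples:
        err = s - est
        if err > deadband:
            symbols.append(+1)
            est += step
        elif err < -deadband:
            symbols.append(-1)
            est -= step
        else:
            symbols.append(0)
    return symbols

def run_length(symbols):
    """Collapse symbol runs into (symbol, count) pairs for transmission."""
    runs, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = s, 1
    runs.append((prev, count))
    return runs

line = [10] * 8 + [40] * 12           # two constant-intensity segments
print(run_length(tsdm_encode(line)))  # zeros dominate, so runs are long
```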
Testing Microgravity Flight Hardware Concepts on the NASA KC-135
NASA Technical Reports Server (NTRS)
Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.
2001-01-01
This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test-rig used to test hardware concepts designed to meet the requirements are described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.
NASA Astrophysics Data System (ADS)
Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang
This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built by instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. With a design space exploration for this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit improve the calculation of the key section of the algorithm, namely the G-functions. Also, relaxing the time constraint of the special function unit decreases its hardware cost while keeping the high data throughput of the processor. Evaluation results reveal that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
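For reference, the BLAKE-256 G-function that such special instructions accelerate is a short sequence of 32-bit additions, XORs, and fixed rotations (16, 12, 8, 7). A hedged Python sketch; to keep it self-contained, the message-word/constant XOR terms are passed in precomputed as mx and my, and the round permutation tables are omitted:

```python
MASK = 0xFFFFFFFF

def rotr32(x, n):
    """32-bit rotate right."""
    return ((x >> n) | (x << (32 - n))) & MASK

def blake256_g(a, b, c, d, mx, my):
    """One BLAKE-256 G-function step on four state words.
    mx, my stand for m[s(2i)] ^ K[s(2i+1)] and m[s(2i+1)] ^ K[s(2i)],
    precomputed here so the permutation/constant tables can be omitted."""
    a = (a + b + mx) & MASK
    d = rotr32(d ^ a, 16)
    c = (c + d) & MASK
    b = rotr32(b ^ c, 12)
    a = (a + b + my) & MASK
    d = rotr32(d ^ a, 8)
    c = (c + d) & MASK
    b = rotr32(b ^ c, 7)
    return a, b, c, d
```

Each step is a chain of add/xor/rotate on four registers, which is why a fused custom instruction with a matched function unit pays off so directly in cycles per byte.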
Kohlberg, Gavriel D.; Mancuso, Dean M.; Chari, Divya A.; Lalwani, Anil K.
2015-01-01
Objective. Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Methods. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Results. Compared to the original song, modified versions containing only 1–3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Conclusions. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience. PMID:26543322
Evolutionary Development of the Simulation by Logical Modeling System (SIBYL)
NASA Technical Reports Server (NTRS)
Wu, Helen
1995-01-01
Through the evolutionary development of the Simulation by Logical Modeling System (SIBYL), we have re-engineered the expensive and complex IBM mainframe-based Long-term Hardware Projection Model (LHPM) into a robust, cost-effective computer-based model that is easy to use. We achieved significant cost reductions and improved productivity in preparing long-term forecasts of Space Shuttle Main Engine (SSME) hardware. The LHPM for the SSME is a stochastic simulation model that projects the hardware requirements over 10 years. SIBYL is now the primary modeling tool for developing SSME logistics proposals and the Program Operating Plan (POP) for NASA, and for divisional marketing studies.
NASA Astrophysics Data System (ADS)
Lasher, Mark E.; Henderson, Thomas B.; Drake, Barry L.; Bocker, Richard P.
1986-09-01
The modified signed-digit (MSD) number representation offers fully parallel, carry-free addition. An MSD adder has been described by the authors. This paper describes how the adder can be used in a tree structure to implement an optical multiply algorithm. Three different optical schemes, involving position, polarization, and intensity encoding, are proposed for realizing the trinary logic system. When configured in the generic multiplier architecture, these schemes yield the combinatorial logic necessary to carry out the multiplication algorithm. The optical systems are essentially three-dimensional arrangements composed of modular units. This modularity is important for design considerations, while the parallelism and noninterfering communication channels of optical systems are important from the standpoint of reduced complexity. The authors have also designed electronic hardware to demonstrate and model the combinatorial logic required to carry out the algorithm. The electronic and proposed optical systems will be compared in terms of complexity and speed.
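The carry-free property is what lets an adder tree evaluate all digit positions, and hence all partial-product additions of a multiplier, in parallel. A hedged Python sketch of one standard MSD addition rule set (radix 2, digits -1/0/1; the paper's optical encodings are a different physical realization of the same combinatorial logic):

```python
def msd_add(x, y):
    """Carry-free MSD addition, least-significant digit first.

    Step 1 forms a transfer t and interim sum w per position, consulting
    only the neighbor at i-1, so every position is computed in parallel.
    Step 2's digit-wise sum w[i] + t[i] is guaranteed to stay in {-1,0,1}.
    """
    n = max(len(x), len(y)) + 1
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    t = [0] * (n + 1)              # transfer digits; t[i+1] produced at i
    w = [0] * n                    # interim sums
    for i in range(n):
        pair = x[i] + y[i]
        lower = x[i - 1] + y[i - 1] if i > 0 else 0
        if pair == 2:
            t[i + 1], w[i] = 1, 0
        elif pair == 1:
            t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif pair == -1:
            t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        elif pair == -2:
            t[i + 1], w[i] = -1, 0
    return [w[i] + t[i] for i in range(n)] + [t[n]]

def msd_to_int(digits):
    """Interpret an MSD digit list as an integer (for checking results)."""
    return sum(v << i for i, v in enumerate(digits))

a, b = [1, -1, 1], [1, 0, 1]       # 3 and 5 in MSD form
print(msd_to_int(msd_add(a, b)))   # 8
```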
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.
International space station wire program
NASA Technical Reports Server (NTRS)
May, Todd
1995-01-01
Hardware provider wire systems and current wire insulation issues for the International Space Station (ISS) program are discussed in this viewgraph presentation. Wire insulation issues include silicone wire contamination, Tefzel cold temperature flexibility, and Russian polyimide wire insulation. ISS is a complex program with hardware developed and managed by many countries and hundreds of contractors. Most of the obvious wire insulation issues are known by contractors and have been precluded by proper selection.
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
NASA Laser Light Scattering Advanced Technology Development Workshop, 1988
NASA Technical Reports Server (NTRS)
Meyer, William V. (Editor)
1989-01-01
The major objective of the workshop was to explore the capabilities of existing and prospective laser light scattering hardware and to assess user requirements and needs for a laser light scattering instrument in a reduced gravity environment. The workshop addressed experimental needs and stressed hardware development.
Hardware cloth seed-spot screens reduce high surface soil temperature
H.A. Fowells; R.K. Arnold
1939-01-01
Screens made of hardware cloth are commonly used to protect seed-spots in forest plantations from the depredations of birds and rodents. It is important to know whether or not the screens influence germination and survival of seedlings in addition to furnishing protection against animals.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Field programmable gate array (FPGA) integrated circuits (ICs) are among the key electronic components in today's sophisticated launch and space vehicle avionics systems, largely due to their superb reprogrammability and reconfigurability combined with relatively low non-recurring engineering (NRE) costs and short design cycles. Consequently, FPGAs are prevalent ICs for implementing communication protocols and control signal commands. This paper identifies reliability concerns and high-level guidelines for estimating FPGA total failure rates in a launch vehicle application. The paper discusses hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion discusses the high-level FPGA programming languages and software/code reliability growth. The radiation portion discusses FPGA susceptibility to space environment radiation.
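As a rough illustration of how the three contributions named above could be aggregated, consider the sketch below. The additive decomposition and all numbers are our assumptions for illustration only, not the paper's actual estimation method.

```python
# Illustrative-only sketch: treat the total FPGA failure rate as the sum of
# independent contributions from physical hardware, design (HDL) faults, and
# radiation-induced upsets. All figures below are made up.

def fpga_failure_rate(lambda_physical, lambda_hdl, seu_cross_section, flux):
    """Rates in failures/hour; cross-section in cm^2; flux in particles/cm^2/hr."""
    lambda_radiation = seu_cross_section * flux      # single-event upset rate
    return lambda_physical + lambda_hdl + lambda_radiation

# Example with placeholder numbers:
print(fpga_failure_rate(1e-7, 5e-8, 1e-9, 1e4))      # radiation dominates here
```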
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis on, and benefits of, distributed generation in smaller-scale networks has prompted much attention and research in this field. The growth of research in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, the technique is taken one step further by using hardware-in-the-loop to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. The test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system, used to perform various tests on controls and stability under the expected non-linear load environment of Navy weaponry. The test bed can also support other distributed power system research topics and serves as a flexible hardware unit for a variety of tests; here it is used to perform and validate the newly developed method of generator system emulation. The dynamics of a high-speed permanent magnet generator directly coupled with a micro turbine are simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is connected to the rest of the test bed, which can take on a variety of distributed system topologies for many testing scenarios. In this way, the distributed power system under test in hardware can integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and enable much more detailed system studies without the drawbacks of needing physical generator units; among these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and steps necessary to achieve it, and demonstrates real results with verification of numerical values in real time. The final goal is to show that this new approach is attainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments
NASA Technical Reports Server (NTRS)
Budney, T. J.; Stone, R. W.
1982-01-01
Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.
Comparison of ZigBee Replay Attacks Using a Universal Software Radio Peripheral and USB Radio
2014-03-27
authentication code (CBC-MAC) CPU central processing unit CUT component under test dB decibel dBm decibel referenced to one milliwatt FFD full-function ...categorized into two different types: full-function devices (FFDs) and reduced-function devices (RFDs). The difference between an FFD and an RFD is that...KillerBee Hardware. Although KillerBee can be used with any hardware that can interact with 802.15.4 networks, the primary development hardware is the
1998-01-01
This report is the 11th in a series of reports comparing the Department of Defense's (DOD) logistics practices with those of the private sector. We...leading private sector practices. This report focuses on DOD's progress in adopting best inventory management practices for hardware items such as bearings...valves, and bolts. The objectives of this review were to determine (1) DOD and private sector practices for managing hardware items, (2) whether DOD
NASA Technical Reports Server (NTRS)
Jedrziewski, S.
1976-01-01
The emission problem and source points were defined, and new materials, hardware, and operational procedures were developed to address the trends identified in the collected data. Programs to reduce the emission output of aircraft powerplants are listed. Several of the programs investigated include continued establishment of baseline emissions for various engine models, continued characterization of the effect of production tolerances on emissions, carbureted engine development and flight tests, and cylinder cooling/fin design programs.
A novel low-complexity digital filter design for wearable ECG devices
Asgari, Shadnaz; Mehrnia, Alireza
2017-01-01
Wearable and implantable Electrocardiograph (ECG) devices are becoming prevailing tools for continuous real-time personal health monitoring. The ECG signal can be contaminated by various types of noise and artifacts (e.g., powerline interference, baseline wandering) that must be removed or suppressed for accurate ECG signal processing. Limited device size, power consumption and cost are critical issues that need to be carefully considered when designing any portable health monitoring device, including a battery-powered ECG device. This work presents a novel low-complexity noise suppression reconfigurable finite impulse response (FIR) filter structure for wearable ECG and heart monitoring devices. The design relies on a recently introduced optimally-factored FIR filter method. The new filter structure and several of its useful features are presented in detail. We also studied the hardware complexity of the proposed structure and compared it with the state-of-the-art. The results showed that the new ECG filter has a lower hardware complexity relative to the state-of-the-art ECG filters. PMID:28384272
Developing an Integration Infrastructure for Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan
2014-01-01
Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.
Application of multirate digital filter banks to wideband all-digital phase-locked loops design
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Shah, Biren; Hinedi, Sami
1993-01-01
A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
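The core multirate trick the PDPLL builds on can be shown in a few lines. The sketch below is our generic polyphase-decimation illustration, not the article's loop architecture: each of the M branches runs at 1/M of the input rate yet together they reproduce the full-rate FIR result.

```python
import numpy as np

def decimating_fir_direct(x, h, M):
    """Reference: full-rate FIR, then keep every M-th output sample."""
    return np.convolve(x, h)[::M]

def decimating_fir_polyphase(x, h, M):
    """Same output, computed by M branches each clocked at 1/M of the input rate."""
    n_out = (len(x) + len(h) - 1 + M - 1) // M       # matches the reference length
    h = np.pad(h, (0, (-len(h)) % M))                # pad h to a multiple of M
    y = np.zeros(n_out)
    for k in range(M):
        hk = h[k::M]                                 # k-th polyphase branch of h
        xk = np.concatenate([np.zeros(k), x])[::M]   # x_k[m] = x[m*M - k]
        yk = np.convolve(xk, hk)[:n_out]
        y[:len(yk)] += yk
    return y

x = np.random.default_rng(1).standard_normal(50)
h = np.array([0.25, 0.5, 0.25, 0.1, 0.05])
assert np.allclose(decimating_fir_direct(x, h, 4), decimating_fir_polyphase(x, h, 4))
```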
Platform options for the Space Station program
NASA Technical Reports Server (NTRS)
Mangano, M. J.; Rowley, R. W.
1986-01-01
Platforms for polar and 28.5 deg orbits were studied to determine the platform requirements and characteristics necessary to support the science objectives. Large platforms supporting the Earth-Observing System (EOS) were initially studied. Co-orbiting platforms were derived from these designs. Because cost estimates indicated that the large platform approach was likely to be too expensive, require several launches, and generally be excessively complex, studies of small platforms were undertaken. Results of these studies show the small platform approach to be technically feasible at lower overall cost. All designs maximized hardware inheritance from the Space Station program to reduce costs. Science objectives as defined at the time of these studies are largely achievable.
High-speed wavelength-division multiplexing quantum key distribution system.
Yoshino, Ken-ichiro; Fujiwara, Mikio; Tanaka, Akihiro; Takahashi, Seigo; Nambu, Yoshihiro; Tomita, Akihisa; Miki, Shigehito; Yamashita, Taro; Wang, Zhen; Sasaki, Masahide; Tajima, Akio
2012-01-15
A high-speed quantum key distribution system was developed with the wavelength-division multiplexing (WDM) technique and dedicated key distillation hardware engines. Two interferometers for encoding and decoding are shared over eight wavelengths to reduce the system's size, cost, and control complexity. The key distillation engines can process a huge amount of data from the WDM channels by using a 1 Mbit block in real time. We demonstrated a three-channel WDM system that simultaneously uses avalanche photodiodes and superconducting single-photon detectors. We achieved 12 h continuous key generation with a secure key rate of 208 kilobits per second through a 45 km field fiber with 14.5 dB loss.
Signal processing and control challenges for smart vehicles
NASA Astrophysics Data System (ADS)
Zhang, Hui; Braun, Simon G.
2017-03-01
Smart phones have changed not only the mobile phone market but also our society during the past few years. Could the next intelligent device be the vehicle? Judging by the visibility, in all media, of the numerous attempts to develop autonomous vehicles, this is certainly one of the logical outcomes. Smart vehicles would be equipped with an advanced operating system such that the vehicles could communicate with one another, optimize their operation to reduce fuel consumption and emissions, enhance safety, or even become self-driving. These combined new features require instrumentation and hardware developments, fast signal processing and fusion, decision making, and online optimization. Meanwhile, the inevitable increase in system complexity will certainly challenge control unit design.
NASA Technical Reports Server (NTRS)
Pavlock, Kate M.
2011-01-01
The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on the Full-Scale Advanced Systems Testbed (FAST) in January of 2011. The research addressed technical challenges involved with reducing risk in an increasingly complex and dynamic national airspace. Specific challenges lie with the development of validated, multidisciplinary, integrated aircraft control design tools and techniques to enable safe flight in the presence of adverse conditions such as structural damage, control surface failures, or aerodynamic upsets. The testbed is an F-18 aircraft serving as a full-scale vehicle to test and validate adaptive flight control research, and it lends significant confidence to the development, maturation, and acceptance process of incorporating adaptive control laws into follow-on research and the operational environment. The experimental systems integrated into FAST were designed to allow flexible yet safe flight-test evaluation and validation of modern adaptive control technologies, and revolve around two major hardware upgrades: the modification of Production Support Flight Control Computers (PSFCC) and the integration of two fourth-generation Airborne Research Test Systems (ARTS). Post-integration verification and validation provided the foundation for safe flight test of Nonlinear Dynamic Inversion and Model Reference Aircraft Control adaptive control law experiments. To ensure success in terms of cost, schedule, and test results, emphasis on risk management was incorporated into the early stages of design and flight-test planning and continued through the execution of each flight-test mission. Specific consideration was given to incorporating safety features within the hardware and software to alleviate user demands, as well as into test processes and training to reduce human-factors impacts on safe and successful flight test. This paper describes the research configuration, experiment functionality, overall risk mitigation, flight-test approach and results, and lessons learned from the adaptive controls research on the Full-Scale Advanced Systems Testbed.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
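The layered lowering described above can be sketched in miniature. The pass names, gate set, and rewrite rules below are our toy assumptions, not the paper's actual toolchain; they only illustrate "compilation as a sequence of passes."

```python
import math

# Toy layered compile flow: each pass consumes and produces a gate list,
# lowering it one abstraction level toward a hardware-native gate set.
# Gates are (name, qubits, params) tuples.

def lower_swap(circ):
    """Rewrite SWAP into three CNOTs (standard identity)."""
    out = []
    for name, qubits, params in circ:
        if name == "SWAP":
            a, b = qubits
            out += [("CNOT", (a, b), ()), ("CNOT", (b, a), ()), ("CNOT", (a, b), ())]
        else:
            out.append((name, qubits, params))
    return out

def lower_h(circ):
    """Rewrite H into rotations: H = Rz(pi/2) Rx(pi/2) Rz(pi/2) up to global phase."""
    out = []
    for name, qubits, params in circ:
        if name == "H":
            out += [("RZ", qubits, (math.pi / 2,)),
                    ("RX", qubits, (math.pi / 2,)),
                    ("RZ", qubits, (math.pi / 2,))]
        else:
            out.append((name, qubits, params))
    return out

def compile_circuit(circ, passes=(lower_swap, lower_h)):
    """Run the pass pipeline; the output contains only CNOT/RZ/RX gates."""
    for p in passes:
        circ = p(circ)
    return circ

prog = [("H", (0,), ()), ("SWAP", (0, 1), ())]
print(compile_circuit(prog))
```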
NASA Astrophysics Data System (ADS)
Borisov, A. P.
2018-01-01
The article is devoted to the development of a software and hardware complex for investigating the grinding process on a pendulum deformer. The hardware part of the complex is built on the Raspberry Pi model 2B platform, to which are connected a contactless angle sensor that provides data on the deflection angle of the pendulum surface, USB cameras that capture grain images before and after grinding, and stepper motors that lift the pendulum surface and adjust the clearance between the pendulum and the supporting surfaces. The software part of the complex is written in C# and receives data from the sensor and USB cameras, processes the received data, and controls the stepper motors in manual and automatic modes. The studies conducted show that the rational mode is a deflection of the pendulum surface by an angle of 40°, with the grain located in the central zone of the support surface, regardless of the orientation of the grain in space. Using the non-contact angle sensor, the energy consumption for grinding and the speed and acceleration of the pendulum surface are calculated, as well as the vitreousness of the grain. With the help of photographs obtained from the USB cameras, the operation of the pendulum deformer is characterized on the basis of the Rebinder formula and the calculation of the grain area before and after grinding.
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
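The hardware argument is easy to see in code: with a Boolean matrix, each measurement is a plain sum of selected samples, with no multipliers. The sketch below uses a random Boolean matrix as a stand-in; the paper's contribution is a data-driven optimized matrix, which is not reproduced here.

```python
import numpy as np

# Compressive measurement y = Phi @ x with two choices of sampling matrix.
rng = np.random.default_rng(0)
n, m = 256, 64                            # signal length, number of measurements
x = rng.standard_normal(n)                # stand-in for a signal block

phi_bool = rng.integers(0, 2, size=(m, n))   # Boolean matrix: adds only in hardware
phi_real = rng.standard_normal((m, n))       # real-valued matrix: m*n full multiplies

y_bool = phi_bool @ x                     # each row just sums a subset of samples
y_real = phi_real @ x                     # each row needs n multiply-accumulates
```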
The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Lytle, John K.
1999-01-01
Advances in computational technology and in physics-based modeling are making large scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze ma or propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focussed on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications
Revathy, M.; Saravanan, R.
2015-01-01
Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures. PMID:26065017
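For context, the check-node update is the kernel such decoders optimize. The sketch below shows the standard min-sum check-node update, a common low-complexity formulation; the paper's Euclidean orthogonal generator is a different refinement and is not reproduced here.

```python
import numpy as np

def check_node_min_sum(llrs):
    """Extrinsic LLRs for one check node given incoming variable-node LLRs.

    Assumes nonzero LLRs. For each edge: sign = product of the other signs,
    magnitude = minimum of the other magnitudes (hence only two mins needed).
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    m1, m2 = np.partition(mags, 1)[:2]         # smallest and second-smallest
    out_mags = np.where(mags == m1, m2, m1)    # exclude each edge's own magnitude
    return total_sign * signs * out_mags

print(check_node_min_sum([2.0, -0.5, 1.5]))    # -> [-0.5, 1.5, -0.5]
```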
Designing Novel Quaternary Quantum Reversible Subtractor Circuits
NASA Astrophysics Data System (ADS)
Haghparast, Majid; Monfared, Asma Taheri
2018-01-01
Reversible logic synthesis is an important area of current research because of its ability to reduce energy dissipation. In recent years, multiple-valued logic has received great attention due to its ability to reduce the width of the reversible circuit, which is a main requirement in quantum technology. Subtractor circuits are among the major components used in quantum computers. In this paper, we discuss the design of a quaternary quantum reversible half subtractor circuit using quaternary 1-qudit gates, 2-qudit Muthukrishnan-Stroud and 3-qudit controlled gates, and a 2-qudit generalized quaternary gate. A design of a quaternary quantum reversible full subtractor circuit based on the quaternary half subtractor is then presented. The designs are evaluated in terms of quantum cost, constant inputs, garbage outputs, and hardware complexity. The proposed quaternary quantum reversible circuits are the first attempt at designing the aforementioned subtractors.
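As a functional reference for what the gate-level design must realize (our behavioral model only, not the reversible circuit itself), a quaternary half subtractor maps each digit pair to a mod-4 difference and a borrow flag:

```python
# Behavioral model of a quaternary (radix-4) half subtractor.
def quaternary_half_subtractor(a, b):
    assert a in range(4) and b in range(4)
    diff = (a - b) % 4            # quaternary difference digit
    borrow = 1 if a < b else 0    # borrow out
    return diff, borrow

# Enumerate the full truth table the reversible circuit must reproduce.
for a in range(4):
    for b in range(4):
        print(a, b, quaternary_half_subtractor(a, b))
```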
Liquid Nitrogen Removal of Critical Aerospace Materials
NASA Technical Reports Server (NTRS)
Noah, Donald E.; Merrick, Jason; Hayes, Paul W.
2005-01-01
Identification of innovative solutions to unique materials problems is an everyday quest for members of the aerospace community. Finding a technique that will minimize costs, maximize throughput, and generate quality results is always the target. United Space Alliance materials engineers recently conducted such a search in their drive to return the Space Shuttle fleet to operational status. The removal of high-performance thermal coatings from solid rocket motors represents a formidable task during post-flight disassembly of reusable expended hardware. The removal of these coatings from unfired motors increases the complexity and safety requirements while reducing the available facilities and approved processes. A temporary solution to this problem was identified, tested, and approved during the Solid Rocket Booster (SRB) return-to-flight activities. Utilization of ultra-high-pressure liquid nitrogen (LN2) to strip the protective coating from assembled Space Shuttle hardware marked the first such use of the technology in the aerospace industry. This process provides a configurable stream of LN2 at pressures of up to 55,000 psig. The performance of a one-time certification for the removal of thermal ablatives from SRB hardware involved extensive testing to ensure adequate material removal without causing undesirable damage to the residual materials or aluminum substrates. Testing to establish appropriate process parameters such as flow, temperature, and pressure of the liquid nitrogen stream provided an initial benchmark for process testing. Equipped with these initial parameters, engineers were then able to establish more detailed test criteria that set the process limits. Quantifying the potential for aluminum hardware damage represented the greatest hurdle in satisfying engineers as to the safety of this process. Extensive testing for aluminum erosion, surface profiling, and substrate weight loss was performed. This successful project clearly demonstrated that the liquid nitrogen jet possesses unique strengths that align remarkably well with the unusual challenges that space hardware and missile manufacturers face on a regular basis. Performance of this task within the confines of a critical manufacturing facility marks a milestone in advanced processing.
Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing
NASA Technical Reports Server (NTRS)
Dobbs, Carl, Sr.
2012-01-01
A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, of implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on the fly, thereby relieving the processors of calculating the sumcheck in software.
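A software sketch of the idea as we read it: summarize each redundant core's memory-write stream as a running checksum, then vote on the small checksums instead of comparing every transaction. CRC32 stands in for the unspecified "sumcheck"; the transaction format is assumed.

```python
import zlib
from collections import Counter

def stream_checksum(writes):
    """writes: iterable of (address, value) memory transactions from one core."""
    crc = 0
    for addr, value in writes:
        crc = zlib.crc32(addr.to_bytes(8, "little") +
                         value.to_bytes(8, "little"), crc)
    return crc

def vote(checksums):
    """Majority vote over N redundant checksums -> (winning checksum, agreeing cores)."""
    winner, count = Counter(checksums).most_common(1)[0]
    return winner, count

cores = [[(0x1000, 42), (0x1008, 7)]] * 3          # three agreeing cores
print(vote([stream_checksum(w) for w in cores]))   # -> (crc, 3)
```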
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
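A minimal sketch of the claimed scheme, under our assumptions: entering a blocking collective drops a node's hardware to a low-power state, and power is restored once every node has begun the operation. The `set_power` hook is a stand-in for a real DVFS or idle-state interface.

```python
import threading

def set_power(node_id, level):
    """Placeholder for a hardware power-management hook."""
    print(f"node {node_id}: power -> {level}")

class PowerAwareBarrier:
    """Single-use barrier that reduces power on entry and restores it on release."""
    def __init__(self, n):
        self.n = n
        self.arrived = 0
        self.cond = threading.Condition()

    def wait(self, node_id):
        with self.cond:
            self.arrived += 1
            set_power(node_id, "low")            # reduce power on entering the block
            if self.arrived == self.n:
                self.cond.notify_all()           # last node releases everyone
            else:
                self.cond.wait_for(lambda: self.arrived == self.n)
            set_power(node_id, "nominal")        # restore once all nodes have begun

barrier = PowerAwareBarrier(3)
threads = [threading.Thread(target=barrier.wait, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```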
Automated control and data acquisition for a tunable diode laser heterodyne spectrometer
NASA Technical Reports Server (NTRS)
Shull, T. S.; Rinsland, P. L.
1983-01-01
This paper describes the hardware and software design, development, and implementation of the control and data electronics of a laser heterodyne spectrometer instrument being built at NASA Langley Research Center for a technology demonstration. Functional partitioning, applied at all levels of hardware and software, has been found to provide expedient design, development, and testing of the instrument. The instrument is composed of distributed microprocessor-based units. A master/slave protocol is presented which can be simulated by a terminal for unit checkout. All but one of the units are implemented using a set of core boards, plus unique boards where necessary. This design has led to reduced hardware development, reduced parts inventory, and replication of software modules, while providing the flexibility needed for a development instrument. The development tools and documentation guidelines are discussed.
NASA Astrophysics Data System (ADS)
Newman, Gregory A.
2014-01-01
Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, graphics processing units to multicore CPUs with a fast interconnect, along with effective parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.
Implementing Capsule Representation in a Total Hip Dislocation Finite Element Model
Stewart, Kristofer J; Pedersen, Douglas R; Callaghan, John J; Brown, Thomas D
2004-01-01
Previously validated hardware-only finite element models of THA dislocation have clarified how various component design and surgical placement variables contribute to resisting the propensity for implant dislocation. This body of work has now been enhanced with the incorporation of experimentally based capsule representation, and with anatomic bone structures. The current form of this finite element model provides for large deformation multi-body contact (including capsule wrap-around on bone and/or implant), large displacement interfacial sliding, and large deformation (hyperelastic) capsule representation. In addition, the modular nature of this model now allows for rapid incorporation of current or future total hip implant designs, accepts complex multi-axial physiologic motion inputs, and outputs case-specific component/bone/soft-tissue impingement events. This soft-tissue-augmented finite element model is being used to investigate the performance of various implant designs for a range of clinically-representative soft tissue integrities and surgical techniques. Preliminary results show that capsule enhancement makes a substantial difference in stability, compared to an otherwise identical hardware-only model. This model is intended to help put implant design and surgical technique decisions on a firmer scientific basis, in terms of reducing the likelihood of dislocation. PMID:15296198
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model constrains its hardware implementation in a parallel structure, a cost-efficient model is proposed that reduces the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, in neural control engineering, and in brain-machine interface studies.
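At the heart of any unscented Kalman filter is the unscented transform. The sketch below is our minimal reference version; the dynamics function `f` is a placeholder, not the conductance-based TC neuron model, and the weighting convention is one common choice.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f via 2n+1 sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)        # scaled matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # sigma points, one per row
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))  # symmetric weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])              # push points through f
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example with a placeholder nonlinearity:
m, P = np.array([0.0, 1.0]), np.diag([0.1, 0.2])
f = lambda s: np.array([np.sin(s[0]), s[0] * s[1]])
print(unscented_transform(m, P, f))
```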
Mechanics of Granular Materials (MGM) Flight Hardware in Bench Test
NASA Technical Reports Server (NTRS)
2000-01-01
Engineering bench system hardware for the Mechanics of Granular Materials (MGM) experiment is tested on a lab bench at the University of Colorado in Boulder. This is done in a horizontal arrangement to reduce pressure differences so the tests more closely resemble behavior in the microgravity of space. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes, or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grained materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder)
Integrated Tools for Future Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Thomas, Randy; Saus, Joseph
2013-01-01
Turbine engines are highly complex mechanical systems that are becoming increasingly dependent on control technologies to achieve system performance and safety metrics. However, the contribution of controls to these measurable system objectives is difficult to quantify due to a lack of tools capable of informing the decision makers. This shortcoming hinders technology insertion in the engine design process. NASA Glenn Research Center is developing a Hardware-in-the-Loop (HIL) platform and analysis tool set that will serve as a focal point for new control technologies, especially those related to the hardware development and integration of distributed engine control. The HIL platform is intended to enable rapid and detailed evaluation of new engine control applications, from conceptual design through hardware development, in order to quantify their impact on engine systems. This paper discusses the complex interactions of the control system, within the context of the larger engine system, and how new control technologies are changing that paradigm. The conceptual design of the new HIL platform is then described as a primary tool to address those interactions and how it will help feed the insertion of new technologies into future engine systems.
NASA Technical Reports Server (NTRS)
Hughes, Mark S.; Hebert, Phillip W.; Davis, Dawn M.; Jensen, Scott L.; Abell, Frederick K., Jr.
2004-01-01
The John C. Stennis Space Center (SSC) provides test operations services to a variety of customers, including NASA, DoD, and commercial enterprises for the development of current and next-generation rocket propulsion systems. Many of these testing services are provided in the E-Complex test facilities composed of three active test stands (E1, E2, & E3) and 7 total test positions. Each test position is outfitted with unique sets of data acquisition and controls hardware and software that record both facility and test article data and enable safe operation of the test facility. This paper addresses each system in more detail including efforts to upgrade hardware and software.
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
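The selected detector is compact enough to show in full. The sketch below implements the nonlinear energy operator, psi[n] = x[n]^2 - x[n-1]*x[n+1]; the smoothing-free form and the scaled-mean threshold rule are common choices, not necessarily the paper's exact configuration.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, k=8.0):
    """Flag samples whose NEO output exceeds k times its mean (assumed rule)."""
    psi = neo(x)
    thresh = k * np.mean(psi)
    return np.where(psi > thresh)[0]
```

The operator responds to simultaneously high amplitude and high frequency, which is why it separates spikes from low-frequency background at very low cost: two multiplies and a subtract per sample.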
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arion is a library and tool set that enables researchers to holistically define test system models. Defining a complex system for testing an algorithm or control requires expertise across multiple domains. Simulating a complex system requires the integration of multiple simulators and test hardware, each with their own specification languages and concepts, which demands an extensive set of knowledge and capabilities. Arion was developed to alleviate this challenge. Arion is a set of Java libraries that abstracts the concepts from supported simulators into a cohesive model language, allowing users to build models to their needed level of fidelity and expertise. Arion is also a software tool that translates the user's model back into the specification languages of the simulators and test hardware needed for execution.
Automated Analysis of ARM Binaries using the Low-Level Virtual Machine Compiler Framework
2011-03-01
president to insist on keeping his smartphone [CNN09]. A self-proclaimed BlackBerry addict, President Obama fought hard to keep his mobile device after his... smartphone but renders a device non-functional on installation [FSe09][Hof07]. Complex interactions between hardware and software components both within... smartphone (which is a big assumption), the phone may still be vulnerable if the hardware or software does not correctly implement the design
Creating an open environment software infrastructure
NASA Technical Reports Server (NTRS)
Jipping, Michael J.
1992-01-01
As the development of complex computer hardware accelerates at increasing rates, the ability of software to keep pace is essential. The development of software design tools, however, is falling behind the development of hardware for several reasons, the most prominent of which is the lack of a software infrastructure to provide an integrated environment for all parts of a software system. The research was undertaken to provide a basis for answering this problem by investigating the requirements of open environments.
MSFC Skylab student project report. [selected space experiments
NASA Technical Reports Server (NTRS)
1974-01-01
In the Skylab Student Project, some 4000 students submitted experiments, from which twenty-five national winners were selected. Of these, eleven required special flight hardware, eight were allowed to obtain data using hardware available for professional investigations, and the remaining six were affiliated with researchers in alternate fields, since their proposals could not be accommodated due to complexity or similar incompatibility. The background of the project is elaborated, and experiment performance results and evaluations are touched upon.
2017-02-19
software systems: the students design and build robotics software towards real-world applications, without being distracted by hardware issues; (ii) it...high school students require the students to focus on building and integrating the hardware that make up the robot, at the expense of designing and...robotics programs focus on the mechanics; as a result, they do not have room for students to design and implement relatively complex software systems, as
Digital adaptive optics line-scanning confocal imaging system.
Liu, Changgeng; Kim, Myung K
2015-01-01
A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with hardware such as the Shack–Hartmann wavefront sensor and deformable mirror, and with the closed-loop feedback adopted in conventional adaptive optics confocal imaging systems, thus reducing optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.
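The two numerical steps named above, aberration compensation and the numerical slit, reduce to a few array operations. The sketch below is our reading of those steps; the field arrays are stand-ins for reconstructed holographic line scans, and the slit geometry is an assumed convention.

```python
import numpy as np

def compensate(field_line, guide_star_field):
    """Remove the system aberration: multiply by the conjugate of the phase
    sensed from the guide-star hologram."""
    aberration_phase = np.angle(guide_star_field)
    return field_line * np.exp(-1j * aberration_phase)

def apply_numerical_slit(intensity_line, center, half_width):
    """Numerical confocal slit: keep only pixels near the focus line."""
    out = np.zeros_like(intensity_line)
    lo, hi = center - half_width, center + half_width + 1
    out[lo:hi] = intensity_line[lo:hi]
    return out
```

Widening `half_width` admits more light (and speckle) while narrowing it tightens optical sectioning, which is the contrast/noise trade-off the abstract describes.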
Sight Application Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G.
2014-09-17
The scale and complexity of scientific applications makes it very difficult to optimize, debug and extend them to support new capabilities. We have developed a tool that supports developers’ efforts to understand the logical flow of their applications and interactions between application components and hardware in a way that scales with application complexity and parallelism.
Optical multicast system for data center networks.
Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren
2015-08-24
We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast that is the electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overheads. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model, and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
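For reference, the RANSAC loop the hardware implements has a simple generic skeleton; the sketch below is our illustration of the algorithm class (the paper's four-submodel fitting is not reproduced, and `fit_model`/`residual` are user-supplied callbacks).

```python
import numpy as np

def ransac(matches, fit_model, residual, n_sample, thresh, iters=500, rng=None):
    """Generic RANSAC: repeatedly fit a model to a random minimal sample and
    keep the model with the largest inlier set.

    matches   -- array of candidate correspondences (one row per match)
    fit_model -- callback: minimal sample -> model (or None on degeneracy)
    residual  -- callback: (model, matches) -> per-match error array
    """
    rng = rng or np.random.default_rng()
    best_model, best_inliers = None, np.zeros(len(matches), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(matches), size=n_sample, replace=False)
        model = fit_model(matches[sample])
        if model is None:
            continue                                  # degenerate sample
        inliers = residual(model, matches) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```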
Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy
Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern
2011-01-01
This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize throughput of the computation. Moreover, the number of hardware multipliers and dividers are minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming low hardware resources for designing an embedded DHM system. PMID:22163688
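The minimum-squared-error unwrapper the paper adopts is, in essence, a Poisson solve over the Laplacian of the wrapped gradients. A software sketch follows, in the Ghiglia-Romero-style DCT formulation; the boundary handling and normalization are common conventions and may differ in detail from the hardware pipeline.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_least_squares(psi):
    """Least-squares phase unwrapping of a 2-D wrapped phase map psi."""
    psi = np.asarray(psi, dtype=float)
    M, N = psi.shape
    # Wrapped forward differences (zero beyond the last row/column: Neumann).
    dx = np.zeros_like(psi); dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # Discrete Laplacian of the wrapped gradient field.
    rho = dx + dy
    rho[:, 1:] -= dx[:, :-1]
    rho[1:, :] -= dy[:-1, :]
    # Solve the Poisson equation spectrally with a DCT (Neumann boundaries).
    rho_hat = dctn(rho, norm="ortho")
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                      # avoid 0/0; DC term is arbitrary
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0                    # fix the free constant offset
    return idctn(phi_hat, norm="ortho")

# Smooth ramp with |gradient| < pi is recovered up to a constant:
true = np.outer(np.linspace(0, 12, 64), np.ones(64))
rec = unwrap_least_squares(wrap(true))
assert np.allclose(rec - rec.mean(), true - true.mean(), atol=1e-6)
```

The entire solve is two DCTs plus a pointwise divide, which is what makes a pipelined hardware realization attractive.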
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
Rapid-X - An FPGA Development Toolset Using a Custom Simulink Library for MTCA.4 Modules
NASA Astrophysics Data System (ADS)
Prędki, Paweł; Heuer, Michael; Butkowski, Łukasz; Przygoda, Konrad; Schlarb, Holger; Napieralski, Andrzej
2015-06-01
The recent introduction of advanced hardware architectures such as the Micro Telecommunications Computing Architecture (MTCA) caused a change in the approach to implementing control schemes in many fields. Development has been moving away from traditional programming languages (C/C++) to hardware description languages (VHDL, Verilog), which are used in FPGA development. With MATLAB/Simulink it is possible to describe complex systems with block diagrams and simulate their behavior. Those diagrams are then used by HDL experts to implement exactly the required functionality in hardware. Both the porting of existing applications and the adaptation of new ones require substantial development time from them. To address this, Xilinx System Generator, a toolbox for MATLAB/Simulink, allows rapid prototyping of those block diagrams using hardware modelling. It is still up to the firmware developer to merge this structure with the hardware-dependent HDL project, which prevents the application engineer from quickly verifying the proposed schemes in real hardware. The framework described in this article overcomes these challenges, offering a hardware-independent library of components that can be used in Simulink/System Generator models. The components are subsequently translated into VHDL entities and integrated with a pre-prepared VHDL project template. Furthermore, the entire implementation process runs in the background, giving the user an almost one-click path from control-scheme modelling and simulation to bit-file generation. This approach allows application engineers to quickly develop new schemes and test them in a real hardware environment. The applications may range from simple data logging or signal generation to very advanced controllers. Taking advantage of the Simulink simulation capabilities and user-friendly hardware implementation routines, the framework significantly decreases the development time of FPGA-based applications.
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2015-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller, and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. It was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Klimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of both power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) the underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use; 2) the algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range; and 3) hardware acceleration provides a throughput improvement of 10 to 100 times versus the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex-4 FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
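The entropy-coding stage can be illustrated in a few lines. This is a generic Golomb-Rice sketch in the spirit of the FL algorithm's adaptive Golomb coding, not the flight implementation; the parameter-selection rule shown is a common heuristic and is assumed, not taken from the source.

```python
def rice_encode(residual, k):
    """Golomb-Rice code for one signed prediction residual; returns bits as a str."""
    u = 2 * residual if residual >= 0 else -2 * residual - 1   # fold sign into LSB
    bits = '1' * (u >> k) + '0'            # quotient in unary, '0' terminator
    if k:
        bits += format(u & ((1 << k) - 1), f'0{k}b')           # k-bit remainder
    return bits

def pick_k(mean_u):
    """Common heuristic: smallest k with 2**k >= running mean of mapped residuals."""
    k = 0
    while (1 << k) < max(1, int(mean_u)):
        k += 1
    return k

print(rice_encode(-5, 2))   # u = 9 -> quotient '110', remainder '01' -> '11001'
```

Because both the unary and binary parts are simple shifts and comparisons, one residual can be coded per clock cycle in hardware, which is what enables the one-sample-per-cycle throughput described above.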
Modular System to Enable Extravehicular Activity
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam J.
2012-01-01
The ability to perform extravehicular activity (EVA), both human and robotic, has been identified as a key component of space missions, supporting such operations as assembly and maintenance of space systems (e.g., construction and maintenance of the International Space Station) and unscheduled activities to repair an element of the transportation and habitation systems that can only be accessed externally via unpressurized areas. In order to make human transportation beyond low Earth orbit (LEO) practical, efficiencies must be incorporated into the integrated transportation systems to reduce system mass and operational complexity. Affordability is also a key aspect to be considered in space system development; this could be achieved through commonality, modularity, and component reuse. Another key aspect identified for the EVA system was the ability to produce flight-worthy hardware quickly to support early missions and near-Earth technology demonstrations. This paper details a conceptual architecture for a modular EVA system that would meet these stated needs for EVA capability that is affordable and could be produced relatively quickly. Operational concepts were developed to elaborate on the defined needs and to define the key capabilities, operational and design constraints, and general timelines. The operational concept led to a high-level design concept for a module that interfaces with various space transportation elements and contains the hardware and systems required to support human and telerobotic EVA; the module would not be self-propelled and would rely on an interfacing element for consumable resources. The conceptual architecture was then compared to the EVA systems used on the Space Shuttle Orbiter and the International Space Station to develop high-level design concepts that incorporate opportunities for cost savings through hardware reuse and quick production through the use of existing technologies and hardware designs. An upgrade option was included to make use of developing suit-port technologies.
Skylab materials processing facility experiment developer's report
NASA Technical Reports Server (NTRS)
Parks, P. G.
1975-01-01
The development of the Skylab M512 Materials Processing Facility is traced from the design of a portable, self-contained electron beam welding system for terrestrial applications to the highly complex experiment system ultimately developed for three Skylab missions. The M512 experiment facility was designed to support six in-space experiments intended to explore the advantages of manufacturing materials in the near-zero-gravity environment of Earth orbit. Detailed descriptions of the M512 facility and related experiment hardware are provided, with discussions of hardware verification and man-machine interfaces included. An analysis of the operation of the facility and experiments during the three Skylab missions is presented, including discussions of the hardware performance, anomalies, and data returned to Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, Kazutomo; Llopis, Pablo; Zhang, Kaicheng
As CMOS scaling nears its end, parameter variations (process, temperature, and voltage) are becoming a major concern. To overcome parameter variations and provide stability, modern processors are becoming dynamic, opportunistically adjusting voltage and frequency based on thermal and energy constraints, which negatively impacts traditional bulk-synchronous-parallelism-minded hardware and software designs. As node-level architecture grows in complexity, implementing variation control mechanisms with hardware alone can be a challenging task. In this paper we investigate a software strategy to manage hardware-induced variations, leveraging low-level monitoring and controlling mechanisms.
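A minimal user-space sketch of the kind of low-level monitoring such a strategy builds on is shown below. It assumes the cross-platform psutil package; the paper's own tooling is not specified here, and on some platforms per-core frequency readings may not be available.

```python
import time
import statistics
import psutil   # assumed third-party dependency; not from the paper

# Sample per-core clock frequencies over a few seconds to expose dynamic,
# hardware-induced variation of the kind the software layer must manage.
samples = []
for _ in range(10):
    freqs = psutil.cpu_freq(percpu=True)      # one entry per logical core
    samples.append([f.current for f in freqs])
    time.sleep(0.5)

per_core = [statistics.mean(c) for c in zip(*samples)]
print("mean frequency spread across cores (MHz):", max(per_core) - min(per_core))
```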
A New Method for Control of the Efficiency of Gear Reducers
NASA Astrophysics Data System (ADS)
E Kozlov, K.; Egorov, A. V.; Belogusev, V. N.
2017-04-01
This article proposes a new method to control the energy efficiency of gear reducers. The method allows evaluating the friction losses in a drive motor, its bearing assemblies, and the toothing, both at the stage of control of the finished product and in the course of its operation, maintenance, and repair. The proposed method, unlike currently used methods for controlling the efficiency of gear reducers, allows determining the friction losses without the use of strain measurement, which requires calibration of tensometric sensors and expensive equipment. The method is based on the invariability of the mechanical characteristics of an induction motor at constant voltage, winding resistance, and mains frequency, regardless of the driven inertial mass. This paper presents experimental results which verify the theoretical predictions. The proposed method can be implemented in the acceptance-test procedure at companies that manufacture gear reducers, thereby assessing their effectiveness and the level of degradation processes that significantly affect the service life of the research object. The method can be implemented with either universal or specialized hardware and software complexes. Either an increment of the moment of inertia or the acceleration time of a gear reducer may serve as the performance criterion.
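The inertia-increment criterion can be made concrete with a little arithmetic. Assuming the motor's accelerating torque is effectively unchanged between runs, two spin-up times to the same speed, one bare and one with a flywheel of known inertia dJ attached, determine both the net torque and the intrinsic inertia. Symbols and numbers below are illustrative only, not the authors' test procedure.

```python
# Two no-load spin-ups to the same speed change delta_omega: bare (t1) and
# with a known added inertia dJ (t2). With constant net torque T:
#   T * t1 = J0 * delta_omega  and  T * t2 = (J0 + dJ) * delta_omega
# => T = dJ * delta_omega / (t2 - t1),  J0 = dJ * t1 / (t2 - t1)
def net_torque_and_inertia(delta_omega, t1, t2, dJ):
    T = dJ * delta_omega / (t2 - t1)     # net accelerating torque
    J0 = dJ * t1 / (t2 - t1)             # intrinsic inertia of the drive
    return T, J0

# illustrative numbers only
T, J0 = net_torque_and_inertia(delta_omega=150.0, t1=0.80, t2=1.00, dJ=0.01)
print(f"net torque ~ {T:.2f} N*m, intrinsic inertia ~ {J0:.4f} kg*m^2")
```

Comparing the net torque measured with and without the gear train attached then isolates the friction losses, with no strain gauges involved.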
Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.
2016-01-01
A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms, but its hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement in hardware, takes too many iterations to learn. The proposed learning algorithm is a hybrid of the two. The main learning is performed first with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of weight transplantation, a significant amount of output error can occur due to the characteristic differences between the software and the hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, which is implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566
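Below is a toy NumPy sketch of the complementary RWC stage (a greedy-accept variant): weights assumed to be already trained by BP arrive with a "transplantation error," and random perturbations are kept only when they reduce the error. The network, the error model, and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w, X, y):
    return np.mean((np.tanh(X @ w) - y) ** 2)

# Stand-in for BP-trained weights perturbed by software/hardware mismatch.
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y = np.tanh(X @ w_true)
w = w_true + 0.3 * rng.normal(size=8)    # "transplanted" weights with error

# Random weight change: perturb all weights by +/- delta at random,
# keep the change only if the output error decreases.
delta = 0.02
for _ in range(2000):
    step = delta * rng.choice([-1.0, 1.0], size=w.shape)
    if loss(w + step, X, y) < loss(w, X, y):
        w += step
print("residual error after RWC touch-up:", loss(w, X, y))
```

The appeal for hardware is that RWC needs only a sign generator, an adder, and an error comparison, with no gradient machinery at all.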
Preparing the Next IT Leaders: Financial Management
ERIC Educational Resources Information Center
Goldstein, Karen L.
2007-01-01
The next generation of IT leaders must learn to navigate the complexities of higher education financial planning and negotiation. Technology infrastructure, hardware, software, and services are very expensive to provide and will continue to be so for some time to come. Most CIOs have learned about complex IT finances the hard way--they were handed…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domanski, P.A.
1995-03-01
The report presents a theoretical analysis of three vapor compression cycles which are derived from the Rankine cycle by incorporating a liquid-line/suction-line heat exchanger, an economizer, or an ejector. These additions to the basic cycle reduce throttling losses using different principles, and they require mechanical hardware of differing complexity and cost. The theoretical merits of the three modified cycles were evaluated in relation to the reversed Carnot and Rankine cycles. Thirty-eight fluids were included in the study using the Carnahan-Starling-DeSantis equation of state. In general, the benefit of these additions increases with the amount of throttling loss realized by the refrigerant in the Rankine cycle.
Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.
Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter
2015-08-24
We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of calculating the fields. A novel technique for occlusion culling with little additional computational cost is also introduced. Additionally, the method assigns a Gaussian profile to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH, a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
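The two-stage structure can be sketched in a few lines of NumPy: accumulate spherical waves from the points onto a nearby wavefront recording plane (WRP), then propagate once to the hologram plane with the angular spectrum method. This omits the paper's look-up tables, occlusion culling, and Gaussian point profiles; all parameters are illustrative.

```python
import numpy as np

wl = 633e-9; k = 2 * np.pi / wl            # wavelength, wavenumber
N = 256; dx = 8e-6                          # grid size and pixel pitch
xs = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(xs, xs)

# (x, y, distance-to-WRP) for a couple of object points
points = [(0.0, 0.0, 2e-3), (200e-6, -150e-6, 2.5e-3)]

# Stage 1: accumulate spherical-wave contributions on the nearby WRP.
wrp = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    wrp += np.exp(1j * k * r) / r

# Stage 2: one angular-spectrum propagation from the WRP to the hologram plane.
d = 0.05
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, (1 / wl) ** 2 - FX ** 2 - FY ** 2))
holo = np.fft.ifft2(np.fft.fft2(wrp) * np.exp(1j * kz * d))
```

Because the WRP sits close to the points, each point touches only a small neighborhood of WRP samples, which is precisely what the look-up tables exploit to cut the cost.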
NASA Astrophysics Data System (ADS)
Shen, Tzu-Chiang; Ovando, Nicolás; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén; Ibsen, Jorge; Saldias, Christian
2012-09-01
ALMA is the first astronomical project to be constructed and operated under an industrial approach, due to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phases, several production lines were established to work in parallel. This decision required modifying the original system architecture, in which all elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allowed us to provide a solution which can replicate the STE infrastructure without changing its network address definitions. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimal time and zero human intervention in the cabling. We also pushed the virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully for the last two years. This experience gave us the confidence to propose a solution that divides the main operational array into subarrays using the same concept, which will introduce great flexibility and efficiency into ALMA operations and may eventually simplify the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.
Katz, Matthew H G; Slack, Rebecca; Bruno, Morgan; McMillan, Jermaine; Fleming, Jason B; Lee, Jeffrey E; Bednarski, Brian; Papadopoulos, John; Matin, Surena F
2016-05-01
Outpatient clinical encounters are used to promote recovery after complex surgical procedures for cancer. These care episodes are resource intensive. Virtual clinical encounters (VCEs) can now be conducted using widely available videoconferencing technologies. However, whether these technologies may be used to monitor postoperative recovery is unknown. In this pilot study, we provided care using a comprehensive "TeleDischarge" protocol to 15 patients after pancreatectomy. In addition to routine follow-up, all patients participated in two scheduled and an unlimited number of unscheduled VCEs using mobile hardware and secure videoconferencing software. We evaluated feasibility, patient satisfaction, postoperative adverse events, and health care human resource utilization. The median age of enrolled patients was 63 y (range, 52-83 y) and 93% underwent pancreatoduodenectomy. Twenty-eight scheduled VCEs (93%) were completed successfully, and only one unscheduled VCE was requested. Twelve patients (80%) felt their postoperative care was enhanced by VCEs and 14 (93%) felt that VCEs should be a regular part of postoperative care. Minor interventions in four patients (27%) were performed on the basis of clinical data gathered during a VCE. On a per patient basis, the TeleDischarge pathway was estimated to take 36 min longer and to have a direct labor cost $39 greater than the standard pathway. Secure VCEs can be conducted using widely available hardware and software solutions. Although cancer patients support the introduction of mobile technology into postoperative care, further studies are needed to identify ways in which such technology can be used most effectively and efficiently to reduce barriers to recovery.
EMU processing - A myth dispelled
NASA Technical Reports Server (NTRS)
Peacock, Paul R.; Wilde, Richard C.; Lutz, Glenn C.; Melgares, Michael A.
1991-01-01
The refurbishment-and-checkout "processing" activities for the Space Shuttle Extravehicular Mobility Units (EMUs) currently require about 1050 man-hours, significantly fewer than the roughly 4000 man-hours required when Space Shuttle service began. This great improvement in hardware efficiency is due to the design or modification of test rigs to simplify procedures, together with the standardization of those procedures and an increase in hardware confidence which has allowed the extension of inspection, service, and testing intervals. Recent simplification of the hardware-processing sequence could reduce EMU processing requirements to 600 man-hours in the near future.
Waste Collector System Technology Comparisons for Constellation Applications
NASA Technical Reports Server (NTRS)
Broyan, James Lee, Jr.
2006-01-01
The Waste Collection Systems (WCS) for space vehicles have utilized a variety of hardware for collecting human metabolic wastes. It has typically required multiple missions to resolve crew usability and hardware performance issues that are difficult to duplicate on the ground. New space vehicles should leverage past WCS systems. Past WCS hardware designs are substantially different and unique to each vehicle. However, each WCS can be analyzed and compared as a subset of technologies encompassing fecal collection, urine collection, air systems, and pretreatment systems. Technology components from the WCS of various vehicles can then be combined to reduce hardware mass and volume while maximizing the use of previous technology and proven human-equipment interfaces. Analyses of past US and Russian WCS are compared and extrapolated to Constellation missions.
NASA Technical Reports Server (NTRS)
Meyer, William V.; Tscharnuter, Walther W.; Macgregor, Andrew D.; Dautet, Henri; Deschamps, Pierre; Boucher, Francois; Zhu, Jixiang; Tin, Padetha; Rogers, Richard B.; Ansari, Rafat R.
1994-01-01
Recent advancements in laser light scattering hardware are described. These include intelligent single card correlators; active quench/active reset avalanche photodiodes; laser diodes; and fiber optics which were used by or developed for a NASA advanced technology development program. A space shuttle experiment which will employ aspects of these hardware developments is previewed.
Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation
NASA Astrophysics Data System (ADS)
Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei
2016-11-01
Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing the sky. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
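A minimal NumPy sketch of a temporal high-pass correction with a gating threshold against ghosting is shown below. The paper's adaptive time-domain threshold and spatial low-pass stage are more elaborate; the parameters and update rule here are illustrative only.

```python
import numpy as np

def thp_nuc(frames, alpha=0.01, thresh=8.0):
    """Temporal high-pass NUC sketch: each pixel keeps a slow IIR estimate of
    its fixed pattern; updates are frozen where the frame departs strongly
    from the estimate (likely scene motion, not FPN), limiting ghosting."""
    b = frames[0].astype(np.float64)          # per-pixel low-pass (FPN estimate)
    out = []
    for f in frames:
        f = f.astype(np.float64)
        d = f - b
        quiet = np.abs(d) < thresh            # gate: update only slowly-varying pixels
        b[quiet] += alpha * d[quiet]          # temporal low-pass of the pattern
        out.append(f - b + b.mean())          # high-pass output, mean level restored
    return out
```

Only one running estimate per pixel and a compare/accumulate per frame are needed, which is why the FPGA version stays small and incurs only a few lines of latency.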
Harnessing the Power of Light to See and Treat Breast Cancer
2015-12-01
complexity: the raw data, the most basic form of data, represents the raw numeric readout obtained from the acquisition hardware. The raw data ... has the added advantage of full-field illumination and non-descanned detection, thus lowering the complexity compared to confocal scanning systems ... complexity of images that have varying levels of contrast and non-uniform background heterogeneity. In 2004 Matas described a technique for detecting ...
Benchmarking and Hardware-In-The-Loop Operation of a ...
Engine performance evaluation in support of LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test when combined with future leading-edge technologies: an advanced high-efficiency transmission, reduced mass, and reduced roadload. The goal is to predict future vehicle performance with the Atkinson engine. As part of its technology assessment for the upcoming midterm evaluation of the 2017-2025 LD vehicle GHG emissions regulation, EPA has been benchmarking engines and transmissions to generate inputs for use in its ALPHA model.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other; pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
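In the same spirit, a trivial two-stage microbenchmark in Python (whose threads are backed by pthreads on POSIX systems) checks basic thread create/join first, then a slightly more complex shared-counter case. This is an illustration of the progressive-complexity idea, not part of the NASA benchmark suite.

```python
import threading

def run_all(threads):
    for t in threads: t.start()
    for t in threads: t.join()

def test_create_join(n=8):
    # Stage 1: basic functionality -- every thread runs and completes.
    done = []
    ts = [threading.Thread(target=done.append, args=(i,)) for i in range(n)]
    run_all(ts)
    assert sorted(done) == list(range(n))

def test_shared_counter(n=8, iters=10_000):
    # Stage 2: more complex functionality -- correct locking under contention.
    counter = 0
    lock = threading.Lock()
    def work():
        nonlocal counter
        for _ in range(iters):
            with lock:
                counter += 1
    run_all([threading.Thread(target=work) for _ in range(n)])
    assert counter == n * iters

test_create_join()
test_shared_counter()
print("basic multithreading checks passed")
```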
Reconfigurable Hardware Adapts to Changing Mission Demands
NASA Technical Reports Server (NTRS)
2003-01-01
A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.
Primary and secondary electrical space power based on advanced PEM systems
NASA Technical Reports Server (NTRS)
Vanderborgh, N. E.; Hedstrom, J. C.; Stroh, K. R.; Huff, J. R.
1993-01-01
For new space ventures, power continues to be a pacing function for mission planning and experiment endurance. Although electrochemical power is a well-demonstrated space power technology, current hardware limitations impact future mission viability. In order to document and augment electrochemical technology, a series of experiments for the National Aeronautics and Space Administration Lewis Research Center (NASA LeRC) is underway at the Los Alamos National Laboratory to define operational parameters of contemporary proton exchange membrane (PEM) hardware operating with hydrogen and oxygen reactants. Because of the high efficiency possible for water electrolysis, this hardware is also considered part of a secondary battery design built around stored reactants - the so-called regenerative fuel cell. An overview of stack testing at Los Alamos and of analyses related to regenerative fuel cell systems is provided in this paper. Finally, this paper describes work on innovative concepts that remove complexity from stack hardware with the specific intent of higher system reliability. This new concept offers the potential for unprecedented electrochemical power system energy densities.
Reducing the cognitive workload - Trouble managing power systems
NASA Technical Reports Server (NTRS)
Manner, David B.; Liberman, Eugene M.; Dolce, James L.; Mellor, Pamela A.
1993-01-01
The complexity of space-based systems makes monitoring them and diagnosing their faults taxing for human beings. When a problem arises, immediate attention and quick resolution is mandatory. To aid humans in these endeavors we have developed an automated advisory system. Our advisory expert system, Trouble, incorporates the knowledge of the power system designers for Space Station Freedom. Trouble is designed to be a ground-based advisor for the mission controllers in the Control Center Complex at Johnson Space Center (JSC). It has been developed at NASA Lewis Research Center (LeRC) and tested in conjunction with prototype flight hardware contained in the Power Management and Distribution testbed and the Engineering Support Center, ESC, at LeRC. Our work will culminate with the adoption of these techniques by the mission controllers at JSC. This paper elucidates how we have captured power system failure knowledge, how we have built and tested our expert system, and what we believe its potential uses are.
Minimum Control Requirements for Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Boulange, Richard; Jones, Harry
2002-01-01
Advanced control technologies are not necessary for the safe, reliable, and continuous operation of Advanced Life Support (ALS) systems. ALS systems can be, and are, adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these systems are as reliable as the hardware they control, there are no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous, safe, and reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time step. Threads are an elegant means to distribute the computational workload when running on a symmetric multiprocessor machine. However, programming with threads often requires operating-system-specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns, an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load-balance the simulation models.
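The load-balancing step can be illustrated with a greedy longest-processing-time heuristic: assign the costliest model to the currently lightest thread. This is a generic sketch, not the LaSRS++ mechanism; the model names and costs are invented.

```python
def balance(models, n_threads):
    """models: list of (name, cost) pairs. Returns per-thread name lists and loads."""
    bins = [[] for _ in range(n_threads)]
    load = [0.0] * n_threads
    for name, cost in sorted(models, key=lambda m: -m[1]):
        i = load.index(min(load))        # next-heaviest model onto lightest thread
        bins[i].append(name)
        load[i] += cost
    return bins, load

models = [("aero", 4.0), ("engines", 3.0), ("gear", 1.0),
          ("avionics", 2.5), ("atmosphere", 0.5), ("controls", 2.0)]
bins, load = balance(models, 3)
print(bins)    # e.g. [['aero'], ['engines', 'gear'], ['avionics', 'controls', 'atmosphere']]
print(load)    # per-thread cost totals, kept as even as the heuristic allows
```

Automating this assignment removes exactly the operator burden described above, since per-model costs can be measured at run time and the partition recomputed as the simulation configuration changes.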
Improved Fast Centralized Retransmission Scheme for High-Layer Functional Split in 5G Network
NASA Astrophysics Data System (ADS)
Xu, Sen; Hou, Meng; Fu, Yu; Bian, Honglian; Gao, Cheng
2018-01-01
In order to satisfy the varied critical requirements of 5G and the virtualization of RAN hardware, a two-level architecture for the 5G RAN has been studied in the 3GPP 5G study item stage. The relative performance of the PDCP-RLC split option and the intra-RLC split option, the two main candidates for the high-layer functional split, has been the subject of ongoing debate. This paper first gives an overview of the CU-DU split study work in 3GPP. Comparing implementation complexity, standardization impact, and system performance, our evaluation shows that the PDCP-RLC split option outperforms the intra-RLC split option. To reduce the retransmission delay during intra-CU inter-DU handover, the main drawback of the PDCP-RLC split option, this paper proposes an improved fast centralized retransmission solution with low implementation complexity. Finally, system-level simulations show that the PDCP-RLC split option with the proposed scheme can significantly improve the UE's experience.
Computational adaptive optics for broadband interferometric tomography of tissues and cells
NASA Astrophysics Data System (ADS)
Adie, Steven G.; Mulligan, Jeffrey A.
2016-03-01
Adaptive optics (AO) can shape aberrated optical wavefronts to physically restore the constructive interference needed for high-resolution imaging. With access to the complex optical field, however, many functions of optical hardware can be achieved computationally, including focusing and the compensation of optical aberrations to restore the constructive interference required for diffraction-limited imaging performance. Holography, which employs interferometric detection of the complex optical field, was developed based on this connection between hardware and computational image formation, although this link has only recently been exploited for 3D tomographic imaging in scattering biological tissues. This talk will present the underlying imaging science behind computational image formation with optical coherence tomography (OCT) -- a beam-scanned version of broadband digital holography. Analogous to hardware AO (HAO), we demonstrate computational adaptive optics (CAO) and optimization of the computed pupil correction in 'sensorless mode' (Zernike polynomial corrections with feedback from image metrics) or with the use of 'guide stars' in the sample. We discuss the concept of an 'isotomic volume' as the volumetric extension of the 'isoplanatic patch' introduced in astronomical AO. Recent CAO results and ongoing work are highlighted to point to the potential biomedical impact of computed broadband interferometric tomography. We also discuss the advantages and disadvantages of HAO versus CAO for the effective shaping of optical wavefronts, and highlight opportunities for hybrid approaches that synergistically combine the unique advantages of hardware and computational methods for rapid volumetric tomography with cellular resolution.
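A toy sketch of the 'sensorless' computational correction follows: apply a candidate quadratic (defocus-like) phase in the Fourier domain of a complex field and keep the coefficient that maximizes a sharpness metric. Real CAO uses proper Zernike polynomials over the pupil; the metric, normalization, and names here are illustrative only.

```python
import numpy as np

def sharpness(img):
    """Normalized intensity concentration: peakier images score higher."""
    p = np.abs(img) ** 2
    p /= p.sum()
    return np.sum(p ** 2)

def best_defocus(field, coeffs):
    """Pick the quadratic pupil-phase coefficient maximizing sharpness."""
    N = field.shape[0]
    u = np.fft.fftfreq(N)
    U, V = np.meshgrid(u, u)
    r2 = (U ** 2 + V ** 2) / np.max(U ** 2 + V ** 2)   # normalized radius^2
    def corrected(c):
        # apply the trial phase in the computed pupil (Fourier) domain
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * c * r2))
    return max(coeffs, key=lambda c: sharpness(corrected(c)))

# usage: field is a complex en-face OCT plane; e.g. coeffs = np.linspace(-50, 50, 101)
```

The key point carried by the sketch is that no deformable mirror is involved: the "wavefront shaping" is a pointwise phase multiply on the stored complex data.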
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger-than-anticipated design work on software to be ported.
d'Entremont, Agnes G; Kolind, Shannon H; Mädler, Burkhard; Wilson, David R; MacKay, Alexander L
2014-03-01
To evaluate the effect of metal artifact reduction techniques on dGEMRIC T(1) calculation with surgical hardware present. We examined the effect of stainless-steel and titanium hardware on dGEMRIC T(1) maps. We tested two strategies to reduce metal artifact in dGEMRIC: (1) saturation recovery (SR) instead of inversion recovery (IR) and (2) applying the metal artifact reduction sequence (MARS), in a gadolinium-doped agarose gel phantom and in vivo with titanium hardware. T(1) maps were obtained using custom curve-fitting software and phantom ROIs were defined to compare conditions (metal, MARS, IR, SR). A large area of artifact appeared in phantom IR images with metal when TI ≤ 700 ms. IR maps with metal had additional artifact both in vivo and in the phantom (shifted null points, increased mean T(1) (+151% IR ROI(artifact)), and decreased mean inversion efficiency (f; 0.45 ROI(artifact), versus 2 for perfect inversion)) compared to the SR maps (ROI(artifact): +13% T(1) SR, 0.95 versus 1 for perfect excitation); however, SR produced noisier T(1) maps than IR (phantom SNR: 118 SR, 212 IR). MARS subtly reduced the extent of artifact in the phantom (IR and SR). dGEMRIC measurement in the presence of surgical hardware at 3T is possible with appropriately applied strategies. Measurements may work best in the presence of titanium and are severely limited with stainless steel. For regions near hardware where IR produces large artifacts making dGEMRIC analysis impossible, SR-MARS may allow dGEMRIC measurements. The position and size of the IR artifact are variable and must be assessed for each implant/imaging set-up.
Agile hardware and software systems engineering for critical military space applications
NASA Astrophysics Data System (ADS)
Huang, Philip M.; Knuth, Andrew A.; Krueger, Robert O.; Garrison-Darrin, Margaret A.
2012-06-01
The Multi Mission Bus Demonstrator (MBD) is a successful demonstration of agile program management and systems engineering in a high-risk technology application where utilizing and implementing new, untraditional development strategies was necessary. MBD produced two fully functioning spacecraft for a military/DOD application in a record-breaking time frame and at dramatically reduced costs. This paper discloses the adaptation and application of concepts developed in agile software engineering to hardware product and system development for critical military applications. This challenging spacecraft did not use existing key technology (heritage hardware) and created a large paradigm shift from traditional spacecraft development. The insertion of new technologies and methods in space hardware has long been a problem due to long build times, the desire to use heritage hardware, and the lack of an effective process. The role of momentum in the innovative process can be exploited to tackle ongoing technology disruptions and allows risk interactions to be mitigated in a disciplined manner. Examples of how these concepts were used during the MBD program are delineated. Maintaining project momentum was essential to assessing the constant nonrecurring technological challenges, which needed to be retired rapidly from the engineering risk list. Development never slowed, thanks to tactical assessment of the hardware with the adoption of the SCRUM technique; we adapted this concept as a means of mitigating technical risk while allowing design freeze later in the program's development cycle. By using agile systems engineering and management techniques that enabled decisive action, product development momentum was effectively used to produce two novel space vehicles in a fraction of the usual time and at dramatically reduced cost.
Real-time range generation for ladar hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Olson, Eric M.; Coker, Charles F.
1996-05-01
Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
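For a standard perspective projection with a depth buffer normalized to [0, 1] (an OpenGL-style convention assumed here; the SGI hardware's exact fixed-point encoding may differ), the conversion is the usual depth linearization, with zn/zf the near and far clip distances used at render time:

```python
import numpy as np

def depth_to_range(zbuf, zn, zf):
    """Convert normalized [0, 1] perspective depth values to eye-space range."""
    zbuf = np.asarray(zbuf, dtype=np.float64)
    return (zn * zf) / (zf - zbuf * (zf - zn))

# depth 0 maps to the near plane, depth 1 to the far plane
print(depth_to_range([0.0, 0.5, 1.0], zn=1.0, zf=1000.0))
# -> [1.0, ~1.998, 1000.0]; note the hyperbolic spacing of depth values
```

The hyperbolic spacing is why most depth precision sits near the viewer, a property any LADAR stimulation pipeline built on a z-buffer has to account for.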
NASA Astrophysics Data System (ADS)
Yussup, N.; Ibrahim, M. M.; Lombigit, L.; Rahman, N. A. A.; Zin, M. R. M.
2014-02-01
Typically a system consists of hardware as the controller and software installed on a personal computer (PC). In effective nuclear detection, the hardware comprises the detection setup and the electronics used, while the software consists of analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC by exploiting the USB link of the FPGA board. The USB link is based on an FTDI chip which allows direct access of input and output to the Joint Test Action Group (JTAG) signals from a USB host and a complex programmable logic device (CPLD), with a 24 MHz clock input to the USB link. The implementation and results of using the USB link of the FPGA board as the data interface are discussed.
Compiler-assisted multiple instruction rollback recovery using a read buffer
NASA Technical Reports Server (NTRS)
Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.
1995-01-01
Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.
Gas-analytic measurement complexes of Baikal atmospheric-limnological observatory
NASA Astrophysics Data System (ADS)
Pestunov, D. A.; Shamrin, A. M.; Shmargunov, V. P.; Panchenko, M. V.
2015-11-01
The paper presents the present-day structure of the stationary and mobile hardware-software gas-analytical complexes of the Baikal Atmospheric-Limnological Observatory (BALO) of the Siberian Branch of the Russian Academy of Sciences (SB RAS), designed to study the exchange of carbon-containing gases in the atmosphere-water system; the complexes are constantly updated to include new measuring and auxiliary instrumentation.
Scott, Graham W S
2012-01-01
The year 1962 was pre-medicare. The public was concerned about access and individual affordability of care. Funding involved public or private responsibility and the role of government. Physicians, the most influential providers, were concerned that government funding would result in the loss of their independence and their becoming state employees. The retrospective analysis "Looking Back 50 Years in Hospital Administration" by Graham and Sibbald is arresting as it underlines just how much progress we have made in what could be termed "hardware" in support of healthcare policy and hospital administration. From this perspective, the progress has been eye opening, given the advent of universal healthcare, the advancement in our physical facilities, the development of high-quality diagnostic equipment, the explosion of new research centres and new and complex clinical procedures. The development of this hardware has given our providers better weapons and contributed to a remarkable improvement in life expectancy. But progress in health administration and policy management involves more than hardware. If the hardware constitutes the tools, then the "software" of the healthcare system involves the human resources and the culture change that must be positioned to make maximum use of the hardware. In 2062, looking back at the 2012 experience, the legacy test may be whether we dealt with health human resources and culture change at a rate that matched our progress in hardware.
James, Conrad D.; Aimone, James B.; Miner, Nadine E.; ...
2017-01-04
Biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. Additionally, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
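The core idea, rounding the significand of every result to a chosen number of bits while keeping the hardware's exponent range, can be emulated in a few lines. This NumPy sketch mimics the effect for illustration; it is not the rpe Fortran API.

```python
import numpy as np

def reduce_precision(x, sbits):
    """Round x to `sbits` significand bits (round-to-nearest), emulating a
    reduced-precision float while retaining float64 exponent range."""
    x = np.asarray(x, dtype=np.float64)
    m, e = np.frexp(x)                    # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** sbits
    return np.ldexp(np.round(m * scale) / scale, e)

print(reduce_precision(np.pi, 10))        # pi kept to ~10 significand bits
print(reduce_precision(np.pi, 52) - np.pi)  # full precision: error ~ 0
```

Wrapping intermediate results of a model in such a rounding step is exactly how one can watch a simulation degrade (or not) as precision is dialed down, before committing to any hardware change.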
Operating System Support for Shared Hardware Data Structures
2013-01-31
Carbon [73] uses hardware queues to improve fine-grained multitasking for Recognition, Mining, and Synthesis. Compared to software approaches ... web transaction processing, data mining, and multimedia. Early work in database processors [114, 96, 79, 111] reduces the costs of relational database ... assignment can be solved statically or dynamically. Static assignment determines offline which data structures are assigned to use HWDS resources and at
Learning in Neural Networks: VLSI Implementation Strategies
NASA Technical Reports Server (NTRS)
Duong, Tuan Anh
1995-01-01
Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.
Hardware Neural Network for a Visual Inspection System
NASA Astrophysics Data System (ADS)
Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji
The visual inspection of defects in products is heavily dependent on human experience and instinct, which makes it difficult to reduce production costs and to shorten the inspection time and hence the total process time. Consequently, people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network, which is expected to provide high-speed operation for the automatic inspection of products. Since neural networks can learn, this is a suitable method for the self-adjustment of classification criteria. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise-linear function instead of a conventional activation function in order to save hardware resources. Consequently, our proposed hardware neural network achieved 6 GCPS and 2 GCUPS, which in our test sample proved to be sufficiently fast.
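The hardware-friendly activation idea can be illustrated by the simplest such approximation: a three-segment piecewise-linear stand-in for the logistic function, matching its value (0.5) and slope (0.25) at the origin so that evaluation needs only shifts and a clamp in fixed-point logic. The specific function the authors use may differ.

```python
import numpy as np

def pwl_sigmoid(x):
    """Three-segment piecewise-linear approximation of the logistic function:
    linear with slope 1/4 near zero, saturating at 0 and 1."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

x = np.linspace(-8, 8, 5)
print(pwl_sigmoid(x))    # [0.  0.  0.5 1.  1. ]
```

Since a multiply by 1/4 is a two-bit right shift, the whole activation costs a shift, an add, and two comparisons per neuron, instead of the exponentials a true sigmoid would require.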
NASA Technical Reports Server (NTRS)
Portree, David S. F.
1995-01-01
The heritage of the major Mir complex hardware elements is described. These elements include Soyuz-TM and Progress-M; the Kvant, Kvant 2, and Kristall modules; and the Mir base block. Configuration changes and major mission events of the Salyut 6, Salyut 7, and Mir multiport space stations are described in detail for the period 1977-1994. A comparative chronology of U.S. and Soviet/Russian manned spaceflight is also given for that period. The 68 illustrations include comparative scale drawings of U.S. and Russian spacecraft as well as sequential drawings depicting missions and mission events.
3D graphics hardware accelerator programming methods for real-time visualization systems
NASA Astrophysics Data System (ADS)
Souetov, Andrew E.
2001-02-01
The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of such type of software compels programmers to use different types of CASE systems in design and development process. The subject under discussion is integration of such systems in a development process, their effective use, and the combination of these new methods with the necessity to produce optimal codes. A method of simulation integration and modeling tools in real-time software development cycle is described.
Advanced Booster Liquid Engine Combustion Stability
NASA Technical Reports Server (NTRS)
Tucker, Kevin; Gentz, Steve; Nettles, Mindy
2015-01-01
Combustion instability is a phenomenon in liquid rocket engines caused by complex coupling between the time-varying combustion processes and the fluid dynamics in the combustor. Consequences of the large pressure oscillations associated with combustion instability often cause significant hardware damage and can be catastrophic. The current combustion stability assessment tools are limited by the level of empiricism in many inputs and embedded models. This limited predictive capability creates significant uncertainty in stability assessments. This large uncertainty then increases hardware development costs due to heavy reliance on expensive and time-consuming testing.
2012-02-17
Industrial Area Construction: Located 5 miles south of Launch Complex 39, construction of the main buildings -- Operations and Checkout Building, Headquarters Building, and Central Instrumentation Facility -- began in 1963. In 1992, the Space Station Processing Facility was designed and constructed for the pre-launch processing of International Space Station hardware that was flown on the space shuttle. Along with other facilities, the industrial area provides spacecraft assembly and checkout, crew training, computer and instrumentation equipment, hardware preflight testing and preparations, as well as administrative offices. Poster designed by Kennedy Space Center Graphics Department/Greg Lee. Credit: NASA
Towards fully analog hardware reservoir computing for speech recognition
NASA Astrophysics Data System (ADS)
Smerieri, Anteo; Duport, François; Paquot, Yvan; Haelterman, Marc; Schrauwen, Benjamin; Massar, Serge
2012-09-01
Reservoir computing is a recent, neural-network-inspired unconventional computation technique, where a recurrent nonlinear system is used in conjunction with a linear readout to perform complex calculations, leveraging its inherent internal dynamics. In this paper we show the operation of an optoelectronic reservoir computer in which both the nonlinear recurrent part and the readout layer are implemented in hardware for a speech recognition application. The performance obtained is close to that of state-of-the-art digital reservoirs, while the analog architecture opens the way to ultrafast computation.
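As a software analogue of the reservoir-plus-readout split described above, the sketch below implements a minimal echo state network in Python: a fixed random recurrent nonlinearity generates states, and only a linear readout is trained. The sizes, scaling, and delayed-input task are illustrative choices, not details of the optoelectronic system.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                        # reservoir size, sequence length

# Fixed random recurrent weights, rescaled for stable internal dynamics
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)

u = rng.uniform(-1, 1, T)              # input sequence
y = np.roll(u, 3)                      # toy target: 3-step-delayed input

# Drive the nonlinear reservoir and collect its states
x, X = np.zeros(N), np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Linear readout trained by ridge regression (the only trained part)
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("train MSE:", np.mean((X @ w_out - y) ** 2))
```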
Evaluating geographic information systems technology
Guptill, Stephen C.
1989-01-01
Computerized geographic information systems (GISs) are emerging as the spatial data handling tools of choice for solving complex geographical problems. However, few guidelines exist for assisting potential users in identifying suitable hardware and software. A process to be followed in evaluating the merits of GIS technology is presented. Related standards and guidelines, software functions, hardware components, and benchmarking are discussed. By making users aware of all aspects of adopting GIS technology, they can decide if GIS is an appropriate tool for their application and, if so, which GIS should be used.
Hardware-in-the-loop simulation for undersea vehicle applications
NASA Astrophysics Data System (ADS)
Kelf, Michael A.
2001-08-01
Torpedoes and other Unmanned Undersea Vehicles (UUV) are employed by submarines and surface combatants, as well as aircraft, for undersea warfare. These vehicles are autonomous devices whose guidance systems rival the complexity of the most sophisticated air combat missiles. The tactical environment for undersea warfare is a difficult one in terms of target detection, classification, and pursuit because of the physics of underwater sound. Both hardware-in-the-loop and all-digital simulations have become vital tools in developing and evaluating undersea weapon and vehicle guidance performance in the undersea environment.
Recognizing and optimizing flight opportunities with hardware and life sciences limitations.
Luttges, M W
1992-01-01
The availability of orbital space flight opportunities to conduct life sciences research has been limited. It is possible to use parabolic flight and sounding rocket programs to conduct some kinds of experiments during short episodes (seconds to minutes) of reduced gravity, but there are constraints and limitations to these programs. Orbital flight opportunities are major undertakings, and the potential science achievable is often a function of the flight hardware available. A variety of generic types of flight hardware have been developed and tested, and show great promise for use during NSTS flights. One such payload configuration is described which has already flown.
Space Telecommunications Radio System (STRS) Architecture Standard. Release 1.02.1
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Handler, Louis M.; Hall, C. Steve; Mortensen, Dale J.; Johnson, Sandra K.; Briones, Janette C.; Nappier, Jennifer M.; Downey, Joseph A.; Lux, James P.
2012-01-01
This document contains the NASA architecture standard for software defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer.
International Space Station (ISS) Water Transfer Hardware Logistics
NASA Technical Reports Server (NTRS)
Shkedi, Brienne D.
2006-01-01
Water transferred from the Space Shuttle to the International Space Station (ISS) is generated as a by-product of the Shuttle fuel cells, and is generally preferred over Progress resupply, which must launch water from the ground. However, launch mass and volume are still required for the transfer and storage hardware. Some of these up-mass requirements have been reduced since ISS assembly began due to changes in the storage hardware (the Contingency Water Container, CWC). This paper analyzes the launch mass and volume required to transfer water from the Shuttle and analyzes the up-mass savings due to modifications in the CWC. Suggestions for improving the launch mass and volume are also provided.
Lossless compression algorithm for REBL direct-write e-beam lithography system
NASA Astrophysics Data System (ADS)
Cramer, George; Liu, Hsin-I.; Zakhor, Avideh
2010-03-01
Future lithography systems must produce microchips with smaller feature sizes, while maintaining throughputs comparable to those of today's optical lithography systems. This places stringent constraints on the effective data throughput of any maskless lithography system. In recent years, we have developed a datapath architecture for direct-write lithography systems, and have shown that compression plays a key role in reducing throughput requirements of such systems. Our approach integrates a low complexity hardware-based decoder with the writers, in order to decompress a compressed data layer in real time on the fly. In doing so, we have developed a spectrum of lossless compression algorithms for integrated circuit layout data to provide a tradeoff between compression efficiency and hardware complexity, the latest of which is Block Golomb Context Copy Coding (Block GC3). In this paper, we present a modified version of Block GC3 called Block RGC3, specifically tailored to the REBL direct-write E-beam lithography system. Two characteristic features of the REBL system are a rotary stage resulting in arbitrarily-rotated layout imagery, and E-beam corrections prior to writing the data, both of which present significant challenges to lossless compression algorithms. Together, these effects reduce the effectiveness of both the copy and predict compression methods within Block GC3. Similar to Block GC3, our newly proposed technique, Block RGC3, divides the image into a grid of two-dimensional "blocks" of pixels, each of which copies from a specified location in a history buffer of recently-decoded pixels. However, in Block RGC3 the number of possible copy locations is significantly increased, so as to allow repetition to be discovered along any angle of orientation, rather than only horizontal or vertical. Also, by copying smaller groups of pixels at a time, repetition in layout patterns is easier to find and take advantage of. As a side effect, this increases the total number of copy locations to transmit; this is combated with an extra region-growing step, which enforces spatial coherence among neighboring copy locations, thereby improving compression efficiency. We characterize the performance of Block RGC3 in terms of compression efficiency and encoding complexity on a number of rotated Metal 1, Poly, and Via layouts at various angles, and show that Block RGC3 provides higher compression efficiency than existing lossless compression algorithms, including JPEG-LS, ZIP, BZIP2, and Block GC3.
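The copy mechanism at the heart of Block GC3/RGC3 can be illustrated with a toy search: for each pixel block, find the offset in a buffer of previously decoded pixels that minimizes mismatches, since mismatched pixels are what must be transmitted as corrections. This Python sketch ignores the rotated-copy geometry, the enlarged candidate set, and the region-growing step that distinguish Block RGC3; all names and sizes are illustrative.

```python
import numpy as np

def best_copy(history: np.ndarray, block: np.ndarray):
    """Exhaustively find the (dy, dx) window of `history` that best
    matches `block`, scored by the count of mismatched pixels."""
    h, w = block.shape
    best = (0, 0, h * w + 1)
    for dy in range(history.shape[0] - h + 1):
        for dx in range(history.shape[1] - w + 1):
            err = int(np.count_nonzero(history[dy:dy + h, dx:dx + w] != block))
            if err < best[2]:
                best = (dy, dx, err)
    return best

rng = np.random.default_rng(1)
hist = rng.integers(0, 2, (32, 32))   # binary layout imagery decoded so far
blk = hist[10:14, 20:24].copy()       # a block that repeats earlier content
# Expect offset (10, 20) with 0 mismatches (or an equally perfect match):
print(best_copy(hist, blk))
```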
ISS-CREAM Thermal and Fluid System Design and Analysis
NASA Technical Reports Server (NTRS)
Thorpe, Rosemary S.
2015-01-01
Thermal and Fluids Analysis Workshop (TFAWS), Silver Spring, MD, NCTS 21070-15. The ISS-CREAM (Cosmic Ray Energetics And Mass for the International Space Station) payload is being developed by an international team and will provide significant cosmic ray characterization over a long time frame. Cold fluid provided by the ISS Exposed Facility (EF) is the primary means of cooling for 5 science instruments and over 7 electronics boxes. Thermal fluid integrated design and analysis was performed for CREAM using a Thermal Desktop model. This presentation will provide some specific design and modeling examples from the fluid cooling system, complex SCD (Silicon Charge Detector) and calorimeter hardware, and integrated payload and ISS level modeling. Features of Thermal Desktop such as CAD simplification, meshing of complex hardware, External References (Xrefs), and FloCAD modeling will be discussed.
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Hardware-software complex of informing passengers of forecasted route transport arrival at stop
NASA Astrophysics Data System (ADS)
Pogrebnoy, V. Yu; Pushkarev, M. I.; Fadeev, A. S.
2017-02-01
The paper presents a hardware-software complex for informing passengers of the forecasted arrival of route transport. A client-server architecture of the forecasting information system is presented and an electronic information board prototype is described. The scheme of information transfer and processing is illustrated and described, from the receipt of navigation telemetry data from a transport vehicle to the display on the electronic board of the forecasted time of public transport arrival at the stop. Methods and algorithms for determining a transport vehicle's current location in the city route network are considered in detail. A description of the proposed forecasting model for transport vehicle arrival time at the stop is given. The results are applied in Tomsk for forecasting and displaying arrival time information at stops.
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
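As a deliberately invented illustration of the parametric models this paragraph describes, a weight-based cost estimating relationship (CER) typically takes a power-law form scaled by heritage and complexity factors. The function name and all coefficients below are placeholders, not values from any NASA model:

```python
def cer_cost(weight_kg: float, heritage: float, complexity: float) -> float:
    """Cost in arbitrary units: a power law in weight, scaled by
    heritage (0 = all new design, 1 = fully reused) and a complexity
    multiplier. Coefficients are illustrative placeholders only."""
    a, b = 150.0, 0.7                  # invented calibration constants
    return a * weight_kg ** b * (1.0 - 0.5 * heritage) * complexity

# Example: a 120 kg avionics box with moderate heritage, above-average
# complexity.
print(cer_cost(weight_kg=120.0, heritage=0.3, complexity=1.2))
```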
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?
Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P
2015-01-01
Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible, but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once reduced simulation time by a factor of two. Storing person attributes separately instead of using person objects also seemed more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
NASA Astrophysics Data System (ADS)
Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi
2009-12-01
Since the system capacity is severely limited by multiple access interference (MAI), reducing the MAI is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the data-transfer link of telecommunication terminals. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then described. Simulation results show that the proposed adaptive PIC can outperform some existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results demonstrate large performance gains over the conventional single-user demodulator.
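The LMS-driven cancellation idea can be sketched compactly: given a regenerated estimate of an interferer's chip sequence (available from an earlier detection stage in a multistage PIC), a single adaptive weight learns the interferer's amplitude and subtracts its contribution. The two-user scenario and all parameters below are illustrative assumptions, not the paper's exact multistage structure.

```python
import numpy as np

rng = np.random.default_rng(0)
T, mu = 4000, 0.01                       # symbols, LMS step size

d = rng.choice([-1.0, 1.0], T)           # desired user's chips
i = rng.choice([-1.0, 1.0], T)           # interferer's regenerated chips
r = d + 0.8 * i + 0.1 * rng.normal(size=T)   # received: signal + MAI + noise

w = 0.0                                  # adaptive cancellation weight
out = np.empty(T)
for t in range(T):
    e = r[t] - w * i[t]                  # subtract current MAI estimate
    w += mu * e * i[t]                   # LMS update drives w toward 0.8
    out[t] = e

print("learned MAI amplitude:", w)       # converges near 0.8
print("BER after cancellation:",
      np.mean(np.sign(out[T // 2:]) != d[T // 2:]))
```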
Relative multiplexing for minimising switching in linear-optical quantum computing
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Cable, Hugo; Mendoza, Gabriel J.; Shadbolt, Pete; Silverstone, Joshua W.; Carolan, Jacques; Thompson, Mark G.; O'Brien, Jeremy L.; Rudolph, Terry
2017-06-01
Many existing schemes for linear-optical quantum computing (LOQC) depend on multiplexing (MUX), which uses dynamic routing to enable near-deterministic gates and sources to be constructed using heralded, probabilistic primitives. MUXing accounts for the overwhelming majority of active switching demands in current LOQC architectures. In this manuscript we introduce relative multiplexing (RMUX), a general-purpose optimisation which can dramatically reduce the active switching requirements for MUX in LOQC, and thereby reduce hardware complexity and energy consumption, as well as relaxing demands on performance for various photonic components. We discuss the application of RMUX to the generation of entangled states from probabilistic single-photon sources, and argue that an order of magnitude improvement in the rate of generation of Bell states can be achieved. In addition, we apply RMUX to the proposal for percolation of a 3D cluster state by Gimeno-Segovia et al (2015 Phys. Rev. Lett. 115 020502), and we find that RMUX allows a 2.4× increase in loss tolerance for this architecture.
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.
2017-01-01
The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. At the conclusion of the development, the software and hardware description language (HDL) code was delivered to JSC for use in their iPAS test bed to get hands-on experience with the STRS standard, and for development of their own STRS waveforms on the now STRS compliant platform. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an Embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.04 operating system. Figure 1 shows the RIACS platform hardware. The result of this development is a very low cost STRS compliant platform that can be used for waveform developments for multiple applications. The purpose of this document is to describe the design of the HDL code for the FPGA portion of the iPAS STRS Radio, particularly the design of the FPGA wrapper and the test waveform.
Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102
Reducing the cognitive workload: Trouble managing power systems
NASA Technical Reports Server (NTRS)
Manner, David B.; Liberman, Eugene M.; Dolce, James L.; Mellor, Pamela A.
1993-01-01
The complexity of space-based systems makes monitoring them and diagnosing their faults taxing for human beings. Mission control operators are well-trained experts, but they cannot afford to have their attention diverted by extraneous information. Under normal operating conditions, merely monitoring the status of the components of a complex system is a big task. When a problem arises, immediate attention and quick resolution are mandatory. To aid humans in these endeavors we have developed an automated advisory system. Our advisory expert system, Trouble, incorporates the knowledge of the power system designers for Space Station Freedom. Trouble is designed to be a ground-based advisor for the mission controllers in the Control Center Complex at Johnson Space Center (JSC). It has been developed at NASA Lewis Research Center (LeRC) and tested in conjunction with prototype flight hardware contained in the Power Management and Distribution testbed and the Engineering Support Center (ESC) at LeRC. Our work will culminate with the adoption of these techniques by the mission controllers at JSC. This paper elucidates how we have captured power system failure knowledge, how we have built and tested our expert system, and what we believe are its potential uses.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using a block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
Understanding the Complex Dimensions of the Digital Divide: Lessons Learned in the Alaskan Arctic
ERIC Educational Resources Information Center
Subramony, Deepak Prem
2007-01-01
An ethnographic case study of Inupiat Eskimo in the Alaskan Arctic has provided insights into the complex nature of the sociological issues surrounding equitable access to technology tools and skills, which are referred to as the digital divide. These people can overcome the digital divide if they get the basic ready access to hardware and…
High-speed digital signal normalization for feature identification
NASA Technical Reports Server (NTRS)
Ortiz, J. A.; Meredith, B. D.
1983-01-01
A design approach for high-speed normalization of digital signals was developed. A reciprocal look-up table technique is employed, where a digital value is mapped to its reciprocal via a high-speed memory. This reciprocal is then multiplied with an input signal to obtain the normalized result. Normalization considerably improves the accuracy of certain feature identification algorithms. By using the concept of pipelining, the multispectral sensor data processing rate is limited only by the speed of the multiplier. The breadboard system was found to operate at an execution rate of five million normalizations per second. This design features high precision, reduced hardware complexity, high flexibility, and expandability, which are very important considerations for spaceborne applications. It also accomplishes a high-speed normalization rate essential for real-time data processing.
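The approach amounts to replacing division with a table read plus a multiply. A minimal fixed-point model in Python follows; the table width and scale factor are illustrative choices, not the breadboard's actual parameters.

```python
import numpy as np

BITS = 10                                  # LUT address width
SCALE = 1 << 15                            # fixed-point scale of reciprocals

# Precompute reciprocals once (the "high-speed memory" in the paper)
lut = np.zeros(1 << BITS, dtype=np.uint32)
for v in range(1, 1 << BITS):
    lut[v] = round(SCALE / v)

def normalize(signal: int, reference: int) -> float:
    """Normalize `signal` by `reference` with one LUT read and one
    multiply: no hardware divider appears in the data path."""
    return signal * lut[reference] / SCALE

print(normalize(300, 600))                 # ~0.5
```

In a pipelined datapath the table read and the multiply occupy separate stages, which is why the paper's throughput is bounded only by the multiplier.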
Multiphysics Simulations: Challenges and Opportunities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, David; McInnes, Lois C.; Woodward, Carol
2013-02-12
We consider multiphysics applications from algorithmic and architectural perspectives, where "algorithmic" includes both mathematical analysis and computational complexity, and "architectural" includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression
2015-01-01
A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. Integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120
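The MSE analysis the paper performs can be mimicked generically: round a scaled version of the exact DCT matrix to integers, renormalize, and measure the approximation error as a function of the integer magnitude (larger integers need wider adders in hardware but track the exact transform more closely). This sketch uses plain rounding, not the paper's specific integer sets.

```python
import numpy as np

N = 8
n, k = np.meshgrid(np.arange(N), np.arange(N))
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)                       # exact 8-point DCT-II matrix

def mse_of_integer_approx(scale: int) -> float:
    """MSE between exact and integer-approximated transforms of
    random input blocks, for one rounding scale."""
    Ci = np.round(C * scale)               # integer approximation
    Q = Ci / np.linalg.norm(Ci, axis=1, keepdims=True)  # renormalize rows
    x = np.random.default_rng(0).normal(size=(N, 1000))
    return float(np.mean((C @ x - Q @ x) ** 2))

for s in (4, 8, 16, 32):                   # shorter vs. longer bit values
    print(s, mse_of_integer_approx(s))     # MSE shrinks as scale grows
```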
Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs that result. They are compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is presented for comparing the requirements of the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)
2002-01-01
Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.
A forecast-based STDP rule suitable for neuromorphic implementation.
Davies, S; Galluppi, F; Rast, A D; Furber, S B
2012-08-01
Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP), which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike), relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements presented here are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that the STDP TTS algorithm allows the neural network to adapt and detect the incoming pattern with improvements both in the reliability of, and the time required for, consistent output. Through the approximations we suggest in this paper, we introduce a learning rule that is easy to implement both in event-driven simulators and in dedicated hardware, reducing computational complexity relative to the standard STDP rule. Such a rule offers a promising solution, complementary to standard STDP evaluation algorithms, for real-time learning using spiking neural networks in time-critical applications.
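The essence of the TTS modification can be sketched on a leaky integrate-and-fire neuron: when a presynaptic spike arrives, forecast the time remaining until threshold from the current membrane potential, and apply the LTP weight change immediately using that forecast rather than waiting for the postsynaptic spike. The deterministic forecast and all constants below are simplifying assumptions; the paper's rule uses a statistical prediction and retains standard STDP for LTD.

```python
import numpy as np

tau, v_th, dt = 20.0, 1.0, 1.0         # membrane time constant, threshold, step
a_plus, tau_plus = 0.01, 20.0          # LTP amplitude and time constant

def forecast_tts(v: float, drive: float) -> float:
    """Predicted time to reach threshold if the mean drive persists."""
    if v >= v_th:
        return 0.0                     # already at threshold
    v_inf = drive * tau                # steady-state potential
    if v_inf <= v_th:
        return np.inf                  # would never reach threshold
    return -tau * np.log(1.0 - (v_th - v) / (v_inf - v))

v, w, drive = 0.0, 0.5, 0.08
rng = np.random.default_rng(0)
for t in range(200):
    v += dt * (-v / tau + drive)       # leaky integration
    if rng.random() < 0.05:            # a presynaptic spike arrives
        v += w
        dt_hat = forecast_tts(v, drive)
        if np.isfinite(dt_hat):        # potentiate now, using the forecast
            w += a_plus * np.exp(-dt_hat / tau_plus)
    if v >= v_th:
        v = 0.0                        # postsynaptic spike and reset
print("final weight:", w)
```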
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups by a factor of 40 to 222 can be achieved, compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
DeFazio, Michael V; Economides, James M; Anghel, Ersilia L; Mathis, Ryan K; Barbour, John R; Attinger, Christopher E
2017-10-01
Loss of domain often complicates attempts at delayed wound closure in regions of high tension. Wound temporization with traction-assisted internal negative pressure wound therapy (NPWT), using bridging retention sutures, can minimize the effects of edema and elastic recoil that contribute to progressive tissue retraction over time. The investigators evaluated the safety and efficacy of this technique for complex wound closure. Between May 2015 and November 2015, 18 consecutive patients underwent staged reconstruction of complex and/or contaminated soft tissue defects utilizing either conventional NPWT or modified NPWT with instillation and continuous dermatotraction via bridging retention sutures. Instillation of antimicrobial solution was reserved for wounds containing infected/exposed hardware or prosthetic devices. Demographic data, wound characteristics, reconstructive outcomes, and complications were reviewed retrospectively. Eighteen wounds were treated with traction-assisted internal NPWT using the conventional (n = 11) or modified instillation (n = 7) technique. Defects involved the lower extremity (n = 14), trunk (n = 3), and proximal upper extremity (n = 1), with positive cultures identified in 12 wounds (67%). Therapy continued for 3 to 8 days (mean, 4.3 days), resulting in an average wound surface area reduction of 78% (149 cm² vs. 33 cm²) at definitive closure. Seventeen wounds (94%) were closed directly, whereas the remaining defect required coverage with a local muscle flap and skin graft. At final follow-up (mean, 12 months), 89% of wounds remained closed. In 2 patients with delayed, recurrent periprosthetic infection (mean, 7.5 weeks), serial debridement/hardware removal mandated free tissue transfer for composite defect reconstruction. Traction-assisted internal NPWT provides a safe and effective alternative to reduce wound burden and facilitate definitive closure in cases where delayed reconstruction of high-tension wounds is planned.
Accelerating a MPEG-4 video decoder through custom software/hardware co-design
NASA Astrophysics Data System (ADS)
Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II, and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition developed here. This research is part of the ARTEMI project and its main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as the target technology and the design of multimedia systems for broadcasting networks as the reference application.
Extravehicular activity training and hardware design consideration
NASA Technical Reports Server (NTRS)
Thuot, P. J.; Harbaugh, G. J.
1995-01-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
Efficient Hardware Implementation of the Lightweight Block Encryption Algorithm LEA
Lee, Donggeon; Kim, Dong-Chan; Kwon, Daesung; Kim, Howon
2014-01-01
Recently, due to the advent of resource-constrained trends, such as smartphones and smart devices, the computing environment is changing. Because our daily life is deeply intertwined with ubiquitous networks, the importance of security is growing. A lightweight encryption algorithm is essential for secure communication between these kinds of resource-constrained devices, and many researchers have been investigating this field. Recently, a lightweight block cipher called LEA was proposed. LEA was originally targeted for efficient implementation on microprocessors, as it is fast when implemented in software and, furthermore, it has a small memory footprint. To reflect recent technology, all required calculations utilize 32-bit wide operations. In addition, the algorithm is composed not of complex S-Box-like structures but of simple Addition, Rotation, and XOR operations. To the best of our knowledge, this paper is the first report on a comprehensive hardware implementation of LEA. We present various hardware structures and their implementation results according to key sizes. Even though LEA was originally targeted at software efficiency, it also shows high efficiency when implemented as hardware. PMID:24406859
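To make the "no S-Box, just Addition/Rotation/XOR" point concrete, the sketch below shows an ARX-style round on four 32-bit words, patterned loosely after LEA's structure. It is emphatically not the LEA specification: the rotation distances, word routing, and the placeholder round key are illustrative, and the key schedule is omitted entirely.

```python
MASK = 0xFFFFFFFF                # keep Python ints within 32 bits

def rol(x: int, r: int) -> int:
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_round(state, rk):
    """One illustrative ARX round: XOR in round-key words, add
    neighbouring words modulo 2**32, then rotate. Not real LEA."""
    x0, x1, x2, x3 = state
    y0 = rol(((x0 ^ rk[0]) + (x1 ^ rk[1])) & MASK, 9)
    y1 = rol(((x1 ^ rk[2]) + (x2 ^ rk[3])) & MASK, 27)
    y2 = rol(((x2 ^ rk[4]) + (x3 ^ rk[5])) & MASK, 29)
    return [y0, y1, y2, x0]      # x0 rotates into the last slot

state = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEBABE]
rk = [0x0F1E2D3C] * 6            # placeholder round key, not a LEA schedule
print([f"{w:08x}" for w in arx_round(state, rk)])
```

The appeal for hardware is visible even in this toy: each output word costs only adders, XOR gates, and fixed wiring for the rotations, with no lookup-table memories.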
Reducing gas generators and methods for generating a reducing gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scotto, Mark Vincent; Perna, Mark Anthony
One embodiment of the present invention is a unique reducing gas generator. Another embodiment is a unique method for generating a reducing gas. Other embodiments include apparatuses, systems, devices, hardware, methods, and combinations for generating reducing gas. Further embodiments, forms, features, aspects, benefits, and advantages of the present application will become apparent from the description and figures provided herewith.
NASA Astrophysics Data System (ADS)
Riggs, William R.
1994-05-01
SHARP is a Navy-wide logistics technology development effort aimed at reducing the acquisition costs, support costs, and risks of military electronic weapon systems while increasing the performance capability, reliability, maintainability, and readiness of these systems. Lower life cycle costs for electronic hardware are achieved through technology transition, standardization, and reliability enhancement to improve system affordability and availability as well as enhancing fleet modernization. Advanced technology is transferred into the fleet through hardware specifications for weapon system building blocks of standard electronic modules, standard power systems, and standard electronic systems. The product lines are all defined with respect to their size, weight, I/O, environmental performance, and operational performance. This method of defining the standard is very conducive to inserting new technologies into systems using the standard hardware. This is the approach taken thus far in inserting photonic technologies into SHARP hardware. All of the efforts have been related to module packaging, i.e., interconnects, component packaging, and module developments. Fiber optic interconnects are discussed in this paper.
Vapor Compression Distillation Urine Processor Lessons Learned from Development and Life Testing
NASA Technical Reports Server (NTRS)
Hutchens, Cindy F.; Long, David A.
1999-01-01
Vapor Compression Distillation (VCD) is the chosen technology for urine processing aboard the International Space Station (ISS). Development and life testing over the past several years have brought to the forefront problems and solutions for the VCD technology. Testing between 1992 and 1998 has been instrumental in developing estimates of hardware life and reliability. It has also helped improve the hardware design in ways that either correct existing problems or enhance the existing design of the hardware. The testing has increased confidence in the VCD technology and reduced technical and programmatic risks. This paper summarizes the test results and changes that have been made to the VCD design.
NASA Technical Reports Server (NTRS)
Rieker, Lorra L.; Haraburda, Francis M.
1989-01-01
The National Aeronautics and Space Administration has adopted the policy to achieve the maximum practical level of commonality for the Space Station Freedom program in order to significantly reduce life cycle costs. Commonality means using identical or similar hardware/software for meeting common sets of functionally similar requirements. Information on how the concept of commonality is being implemented with respect to electric power system hardware for the Space Station Freedom and the U.S. Polar Platform is presented. Included is a historical account of the candidate common items which have the potential to serve the same power system functions on both Freedom and the Polar Platform.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting
Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
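The resource savings from probabilistic computing come from encoding values as random bitstreams, so that arithmetic collapses into single logic gates. As a rough software illustration of that principle (not the authors' circuit), the sketch below multiplies two numbers in [0, 1] with a bitwise AND over Bernoulli bitstreams; the stream length is an assumed parameter that trades accuracy for time.

```python
import numpy as np

def to_stream(p, n_bits, rng):
    """Encode a probability p in [0, 1] as a Bernoulli bitstream of length n_bits."""
    return rng.random(n_bits) < p

def from_stream(stream):
    """Decode a bitstream back to a probability estimate (its mean)."""
    return stream.mean()

rng = np.random.default_rng(0)
a, b = 0.6, 0.5
# In stochastic computing, multiplying two independent bitstreams
# needs only a single AND gate per bit instead of a hardware multiplier.
product_stream = to_stream(a, 4096, rng) & to_stream(b, 4096, rng)
print(from_stream(product_stream))  # close to 0.3, within sampling error
```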
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can therefore be applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
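A direct piecewise-affine virtual sensor evaluates one affine law per region of the regressor space, which is what makes it cheap in hardware. The sketch below is a deliberately tiny software analogue with an invented one-dimensional partition and made-up coefficients; the paper's simplicial partitions and VHDL generation are far more general.

```python
import numpy as np

# Minimal sketch of a direct piecewise-affine virtual sensor: the regressor
# (recent measured inputs/outputs) is mapped to an estimate by an affine law
# selected per region. Breakpoints and coefficients here are illustrative,
# not taken from the paper; in practice they are fit to input/output data.
breakpoints = np.array([-1.0, 0.0, 1.0])  # region boundaries on one regressor
A = np.array([[0.5, 0.1], [1.2, -0.3], [0.8, 0.0], [0.2, 0.4]])  # per-region gains
b = np.array([0.0, 0.1, -0.2, 0.3])       # per-region offsets

def virtual_sensor(regressor):
    """Estimate an unmeasured variable from a 2-element regressor vector."""
    region = np.searchsorted(breakpoints, regressor[0])  # locate active region
    return A[region] @ regressor + b[region]

print(virtual_sensor(np.array([0.4, -0.1])))
```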
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
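The tutorial's own examples are in MATLAB and R; as a language-neutral illustration of the same embarrassingly parallel pattern, here is a hedged Python sketch in which independent Monte Carlo replications (a toy failure-probability model invented for this example) are farmed out to worker processes.

```python
from multiprocessing import Pool
import random

def one_replication(seed):
    """One independent simulation replication: a toy Monte Carlo
    estimate of the probability that a random system state fails."""
    rng = random.Random(seed)
    failures = sum(rng.random() < 0.05 for _ in range(10_000))
    return failures / 10_000

if __name__ == "__main__":
    seeds = range(100)  # independent replications -> embarrassingly parallel
    with Pool() as pool:
        estimates = pool.map(one_replication, seeds)
    print(sum(estimates) / len(estimates))
```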
NASA Astrophysics Data System (ADS)
Borisov, A.
2018-05-01
The velocity vector field in a cyclone separator with a screw insert is considered in this article. The velocity vector field was modeled in SolidWorks, and the tangential, axial, and radial velocities were investigated. A hardware-software complex was also developed that makes it possible to obtain velocity data inside the cyclone separator. The experimental results showed that, on flour dusts, the efficiency of the cyclone separator under consideration exceeded 99.5% at air flow rates of 376 m3/h, 472 m3/h, and 516 m3/h, with a pressure drop ΔP of less than 600 Pa. The velocity in the inlet branch of the screw insert was 18-20 m/s, and at the exit of the screw insert the airflow velocity was 50-70 m/s.
Software control and system configuration management - A process that works
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1983-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm
NASA Astrophysics Data System (ADS)
Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing
2018-03-01
As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis of its hardware and time complexities reveals that, by appropriately setting the degree of parallelism and the decomposition factor for a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different numbers of taps, along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with a 0.18-μm SMIC library. The comparisons show that the proposed design has lower area-time complexity and power consumption when the number of filter taps is larger than 48.
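The key identity behind the first-order moment approach is that a FIR inner product can be regrouped by input value: y = sum_k h(k)x(n-k) = sum_v v*m_v, where m_v collects the coefficients whose sample equals v, and sum_v v*m_v can then be evaluated with additions alone. The sketch below is a software model of that identity under assumed small non-negative integer samples; it illustrates the idea, not the paper's parallel hardware structure.

```python
import numpy as np

def fir_first_order_moment(h, window, n_levels):
    """Compute one FIR output y = sum_k h[k] * x[k] by regrouping taps into a
    first-order moment sum_v v * m_v, where m_v collects the coefficients whose
    input sample equals v. Illustrative software model, assuming non-negative
    integer samples in [0, n_levels)."""
    m = np.zeros(n_levels)
    for hk, xk in zip(h, window):
        m[xk] += hk                      # bucket coefficients by sample value
    # Fast first-order moment: sum_v v*m_v via suffix sums, additions only
    # (no general multiplications), mirroring the shift-add hardware structure.
    suffix = np.cumsum(m[::-1])[::-1]    # suffix[v] = sum_{u >= v} m[u]
    return suffix[1:].sum()

h = np.array([1.0, 2.0, -1.0, 0.5])
x = np.array([3, 0, 2, 1])               # 2-bit samples, n_levels = 4
print(fir_first_order_moment(h, x, 4), np.dot(h, x))  # both give 1.5
```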
Software control and system configuration management: A systems-wide approach
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1984-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.
2016-09-01
Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts in GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulation for GPUs are included, along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.
An Alternative Approach to Human Servicing of Crewed Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John R.; Alpert, Brian K.
2017-01-01
As crewed spacecraft have grown larger and more complex, they have come to rely on spacewalks, or Extravehicular Activities (EVA), for assembly and to assure mission success. Typically, these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on board at all times. Maintaining this capability requires up-mass, volume for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology, utilizing either launch-on-need hardware and crew or regularly scheduled missions to provide EVA capability for space stations in low Earth orbit after assembly complete. Much the same way that one would call a repairman to fix something at home, these EVAs are dedicated to maintenance and upgrades of the orbiting station. For crew safety contingencies, it is assumed the station would be designed such that the crew could either solve those issues from inside the spacecraft or use the docked Earth-to-orbit vehicles as a return lifeboat, in the same manner as the International Space Station (ISS), which does not rely on EVA for crew safety related contingencies. This approach would reduce ground training requirements for long duration crews, save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, and lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to any failures, such as IVA systems reconfiguration or jumper installation, as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current unplanned EVAs are conducted on ISS, including the time required for preparation, and offer an alternative for future spacecraft. As this methodology relies on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system, possibly including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the future space station would be an important consideration in how much time was available for EVA preparation after a failure. Ideally, the fault tolerance of the station would allow for the maintenance tasks to be grouped such that they could be handled by regularly scheduled maintenance visits and not contingency launches. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft. This is only one of several ideas that could be used to reduce or eliminate a station's reliance on rapid turnaround EVAs using on-board crew. Others could include having shirt-sleeve access to critical systems or utilizing low pressure, temporarily pressurized equipment bays.
Numerical propulsion system simulation
NASA Technical Reports Server (NTRS)
Lytle, John K.; Remaklus, David A.; Nichols, Lester D.
1990-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept, which is described here, will provide that framework.
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Schulze, Arthur E.; Wood, H. J., Jr.
1989-01-01
The objective was to define the factors which space flight hardware developers and planners should consider when determining: (1) the number of hardware units required to support the program; (2) the design level of the units; and (3) the most efficient means of utilization of the units. The analysis considered technology risk, maintainability, reliability, and safety design requirements for achieving the delivery of highest quality flight hardware. Relative cost impacts of the utilization of prototyping were identified. The development of Space Biology Initiative research hardware will involve intertwined hardware/software activities. Experience has shown that software development can be an expensive portion of a system design program. While software prototyping could imply the development of a significantly different end item, an operational system prototype must be considered to be a combination of software and hardware. Hundreds of factors were identified that could be considered in determining the quantity and types of prototypes that should be constructed. In developing the decision models, these factors were combined and reduced by approximately ten-to-one in order to develop a manageable structure based on the major determining factors. The Baseline SBI hardware list of Appendix D was examined and reviewed in detail; however, from the facts available it was impossible to identify the exact types and quantities of prototypes required for each of these items. Although the factors that must be considered could be enumerated for each of these pieces of equipment, the exact status and state of development of the equipment is variable and uncertain at this time.
Space shuttle engineering and operations support. Avionics system engineering
NASA Technical Reports Server (NTRS)
Broome, P. A.; Neubaur, R. J.; Welsh, R. T.
1976-01-01
The shuttle avionics integration laboratory (SAIL) requirements for supporting the Spacelab/orbiter avionics verification process are defined. The principal topics are a Spacelab avionics hardware assessment, test operations center/electronic systems test laboratory (TOC/ESL) data processing requirements definition, SAIL (Building 16) payload accommodations study, and projected funding and test scheduling. Because of the complex nature of the Spacelab/orbiter computer systems, the PCM data link, and the high rate digital data system hardware/software relationships, early avionics interface verification is required. The SAIL is a prime candidate test location to accomplish this early avionics verification.
RAMP: A fault tolerant distributed microcomputer structure for aircraft navigation and control
NASA Technical Reports Server (NTRS)
Dunn, W. R.
1980-01-01
RAMP consists of distributed sets of parallel computers partitioned on the basis of software and packaging constraints. To minimize hardware and software complexity, the processors operate asynchronously. It was shown that, through the design of asymptotically stable control laws, data errors due to the asynchronism were minimized. It was further shown that by designing control laws with this property and making minor hardware modifications to the RAMP modules, the system became inherently tolerant to intermittent faults. A laboratory version of RAMP was constructed and is described in the paper along with the experimental results.
NASA Astrophysics Data System (ADS)
Mochalov, V. A.; Firstov, P. P.; Cherneva, N. V.; Sannikov, D. V.; Akbashev, R. R.; Uvarov, V. N.; Shevtsov, B. M.; Druzhin, G. I.; Mochalova, A. V.
2017-11-01
In the region of the Northern group of volcanoes on the Kamchatka peninsula, a distributed network is being planned to monitor VLF-range electromagnetic radiation and to locate lightning strokes. It will allow researchers to register weaker electromagnetic pulses from lightning strokes than the World Wide Lightning Location Network can detect. The hardware-software complex of the network under construction is presented. The capabilities of the available and the developing hardware and software to investigate natural phenomena associated with lightning activity are described.
Robotics control using isolated word recognition of voice input
NASA Technical Reports Server (NTRS)
Weiner, J. M.
1977-01-01
A speech input/output system is presented that can be used to communicate with a task oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility comprises a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100 commands), syntactically constrained vocabulary, and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
Pfeil, Thomas; Potjans, Tobias C.; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists. PMID:22822388
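To make the resolution trade-off concrete, the sketch below quantizes continuous weights onto a uniform 4-bit grid, the kind of discretization the study analyzes. The FACETS hardware's actual weight mapping may differ; this is an illustrative software stand-in with an assumed uniform grid over the weight range.

```python
import numpy as np

def discretize_weights(w, n_bits=4):
    """Map continuous synaptic weights onto the 2**n_bits levels of a
    hardware synapse (uniform grid over the weight range), as a software
    stand-in for the 4-bit resolution discussed above."""
    levels = 2 ** n_bits
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return w_min + np.round((w - w_min) / step) * step

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, 1000)           # continuous "software" weights
w4 = discretize_weights(w, n_bits=4)
# Quantization error is bounded by half a grid step:
print(np.abs(w - w4).max() <= (w.max() - w.min()) / (2**4 - 1) / 2)  # True
```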
The J-2X Upper Stage Engine: From Heritage to Hardware
NASA Technical Reports Server (NTRS)
Byrd, Thomas
2008-01-01
NASA's Global Exploration Strategy requires safe, reliable, robust, efficient transportation to support sustainable operations from Earth to orbit and into the far reaches of the solar system. NASA selected the Ares I crew launch vehicle and the Ares V cargo launch vehicle to provide that transportation. Guiding principles in creating the architecture represented by the Ares vehicles were the maximum use of heritage hardware and legacy knowledge, particularly Space Shuttle assets, and commonality between the Ares vehicles where possible to streamline the hardware development approach and reduce programmatic, technical, and budget risks. The J-2X exemplifies those goals. It was selected by the Exploration Systems Architecture Study (ESAS) as the upper stage propulsion for the Ares I Upper Stage and the Ares V Earth Departure Stage (EDS). The J-2X is an evolved version of the historic J-2 engine that successfully powered the second stage of the Saturn IB launch vehicle and the second and third stages of the Saturn V launch vehicle. The Constellation architecture, however, requires performance greater than its predecessor's. The new architecture calls for larger payloads delivered to the Moon and demands greater loss-of-mission reliability and numerous other requirements associated with human rating that were not applied to the original J-2. As a result, the J-2X must operate at much higher temperatures, pressures, and flow rates than the heritage J-2, making it one of the highest performing gas generator cycle engines ever built, approaching the efficiency of more complex staged combustion engines. Development is focused on early risk mitigation, component and subassembly test, and engine system test. The development plans include testing engine components, including the subscale injector, main igniter, powerpack assembly (turbopumps, gas generator, and associated ducting and structural mounts), full-scale gas generator, valves, and control software with hardware-in-the-loop. Testing expanded in 2007, accompanied by the refinement of the design through several key milestones. This paper discusses those 2007 tests and milestones, as well as key developments in 2008.
NASA Astrophysics Data System (ADS)
Martin, Adrian
As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
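At its core, MDR pools multi-locus genotype cells into high- and low-risk groups and scores the pooled classifier; the exhaustive search over SNP pairs (or higher-order sets) is what the GPU parallelizes. Below is a hedged, single-pair software sketch on synthetic data; it omits the cross-validation MDR normally uses and is not the authors' GPU implementation.

```python
import numpy as np

def mdr_two_locus(geno_a, geno_b, is_case):
    """Core MDR step for one SNP pair: pool the 3x3 genotype combinations
    into 'high risk' cells where the case:control ratio exceeds the overall
    ratio, then score the pooled high/low-risk classifier."""
    overall = is_case.sum() / max((~is_case).sum(), 1)
    high_risk = np.zeros((3, 3), dtype=bool)
    for a in range(3):
        for b in range(3):
            cell = (geno_a == a) & (geno_b == b)
            cases, controls = (cell & is_case).sum(), (cell & ~is_case).sum()
            high_risk[a, b] = cases > overall * controls
    predicted = high_risk[geno_a, geno_b]   # classify each individual
    return (predicted == is_case).mean()    # classification accuracy

rng = np.random.default_rng(2)
ga, gb = rng.integers(0, 3, 500), rng.integers(0, 3, 500)  # genotypes 0/1/2
status = rng.random(500) < 0.5                             # case/control labels
print(mdr_two_locus(ga, gb, status))
```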
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost, compact supercomputing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding, and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI), which would study the greenhouse gases CO2, C2H, H2O, O3, O2, and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of the technologies required for the design and construction of SVIP and EASI and to advance the spatial-spectral imaging and large-scale space interferometry science and engineering.
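For reference, the FFT2 kernel at the heart of such a loop is a one-liner in software; the engineering challenge described above is meeting an 800 Hz loop deadline on flight-suitable hardware. A minimal numpy illustration on synthetic data (not SVIP imagery):

```python
import numpy as np

# The computational kernel discussed above is the 2-D FFT; this shows the
# equivalent software step on a synthetic 256x256 image.
image = np.random.default_rng(3).random((256, 256))
spectrum = np.fft.fft2(image)            # forward 2-D FFT
recovered = np.fft.ifft2(spectrum).real  # inverse transform
print(np.allclose(image, recovered))     # True: round-trip within precision

# An 800 Hz closed loop leaves only 1.25 ms per frame for the FFT2 plus the
# rest of the pipeline, which motivates the DSP/FPGA implementation.
```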
Creating raptor benefits from powerline problems
Kochert, Michael N.; Olendorff, R.R.
1999-01-01
Powerlines benefit raptors by providing enhanced nesting and roosting sites. However, they also can kill raptors by electrocution and raptors can interfere with power transmission. The electrocution problem has been reduced by correcting existing lethal lines and implementing electrocution safe designs for new lines. Remedial actions include pole modifications, perch management and insulation of wires and hardware. New line designs provide for proper insulation and adequate spacing of conductors and grounded hardware. Nesting platforms can reduce power transmission problems and enhance the benefits of nesting on powerlines. A combination of perch deterrents and insulator shields is a positive, cost-effective approach to managing bird contamination that allows birds to continue roosting on the towers.
NASA Technical Reports Server (NTRS)
Miller, Darcy
2000-01-01
Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "the debris that is left in or around flight hardware, where it could cause damage to that flight hardware" (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is not a single solution to help reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible, in the form of specialty hand tools which secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools while performing tasks with risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task that is representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers/managers, and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the differences in performance of the three participant groups.
Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M
2016-01-01
The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame mandatory. We present in this review the past and current trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the question of which hardware platforms have been used for running these kinds of Soft Computing techniques.
An Embedded Reconfigurable Logic Module
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.; Klenke, Robert H.; Shams, Qamar A. (Technical Monitor)
2002-01-01
A Miniature Embedded Reconfigurable Computer and Logic (MERCAL) module has been developed and verified. MERCAL was designed to be a general-purpose, universal module that can provide significant hardware and software resources to meet the requirements of many of today's complex embedded applications. This is accomplished in the MERCAL module by combining a sub-credit-card-size PC in a DIMM form factor with a Xilinx Spartan II FPGA. The PC has the ability to download program files to the FPGA to configure it for different hardware functions and to transfer data to and from the FPGA via the PC's ISA bus during run time. The MERCAL module combines, in a compact package, the computational power of a 133 MHz PC with up to 150,000 gate equivalents of digital logic that can be reconfigured by software. The general architecture and functionality of the MERCAL hardware and system software are described.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
The Apollo 204 command module is seen in storage at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and 81 cartons of related hardware and investigative data occupy 3,300 cubic feet of Langley's storage space. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34, Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage at the site of all the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
The Apollo 204 command module is seen in storage at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and 81 cartons of related hardware and investigative data occupy 3,300 cubic feet of warehouse storage space. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34 at Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage with the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
A warehouse holding Apollo 204 hardware and investigative data is seen at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and 81 cartons of data and other related materials occupy 3,300 cubic feet. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34 at Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage with the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Langley Storage facility which houses remains of Apollo 204 craft
NASA Technical Reports Server (NTRS)
1990-01-01
Part of 81 cartons of Apollo 204 hardware and investigation data are seen in storage at Langley Research Center in Virginia. The command module, damaged in the 1967 Apollo fire, its heat shield, booster protective cover and the cartons occupy 3,300 cubic feet of Langley's storage space. Astronauts Virgil I. Grissom, Roger B. Chaffee and Edward H. White II perished in the Apollo 204 spacecraft fire on Jan. 27, 1967 on Launch Complex 34, Cape Canaveral. The hardware has been stored at Langley since 1967. PLEASE NOTE UPDATE: In early May of 1990, NASA announced plans to move the hardware and related data to permanent storage with the Challenger debris in an abandoned missile silo at Cape Canaveral Air Force Station (CCAFS), Florida. However, at month's end, NASA announced it had decided to keep the capsule at Langley for an indefinite period of time.
Ground Operations Aerospace Language (GOAL). Volume 3: Data bank
NASA Technical Reports Server (NTRS)
1973-01-01
The GOAL (Ground Operations Aerospace Language) test programming language was developed for use in ground checkout operations in a space vehicle launch environment. To insure compatibility with a maximum number of applications, a systematic and error-free method of referencing command/response (analog and digital) hardware measurements is a principal feature of the language. Central to the concept of requiring the test language to be independent of launch complex equipment and terminology is that of addressing measurements via symbolic names that have meaning directly in the hardware units being tested. To form the link from the test program through the test system interfaces to the units being tested, the concept of a data bank has been introduced. The data bank is actually a large cross-reference table that provides pertinent hardware data such as interface unit addresses, data bus routings, or any other system values required to locate and access measurements.
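The data bank is conceptually a lookup from symbolic measurement names to hardware access data. A minimal sketch of that idea follows, with invented names and fields rather than actual GOAL records:

```python
# Minimal sketch of the data bank idea: test programs reference measurements
# by symbolic name, and the data bank resolves the name to the hardware
# access path. Names and fields here are invented for illustration.
DATA_BANK = {
    "FUEL_TANK_PRESSURE": {"interface_unit": 0x2A, "bus_route": 3, "kind": "analog"},
    "ENGINE_CUTOFF_CMD":  {"interface_unit": 0x11, "bus_route": 1, "kind": "digital"},
}

def resolve(symbolic_name):
    """Return the hardware access record for a symbolic measurement name,
    keeping the test program independent of launch complex wiring."""
    return DATA_BANK[symbolic_name]

print(resolve("FUEL_TANK_PRESSURE"))
```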
Building the Digital Library Infrastructure: A Primer.
ERIC Educational Resources Information Center
Tebbetts, Diane R.
1999-01-01
Provides a framework for examining the complex infrastructure needed to successfully implement a digital library. Highlights include database development, online public-access catalogs, interactive technical services, full-text documents, hardware and wiring, licensing, access, and security issues. (Author/LRW)
Analysis of Selected Enhancements to the En Route Central Computing Complex
DOT National Transportation Integrated Search
1981-09-01
This report analyzes selected hardware enhancements that could improve the performance of the 9020 computer systems, which are used to provide en route air traffic control services. These enhancements could be implemented quickly, would be relatively...
A Multi-Purpose Modular Electronics Integration Node for Exploration Extravehicular Activity
NASA Technical Reports Server (NTRS)
Hodgson, Edward; Papale, William; Wichowski, Robert; Rosenbush, David; Hawes, Kevin; Stankiewicz, Tom
2013-01-01
As NASA works to develop an effective integrated portable life support system design for exploration extravehicular activity (EVA), alternatives to the current system's electrical power and control architecture are needed to support new requirements for flexibility, maintainability, reliability, and reduced mass and volume. Experience with the current Extravehicular Mobility Unit (EMU) has demonstrated that the current architecture, based on a central power supply, monitoring, and control unit with dedicated analog wiring harness connections to active components in the system, has a significant impact on system packaging and seriously constrains design flexibility in adapting to component obsolescence and changing system needs over time. An alternative architecture based on the use of a digital data bus offers possible wiring harness and system power savings, but risks significant penalties in component complexity and cost. A hybrid architecture that relies on a set of electronic and power interface nodes serving functional modules within the Portable Life Support System (PLSS) is proposed to minimize both packaging and component-level penalties. A common interface node hardware design can further reduce penalties by reducing the nonrecurring development costs, making miniaturization more practical, maximizing opportunities for maturation and reliability growth, providing enhanced fault tolerance, and providing stable design interfaces for system components and a central control. Adaptation to varying specific module requirements can be achieved with modest changes in firmware code within the module. A preliminary design effort has developed a common set of hardware interface requirements and functional capabilities for such a node based on the anticipated modules comprising an exploration PLSS, and a prototype node has been designed, assembled, programmed, and tested. One instance of such a node has been adapted to support testing the swingbed carbon dioxide and humidity control element in NASA's advanced PLSS 2.0 test article. This paper will describe the common interface node design concept, results of the prototype development and test effort, and plans for use in NASA PLSS 2.0 integrated tests.
Single-channel ground and airborne radio system (SINCGARS) based remote control for the M1 Abrams
NASA Astrophysics Data System (ADS)
Urda, Joseph R.
1995-04-01
Remote control of the M1 Abrams Main Battle Tank through a minefield breach operation will remove the vehicle crew from the inherent hazard. A successful remote control system will provide automotive control yet not impair normal operation. This requires a minimum of physical parts and an unobtrusive installation. Most importantly, a system failure must not impair regular operation as a manned system. The system itself need not be complex. A minefield breach requires only simple control of automotive functions and a mine plow interface. Control hardware for the M1-A1 can be reduced to two linear actuators, an electrical interface for the engine control unit, an interface for the mine plow, and the associated cables. Communication between vehicle control and operator control takes place over the vehicle's organic radio (typically SINCGARS). This helps reduce the number of special purpose components for the remote control device. The device is currently awaiting an automotive safety test to prepare for its safety release. Because of the specific nature of the MIL-STD-1553B data bus, the device will not control an M1-A2 Main Battle Tank. The architecture will allow control of the M1-A2 through the 1553B data bus; however, the physical hardware has not been constructed. The control scheme will not change. The communication interface will provide greater flexibility when interfacing to the vehicle tactical radio. Operational utility will be determined by U.S. Army Training and Doctrine Command personnel. The obvious benefit is that if a remote tank is lost during a minefield breach, the crew is saved.
Warfarin genotyping in a single PCR reaction for microchip electrophoresis.
Poe, Brian L; Haverstick, Doris M; Landers, James P
2012-04-01
Warfarin is the most commonly prescribed oral anticoagulant medication but also is the second leading cause of emergency room visits for adverse drug reactions. Genetic testing for warfarin sensitivity may reduce hospitalization rates, but prospective genotyping is impeded in part by the turnaround time and costs of genotyping. Microfluidics-based assays can reduce reagent consumption and analysis time; however, no current assay has integrated multiplexed allele-specific PCR for warfarin genotyping with electrophoretic microfluidics hardware. Ideally, such an assay would use a single PCR reaction and, without further processing, a single microchip electrophoresis (ME) run to determine the 3 single-nucleotide polymorphisms (SNPs) affecting warfarin sensitivity [i.e., CYP2C9 (cytochrome P450, family 2, subfamily C, polypeptide 9) *2, CYP2C9 *3, and the VKORC1 (vitamin K epoxide reductase complex 1) A/B haplotype]. We designed and optimized primers for a fully multiplexed assay to examine 3 biallelic SNPs with the tetraprimer amplification refractory mutation system (T-ARMS). The assay was developed with conventional PCR equipment and demonstrated for microfluidic infrared-mediated PCR. Genotypes were determined by ME on the basis of the pattern of PCR products. Thirty-five samples of human genomic DNA were analyzed with this multiplex T-ARMS assay, and 100% of the genotype determinations agreed with the results obtained by other validated methods. The sample population included several genotypes conferring warfarin sensitivity, with both homozygous and heterozygous genotypes for each SNP. Total analysis times for the PCR and ME were approximately 75 min (1-sample run) and 90 min (12-sample run). This multiplexed T-ARMS assay coupled with microfluidics hardware constitutes a promising avenue for an inexpensive and rapid platform for warfarin genotyping.
Hardware architecture design of a fast global motion estimation method
NASA Astrophysics Data System (ADS)
Liang, Chaobing; Sang, Hongshi; Shen, Xubang
2015-12-01
VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps to keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory and, for a sequence with a resolution of 352x288 and a frequency of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimum memory bandwidth requirement is appreciated.
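The bandwidth saving comes from warping only each patch's center coordinate and then fetching one contiguous 5-by-5 window that is guaranteed to contain the warped 3-by-3 patch. Below is a hedged software model of that access pattern; the homography is invented for illustration, and the actual design operates on Gaussian pyramid levels held in SDRAM.

```python
import numpy as np

def fetch_warped_window(reference, center_xy, H):
    """Warp only the patch center with the current global-motion estimate H
    (here a 3x3 homography), then burst-read the 5x5 window that covers the
    warped 3x3 patch. Software model of the memory-access idea above."""
    x, y, w = H @ np.array([center_xy[0], center_xy[1], 1.0])
    cx, cy = int(round(x / w)), int(round(y / w))
    return reference[cy - 2:cy + 3, cx - 2:cx + 3]   # contiguous 5x5 read

ref = np.arange(352 * 288, dtype=np.float32).reshape(288, 352)
H = np.array([[1.0, 0.0, 1.5],    # illustrative small translation
              [0.0, 1.0, -0.5],
              [0.0, 0.0, 1.0]])
print(fetch_warped_window(ref, (100, 100), H).shape)  # (5, 5)
```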
Similarity constraints in testing of cooled engine parts
NASA Technical Reports Server (NTRS)
Colladay, R. S.; Stepka, F. S.
1974-01-01
A study is made of the effect of testing cooled parts of current and advanced gas turbine engines at the reduced temperature and pressure conditions which maintain similarity with the engine environment. Some of the problems facing the experimentalist in evaluating heat transfer and aerodynamic performance when hardware is tested at conditions other than the actual engine environment are considered. Low temperature and pressure test environments can simulate the performance of actual-size prototype engine hardware within the tolerance of experimental accuracy if appropriate similarity conditions are satisfied. Failure to adhere to these similarity constraints, because of test facility limitations or other reasons, can result in a number of serious errors in projecting the performance of test hardware to engine conditions.
Halbach Magnetic Rotor Development
NASA Technical Reports Server (NTRS)
Gallo, Christopher A.
2008-01-01
The NASA John H. Glenn Research Center has a wealth of experience in Halbach array technology through the Fundamental Aeronautics Program. The goals of the program include improving aircraft efficiency, reliability, and safety. The concept of a Halbach magnetically levitated electric aircraft motor will help reduce harmful emissions, reduce the Nation's dependence on fossil fuels, increase efficiency and reliability, reduce maintenance, and decrease operating noise levels. Experimental hardware systems were developed in the GRC Engineering Development Division to validate the basic principles described herein and the theoretical work that was performed. A number of Halbach magnetic rotors have been developed and tested under this program. A separate test hardware setup was developed to characterize each of the rotors. A second hardware setup was developed to test the levitation characteristics of the rotors. Each system centered on a unique Halbach array rotor. Each rotor required original design and fabrication techniques. A 4 in. diameter rotor was developed to test the radial levitation effects for use as a magnetic bearing. To show scalability from the 4 in. rotor, a 1 in. rotor was developed to also test radial levitation effects. The next rotor to be developed was 20 in. in diameter, again to show scalability from the 4 in. rotor. An axial rotor was developed to determine the force that could be generated to position the rotor axially while it is rotating. With both radial and axial magnetic bearings, the rotor would be completely suspended magnetically. The purpose of this report is to document the development of a series of Halbach magnetic rotors to be used in testing. The design, fabrication, and assembly of the rotors will be discussed, as well as the hardware developed to test the rotors.
An experiment in vision based autonomous grasping within a reduced gravity environment
NASA Technical Reports Server (NTRS)
Grimm, K. A.; Erickson, J. D.; Anderson, G.; Chien, C. H.; Hewgill, L.; Littlefield, M.; Norsworthy, R.
1992-01-01
The National Aeronautics and Space Administration's Reduced Gravity Program (RGP) offers opportunities for experimentation in gravities of less than one-g. The Extravehicular Activity Helper/Retriever (EVAHR) robot project of the Automation and Robotics Division at the Lyndon B. Johnson Space Center in Houston, Texas, is undertaking a task that will culminate in a series of tests in simulated zero-g using this facility. A subset of the final robot hardware consisting of a three-dimensional laser mapper, a Robotics Research 807 arm, a Jameson JH-5 hand, and the appropriate interconnect hardware/software will be used. This equipment will be flown on the RGP's KC-135 aircraft. This aircraft will fly a series of parabolas creating the effect of zero-g. During the periods of zero-g, a number of objects will be released in front of the fixed-base robot hardware in both static and dynamic configurations. The system will then inspect the object, determine the object's pose, plan a grasp strategy, and execute the grasp. This must all be accomplished in the approximately 27 seconds of zero-g.
Training Scalable Restricted Boltzmann Machines Using a Quantum Annealer
NASA Astrophysics Data System (ADS)
Kumar, V.; Bass, G.; Dulny, J., III
2016-12-01
Machine learning and the optimization involved therein is of critical importance for commercial and military applications. Due to the computational complexity of many-variable optimization, the conventional approach is to employ meta-heuristic techniques to find suboptimal solutions. Quantum Annealing (QA) hardware offers a completely novel approach with the potential to obtain significantly better solutions with large speed-ups compared to traditional computing. In this presentation, we describe our development of new machine learning algorithms tailored for QA hardware. We are training restricted Boltzmann machines (RBMs) using QA hardware on large, high-dimensional commercial datasets. Traditional optimization heuristics such as contrastive divergence and other closely related techniques are slow to converge, especially on large datasets. Recent studies have indicated that QA hardware, when used as a sampler, provides better training performance compared to conventional approaches. Most of these studies have been limited to moderately sized datasets due to the hardware restrictions imposed by existing QA devices, which make it difficult to solve real-world problems at scale. In this work we develop novel strategies to circumvent this issue. We discuss scale-up techniques such as enhanced embedding and partitioned RBMs which allow large commercial datasets to be learned using QA hardware. We present our initial results obtained by training an RBM as an autoencoder on an image dataset. The results obtained so far indicate that the convergence rates can be improved significantly by increasing RBM network connectivity. These ideas can be readily applied to generalized Boltzmann machines and we are currently investigating this in an ongoing project.
HiCAT Software Infrastructure: Safe hardware control with object oriented Python
NASA Astrophysics Data System (ADS)
Moriarty, Christopher; Brooks, Keira; Soummer, Remi
2018-01-01
High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit the air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object oriented Python-based infrastructure to safely automate hardware control and optical experiments. Specifically, it conducts high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
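The safety pattern described above is standard Python and easy to sketch. The following fragment is a generic illustration of the two mechanisms named in the abstract, a context manager that guarantees cleanup and a monitoring child process; it is not the HiCAT codebase, and all names are hypothetical.

    import multiprocessing
    import time

    class SafeDevice:
        # Context manager that guarantees the hardware resource is
        # released even if the experiment raises an unhandled exception.
        def __init__(self, name):
            self.name = name
        def __enter__(self):
            print("opening", self.name)
            return self
        def __exit__(self, exc_type, exc, tb):
            print("closing", self.name)  # runs on success and on error
            return False  # propagate any exception after cleanup

    def monitor(safety_event):
        # Child process: poll environment sensors (stubbed here) and set
        # the event on a fault so open resources get closed gracefully.
        while not safety_event.is_set():
            time.sleep(1.0)

    if __name__ == "__main__":
        safety_event = multiprocessing.Event()
        multiprocessing.Process(target=monitor, args=(safety_event,),
                                daemon=True).start()
        with SafeDevice("deformable_mirror") as dm:
            pass  # experiment steps; an exception here still closes dm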
Computer program determines chemical equilibria in complex systems
NASA Technical Reports Server (NTRS)
Gordon, S.; Zeleznik, F. J.
1966-01-01
Computer program numerically solves nonlinear algebraic equations for chemical equilibrium based on iteration equations independent of choice of components. This program calculates theoretical performance for frozen and equilibrium composition during expansion and Chapman-Jouguet flame properties, studies combustion, and designs hardware.
NASA Technical Reports Server (NTRS)
Maxwell, Scott A.; Cooper, Brian; Hartman, Frank; Wright, John; Yen, Jeng; Leger, Chris
2005-01-01
A Mars rover is a complex system, and driving one is a complex endeavor. Rover drivers must be intimately familiar with the hardware and software of the mobility system and of the robotic arm. They must rapidly assess threats in the terrain, then creatively combine their knowledge of the vehicle and its environment to achieve each day's science and engineering objectives.
Space shuttle low cost/risk avionics study
NASA Technical Reports Server (NTRS)
1971-01-01
All work breakdown structure elements containing any avionics-related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and the labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipment to MIL quality standards, basing redundancy on cost-effectiveness analysis, minimizing software complexity and reducing cross strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.
Practice brief. Securing wireless technology for healthcare.
Retterer, John; Casto, Brian W
2004-05-01
Wireless networking can be a very complex science, requiring an understanding of physics and the electromagnetic spectrum. While the radio theory behind the technology can be challenging, a basic understanding of wireless networking can be sufficient for small-scale deployment. Numerous security mechanisms are available to wireless technologies, making it practical, scalable, and affordable for healthcare organizations. The decision on the selected security model should take into account the needs for additional server hardware and administrative costs. Where wide area network connections exist between cooperative organizations, deployment of a distributed security model can be considered to reduce administrative overhead. The wireless approach chosen should be dynamic and concentrate on the organization's specific environmental needs. Aspects of organizational mission, operations, service level, and budget allotment as well as an organization's risk tolerance are all part of the balance in the decision to deploy wireless technology.
Review of battery powered embedded systems design for mission-critical low-power applications
NASA Astrophysics Data System (ADS)
Malewski, Matthew; Cowell, David M. J.; Freear, Steven
2018-06-01
The applications and uses of embedded systems are increasingly pervasive. Mission- and safety-critical systems relying on embedded systems pose specific challenges. Embedded systems design is a multi-disciplinary domain, involving both hardware and software. Systems need to be designed in a holistic manner so that they are able to provide the desired reliability and minimise unnecessary complexity. The large problem landscape means that there is no one solution that fits all applications of embedded systems. With the primary focus of these mission- and safety-critical systems being functionality and reliability, there can be conflicts with business needs, and this can introduce pressure to reduce cost at the expense of reliability and functionality. This paper examines the challenges faced by battery-powered systems, then explores more general problems and several real-world embedded systems.
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
High-performance, multi-faceted research sonar electronics
NASA Astrophysics Data System (ADS)
Moseley, Julian W.
This thesis describes the design, implementation and testing of a research sonar system capable of performing complex applications such as coherent Doppler measurement and synthetic aperture imaging. Specifically, this thesis presents an approach to improve the precision of the timing control and increase the signal-to-noise ratio of an existing research sonar. A dedicated timing control subsystem, and hardware drivers are designed to improve the efficiency of the old sonar's timing operations. A low noise preamplifier is designed to reduce the noise component in the received signal arriving at the input of the system's data acquisition board. Noise analysis, frequency response, and timing simulation data are generated in order to predict the functionality and performance improvements expected when the subsystems are implemented. Experimental data, gathered using these subsystems, are presented, and are shown to closely match the simulation results, thus verifying performance.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis
2016-05-23
LLMapReduce works with several schedulers such as SLURM, Grid Engine and LSF. Keywords: LLMapReduce; map-reduce; performance; scheduler; Grid Engine; SLURM; LSF. I. INTRODUCTION. Large scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise, and big data [1] ... interconnects [6]), high performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware, ...
Motion compensation in digital subtraction angiography using graphics hardware.
Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim
2006-07-01
An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of the technique, so automated, fast, and accurate motion compensation is required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. We then implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision could already be sufficient.
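A software analogue of the block-matching kernel is easy to state. The sketch below pairs an exhaustive integer-precision search with a histogram-energy similarity measure, one plausible reading of the histogram-based measure the abstract mentions; the GPU mapping, the optimizing search strategy, and sub-pixel refinement are omitted, and all names are illustrative.

    import numpy as np

    def histogram_energy(diff, bins=64):
        # A sharply peaked grey-level histogram of the difference block
        # indicates good local alignment; energy measures peakedness.
        hist, _ = np.histogram(diff, bins=bins, range=(-256, 256))
        p = hist / diff.size
        return float(np.sum(p * p))

    def best_shift(mask_block, live, x, y, radius=4):
        # Exhaustive integer-precision search around (x, y); assumes the
        # whole search window lies inside the live image.
        h, w = mask_block.shape
        best, best_score = (0, 0), -1.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = live[y + dy:y + dy + h, x + dx:x + dx + w]
                score = histogram_energy(mask_block.astype(int)
                                         - cand.astype(int))
                if score > best_score:
                    best_score, best = score, (dx, dy)
        return best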
Reducing Stressful Aspects of Information Technology in Public Services.
ERIC Educational Resources Information Center
Quinn, Brian
1995-01-01
Identifies sources of technological stress for public services librarians and patrons and proposes ways to reduce stress, including communicating with staff, implementing a system gradually, providing adequate training, creating proper documentation, planning, considering ergonomics in hardware and software selection, selecting a good interface,…
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.; Roche, Rigoberto
2017-01-01
The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS-compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx(Trademark) ML605 Virtex(Trademark)-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an Embedded PC (Axiomtek(Trademark) eBox 620-110-FL) running the Ubuntu 12.04 operating system. The result of this development is a very low cost STRS-compliant platform that can be used for waveform development for multiple applications. The purpose of this document is to describe how to develop a new waveform using the RIACS platform, the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) FPGA wrapper code, and the STRS implementation on the Axiomtek processor.
Khiarak, Mehdi Noormohammadi; Martianova, Ekaterina; Bories, Cyril; Martel, Sylvain; Proulx, Christophe D; De Koninck, Yves; Gosselin, Benoit
2018-06-01
Fluorescence biophotometry measurements require wide dynamic range (DR) and high-sensitivity laboratory apparatus. Indeed, it is often very challenging to accurately resolve the small fluorescence variations in the presence of noise and high-background tissue autofluorescence. There is a great need for smaller detectors combining high linearity, high sensitivity, and high energy efficiency. This paper presents a new biophotometry sensor merging two individual building blocks, namely a low-noise sensing front-end and a continuous-time sigma-delta modulator (CTSDM), into a single module for enabling high-sensitivity and high-energy-efficiency photo-sensing. In particular, a differential CMOS photodetector associated with a differential capacitive transimpedance amplifier-based sensing front-end is merged with an incremental 1-bit CTSDM to achieve a large DR, low hardware complexity, and high energy efficiency. The sensor leverages a hardware-sharing strategy to simplify the implementation and reduce power consumption. The proposed CMOS biosensor is integrated within a miniature wireless head-mountable prototype for enabling biophotometry with a single implantable fiber in the brain of live mice. The proposed biophotometry sensor is implemented in a 0.18-micrometer CMOS technology, consuming from a 1.8-V supply voltage, while achieving a peak dynamic range of over a 50- input bandwidth, a sensitivity of 24 mV/nW, and a minimum detectable current of 2.46- at a 20- sampling rate.
NASA Astrophysics Data System (ADS)
Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.
2017-09-01
Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.
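The gradient-based update that such registration algorithms iterate can be sketched for the simplest case of a pure translation. The fragment below is a generic Lucas-Kanade-style loop, not the MP-I2A or KWA themselves; interpolation is replaced by an integer shift for brevity, and the sign conventions are simplified.

    import numpy as np

    def estimate_shift(ref, mov, iterations=10):
        # Precompute template gradients and the 2x2 normal matrix once.
        gy, gx = np.gradient(ref.astype(float))
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        d = np.zeros(2)  # accumulated (dx, dy) estimate
        for _ in range(iterations):
            # Integer shift stands in for interpolation-based warping.
            w = np.roll(mov, (int(round(d[1])), int(round(d[0]))),
                        axis=(0, 1))
            err = ref.astype(float) - w
            b = np.array([np.sum(gx * err), np.sum(gy * err)])
            d += np.linalg.solve(A, b)  # one Gauss-Newton update
        return d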
Lunar Applications in Reconfigurable Computing
NASA Technical Reports Server (NTRS)
Somervill, Kevin
2008-01-01
NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed to provide for both configuring the resources and transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.
Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan Mohammad; Novack, Steven
2015-01-01
Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.
Evaluation of control parameters for Spray-In-Air (SIA) aqueous cleaning for shuttle RSRM hardware
NASA Technical Reports Server (NTRS)
Davis, S. J.; Deweese, C. D.
1995-01-01
HD-2 grease is deliberately applied to Shuttle Redesigned Solid Rocket Motor (RSRM) D6AC steel hardware parts as a temporary protective coating for storage and shipping. This HD-2 grease is the most common form of surface contamination on RSRM hardware and must be removed prior to subsequent surface treatment. Failure to achieve an acceptable level of cleanliness (HD-2 calcium grease removal) is a common cause of defect incidence. Common failures from ineffective cleaning include poor adhesion of surface coatings, reduced bond performance of structural adhesives, and failure to pass cleanliness inspection standards. The RSRM hardware is currently cleaned and refurbished using methyl chloroform (1,1,1-trichloroethane). This chlorinated solvent is mandated for elimination due to its ozone depleting characteristics. This report describes an experimental study of an aqueous cleaning system (which uses Brulin 815 GD) as a replacement for methyl chloroform. Evaluation of process control parameters for this cleaner are discussed as well as cleaning mechanisms for a spray-in-air process.
Hardware Acceleration of Adaptive Neural Algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
Inexact hardware for modelling weather & climate
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, Tim
2014-05-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculations do not substantially affect the quality of the model simulations, provided they are restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
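Low precision arithmetic of this kind is often emulated in software by truncating the mantissa of double-precision values. A minimal sketch of one common emulation follows; the bit width and the usage example are illustrative and unrelated to the specific emulator used in the study.

    import numpy as np

    def truncate_mantissa(x, bits):
        # Keep only `bits` mantissa bits of each float64 value.
        m, e = np.frexp(x)           # x = m * 2**e with m in [0.5, 1)
        scale = 2.0 ** bits
        return np.ldexp(np.round(m * scale) / scale, e)

    # Example: a dot product carried out at roughly 10-bit precision.
    a, b = np.random.rand(1000), np.random.rand(1000)
    low = np.sum(truncate_mantissa(a, 10) * truncate_mantissa(b, 10))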
Hardware test program for evaluation of baseline range/range rate sensor concept
NASA Technical Reports Server (NTRS)
1985-01-01
The Hardware Test Program for evaluation of the baseline range/range rate sensor concept was initiated 11 September 1984. This ninth report covers the period 12 May through 11 June 1985. A contract amendment adding a second phase has extended the Hardware Test Program through 10 December 1985. The objective of the added program phase is to establish range and range rate measurement accuracy and radar signature characteristics for a typical spacecraft target. Phase I of the Hardware Test Program was designed to reduce the risks associated with the Range/Range Rate (R/R) Sensor baseline design approach. These risks are associated with achieving the sensor performance required for the two modes of operation: the Interrupted CW (ICW) mode for initial acquisition and tracking to close-in ranges, and the CW mode, providing coverage during the final docking maneuver. The risks associated with these modes of operation have to do with the realization of adequate sensitivity to operate to their individual maximum ranges.
Automated culture system experiments hardware: developing test results and design solutions.
Freddi, M; Covini, M; Tenconi, C; Ricci, C; Caprioli, M; Cotronei, V
2002-07-01
The experiment proposed by Prof. Ricci of the University of Milan is funded by ASI, with Laben as industrial prime contractor. ACS-EH (Automated Culture System-Experiment Hardware) will support the multigenerational experiment on weightlessness with rotifers and nematodes within four Experiment Containers (ECs) located inside the European Modular Cultivation System (EMCS) facility. Phase B is currently in progress and a concept design solution has been defined. The most challenging aspects of the design of such hardware are, from the biological point of view, the provision of an environment which permits the animals' survival and keeps desiccated generations separated and, from the technical point of view, the miniaturisation of the hardware itself due to the reduced EC volume (160 mm x 60 mm x 60 mm). The miniaturisation will allow a better use of the available EMCS facility resources (e.g. volume, power, etc.) and fulfilment of the experiment requirements. ACS-EH will be ready to fly in the year 2005 on board the ISS.
NASA Astrophysics Data System (ADS)
Laracuente, Nicholas; Grossman, Carl
2013-03-01
We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general-purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications, because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
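The multi-tau scheme itself is short enough to sketch in software. The fragment below shows the core idea, correlating at a fixed number of lags per level and then coarsening the signal so the lag spacing doubles; the points-per-bin value, level count, and normalization are illustrative simplifications of the scheme named above.

    import numpy as np

    def multi_tau_autocorr(counts, points_per_bin=8, levels=6):
        # Each level correlates at `points_per_bin` lags, then coarsens
        # the signal by pairwise averaging so the lag spacing doubles,
        # giving a quasi-logarithmic lag axis. The input must be long
        # enough to survive `levels` halvings.
        x = np.asarray(counts, dtype=float)
        taus, g = [], []
        dt = 1
        for _ in range(levels):
            for k in range(1, points_per_bin + 1):
                taus.append(k * dt)
                g.append(np.mean(x[:-k] * x[k:]) / np.mean(x) ** 2)
            n = len(x) // 2
            x = 0.5 * (x[0:2 * n:2] + x[1:2 * n:2])
            dt *= 2
        return np.array(taus), np.array(g)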
Testing Microshutter Arrays Using Commercial FPGA Hardware
NASA Technical Reports Server (NTRS)
Rapchun, David
2008-01-01
NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to do multi-object spectroscopy, a key part of the mission. Each array consists of 62,414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to providing these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Due to JWST's rapid schedule on the development of the micro-shutters, rapid changes to the FPGA code were required as new approaches were discovered to optimize array performance. Such rapid changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that could be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted and brought success. The software/hardware allowed rapid changes to the FPGA code and timely collection of new micro-shutter array performance data, conserving numerous labor hours and project funds.
Hardware system of X-wave generator with simple driving pulses
NASA Astrophysics Data System (ADS)
Li, Xu; Li, Yaqin; Xiao, Feng; Ding, Mingyue; Yuchi, Ming
2013-03-01
Limited diffraction beams such as the X-wave have the property of a larger depth of field, and thus the potential to generate ultra-high frame rate ultrasound images. However, in practice, the real-time generation of an X-wave ultrasonic field requires a complex and high-cost system, especially the precise and specific voltage-time distribution part for the excitation of each distinct array element. In order to simplify the hardware realization of the X-wave, and based on previous work, the X-wave excitation signals were decomposed and expressed as the superposition of a group of simple driving pulses, such as rectangular and triangular waves. The hardware system for the X-wave generator was also designed. The generator consists of a computer for communication with the circuit, a universal serial bus (USB) based micro-controller unit (MCU) for data transmission, a field programmable gate array (FPGA) based direct digital synthesizer (DDS), a 12-bit digital-to-analog (D/A) converter, and a two-stage amplifier. The hardware simulation results show that the designed system can generate waveforms at different radii approximating the theoretical X-wave excitations, with a maximum error of 0.49% caused by the quantization of the amplitude data.
Automating quantum experiment control
NASA Astrophysics Data System (ADS)
Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.
2017-03-01
The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
NASA Astrophysics Data System (ADS)
Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan
2018-01-01
3D high efficiency video coding (3D-HEVC) has introduced tools to obtain higher efficiency in 3-D video coding, most of them related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding of the edge regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both the encoder and decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements, and a hardware design targeting the most efficient among these algorithms, are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing up to 78.8% of the wedgelet memory without degrading the encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces almost 75% of the power dissipation when compared to the standard approach.
Iterative Demodulation and Decoding of Non-Square QAM
NASA Technical Reports Server (NTRS)
Li, Lifang; Divsalar, Dariush; Dolinar, Samuel
2004-01-01
It has been shown that a non-square (NS) 2(sup 2n+1)-ary (where n is a positive integer) quadrature amplitude modulation [(NS)2(sup 2n+1)-QAM] has inherent memory that can be exploited to obtain coding gains. Moreover, it should not be necessary to build new hardware to realize these gains. The present scheme is a product of theoretical calculations directed toward reducing the computational complexity of decoding coded 2(sup 2n+1)-QAM. In the general case of 2(sup 2n+1)-QAM, the signal constellation is not square and it is impossible to have independent in-phase (I) and quadrature-phase (Q) mapping and demapping. However, independent I and Q mapping and demapping are desirable for reducing the complexity of computing the log likelihood ratio (LLR) between a bit and a received symbol (such computations are essential operations in iterative decoding). This is because in modulation schemes that include independent I and Q mapping and demapping, each bit of a signal point is involved in only one-dimensional mapping and demapping. As a result, the computation of the LLR is equivalent to that of a one-dimensional pulse amplitude modulation (PAM) system. Therefore, it is desirable to find a signal constellation that enables independent I and Q mapping and demapping for 2(sup 2n+1)-QAM.
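For reference, the bit LLR computation discussed above has the standard form (notation ours, not the paper's): with received symbol r, noise variance sigma^2, and S_b^i the set of constellation points whose i-th bit equals b,

    \mathrm{LLR}(b_i \mid r) = \log \frac{\sum_{s \in S_1^i} \exp\left(-|r-s|^2 / 2\sigma^2\right)}{\sum_{s \in S_0^i} \exp\left(-|r-s|^2 / 2\sigma^2\right)}

When the I and Q mappings are independent, each two-dimensional sum over constellation points factors into one-dimensional terms, so the LLR reduces to a one-dimensional PAM computation; this is exactly the complexity reduction that motivates the search for a constellation with independent I and Q mapping.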
Deep Space Gateway - Enabling Missions to Mars
NASA Technical Reports Server (NTRS)
Rucker, Michelle; Connolly, John
2017-01-01
There are many opportunities for commonality between Lunar-vicinity and Mars mission hardware and operations. The best approach is to identify Mars mission risks that can be bought down with testing in the Lunar vicinity, then explore hardware and operational concepts that work for both missions with minimal compromise. The Deep Space Transport will validate the systems and capabilities required to send humans to Mars orbit and return them to Earth. The Deep Space Gateway provides a convenient assembly, checkout, and refurbishment location to enable Mars missions. The current deep space transport concept is to fly missions of increasing complexity: a shakedown cruise, a Mars orbital mission, and a Mars surface mission; the Mars surface mission would require additional elements.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.
Precision Clean Hardware: Maintenance of Fluid Systems Cleanliness
NASA Technical Reports Server (NTRS)
Sharp, Sheila; Pedley, Mike; Bond, Tim; Quaglino, Joseph; Lorenz, Mary Jo; Bentz, Michael; Banta, Richard; Tolliver, Nancy; Golden, John; Levesque, Ray
2003-01-01
The ISS fluid systems are so complex that fluid system cleanliness cannot be verified at the assembly level. A "build clean / maintain clean" approach was therefore used by all major fluid systems: verify cleanliness at the detail and subassembly level, and maintain cleanliness during assembly.
Payload Crew Training Complex (PCTC) utilization and training plan
NASA Technical Reports Server (NTRS)
Self, M. R.
1980-01-01
The physical facilities that comprise the payload crew training complex (PCTC) are described including the host simulator; experiment simulators; Spacelab aft flight deck, experiment pallet, and experiment rack mockups; the simulation director's console; payload operations control center; classrooms; and supporting soft- and hardware. The parameters of a training philosophy for payload crew training at the PCTC are established. Finally the development of the training plan is addressed including discussions of preassessment, and evaluation options.
Next-generation digital camera integration and software development issues
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Peters, Ken; Hecht, Richard
1998-04-01
This paper investigates the complexities associated with the development of next generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions in the market, including the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating systems. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.
A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece
2017-05-12
Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
Reconfigurable, Intelligently-Adaptive, Communication System, an SDR Platform
NASA Technical Reports Server (NTRS)
Roche, Rigoberto
2016-01-01
The Space Telecommunications Radio System (STRS) provides a common, consistent framework to abstract the application software from the radio platform hardware. STRS aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. The Glenn Research Center (GRC) team made a software-defined radio (SDR) platform STRS compliant by adding an STRS operating environment and a field programmable gate array (FPGA) wrapper, capable of implementing each of the platform's interfaces, as well as a test waveform to exercise those interfaces. This effort serves to provide a framework for waveform development on an STRS-compliant platform to support future space communication systems for advanced exploration missions. Validated STRS-compliant applications provide tested code with extensive documentation to potentially reduce risk, cost and effort in the development of space-deployable SDRs. This paper discusses the advantages of STRS, the integration of STRS onto the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) SDR platform, the sample waveform, and the wrapper development efforts. The paper emphasizes the infusion of the STRS architecture onto the RIACS platform for potential use in next generation SDRs for advanced exploration missions.
NASA Astrophysics Data System (ADS)
Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.
2013-10-01
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
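The separated update scheme is simple to express in outline. The sketch below, with toy placeholder update functions rather than the paper's operators, shows the structure: communication-free Newtonian steps every iteration, with the communication-heavy global thermostat and barostat updates applied only at long, independent intervals.

    import math

    def newtonian_update(state, dt):
        # Communication-free particle update (a toy 1-D drift stands in
        # for a velocity-Verlet step).
        state["x"] = [x + v * dt for x, v in zip(state["x"], state["v"])]

    def thermostat_update(state, target_t=1.0):
        # Global velocity rescale toward a target temperature -- the kind
        # of step that needs all-processor communication in parallel runs.
        ke = sum(v * v for v in state["v"]) / len(state["v"])
        lam = math.sqrt(target_t / ke) if ke > 0 else 1.0
        state["v"] = [lam * v for v in state["v"]]

    def barostat_update(state, scale=1.0):
        # Global box-volume update (placeholder).
        state["box"] *= scale

    def run_md(state, n_steps, dt, thermo_every=100, baro_every=400):
        # Newtonian motion every step; thermostat and barostat applied
        # only at long intervals, as the separated framework permits.
        for step in range(1, n_steps + 1):
            newtonian_update(state, dt)
            if step % thermo_every == 0:
                thermostat_update(state)
            if step % baro_every == 0:
                barostat_update(state)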
COALA-System for Visual Representation of Cryptography Algorithms
ERIC Educational Resources Information Center
Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran
2014-01-01
Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…
Instrumentino: An Open-Source Software for Scientific Instruments.
Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C
2015-01-01
Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
NASA Astrophysics Data System (ADS)
Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong
2010-12-01
Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost, high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of normalized least mean squares (NLMS) with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
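A minimal software model of an amplitude-indexed LUT predistorter with an NLMS-style update follows; the table size, step size, and normalization are illustrative choices rather than the optimizations proposed in the paper, and the input is assumed normalized so that |x| < 1. In an indirect-learning loop, adapt() would be driven by a feedback sample of the amplifier output.

    import numpy as np

    class LutPredistorter:
        # Amplitude-indexed LUT of complex gain corrections.
        def __init__(self, size=128, mu=0.5):
            self.lut = np.ones(size, dtype=complex)  # start at unity gain
            self.size, self.mu = size, mu

        def _index(self, x):
            # Amplitude indexing; assumes |x| < 1.
            return min(int(abs(x) * self.size), self.size - 1)

        def predistort(self, x):
            return self.lut[self._index(x)] * x

        def adapt(self, x, pa_out, desired):
            # NLMS-style update of the active entry from the output error.
            i = self._index(x)
            err = desired - pa_out
            self.lut[i] += self.mu * err * np.conj(x) / (abs(x) ** 2 + 1e-12)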
NASA Technical Reports Server (NTRS)
Boulanger, Richard; Overland, David
2004-01-01
Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns, with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware-to-software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.
Space and Time Partitioning with Hardware Support for Space Applications
NASA Astrophysics Data System (ADS)
Pinto, S.; Tavares, A.; Montenegro, S.
2016-08-01
Complex and critical systems like airplanes and spacecraft implement a rapidly growing number of functions. Typically, such systems were implemented with fully federated architectures, but the number and complexity of the functions desired of today's systems led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach for consolidation, combining several applications on one single generic computing resource. The current approach goes towards the higher integration provided by the space and time partitioning (STP) of system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace demands: performance, flexibility, safety, and security, while simultaneously containing Size, Weight, Power and Cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.
NASA Astrophysics Data System (ADS)
Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo
2015-08-01
In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so the SBNUC algorithm has become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Meanwhile, due to their complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing defects. The hardware implementation of the algorithm, based only on an FPGA, has two advantages: (1) low resource consumption, and (2) a small hardware delay of less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both the stripe non-uniformity and the ripple non-uniformity.
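The temporal high-pass principle underlying such algorithms can be sketched in a few lines: each pixel's slowly drifting offset is tracked by a recursive low-pass filter and subtracted from the incoming frame. The fragment below shows that generic principle only; the grayscale-mapping refinement that distinguishes THP and GM is not reproduced, and the time constant is arbitrary.

    import numpy as np

    def thp_nuc(frames, time_constant=64):
        # frames: iterable of 2-D numpy arrays from the detector.
        # Track each pixel's slowly varying offset with a recursive
        # low-pass filter, subtract it, and restore the global mean.
        a = 1.0 / time_constant
        f = frames[0].astype(float)    # per-pixel low-pass state
        corrected = []
        for x in frames:
            x = x.astype(float)
            f = (1.0 - a) * f + a * x  # low-pass of scene plus offset
            corrected.append(x - f + f.mean())
        return corrected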
Using Clouds for MapReduce Measurement Assignments
ERIC Educational Resources Information Center
Rabkin, Ariel; Reiss, Charles; Katz, Randy; Patterson, David
2013-01-01
We describe our experiences teaching MapReduce in a large undergraduate lecture course using public cloud services and the standard Hadoop API. Using the standard API, students directly experienced the quality of industrial big-data tools. Using the cloud, every student could carry out scalability benchmarking assignments on realistic hardware,…
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as large packages containing a variety of methods and tools. This results in software that is expensive to acquire and difficult to use, the difficulty being caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists; however, such tools are not necessarily required by the majority of up-and-coming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of this software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of the software, our approach focuses on one functionality at a time, in contrast with most available software and tools, which aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and allows us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family, a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space, and it is therefore reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage serves the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results; this enables the adaptation of the filter parameters to the current scene conditions. The suggested hardware structure is implemented at the level of field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. Experimental results for the hardware-accelerated preprocessing stage are given as an efficiency estimation of the presented hardware structure, using a simple motion-detection algorithm based on a binary function.
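A software analogue of the threshold-based HSV filter is straightforward. The sketch below uses an axis-aligned threshold box as a stand-in for the polyhedron described above; the bounds are illustrative placeholders, not calibrated skin-model values.

    import numpy as np
    import colorsys

    def skin_mask(rgb, h_lo=0.0, h_hi=0.14, s_lo=0.2, s_hi=0.8,
                  v_lo=0.3, v_hi=1.0):
        # rgb: H x W x 3 array of floats in [0, 1]. Convert each pixel
        # to HSV and keep those inside the threshold box.
        hsv = np.array([colorsys.rgb_to_hsv(*px)
                        for px in rgb.reshape(-1, 3)])
        h, s, v = hsv[:, 0], hsv[:, 1], hsv[:, 2]
        mask = ((h >= h_lo) & (h <= h_hi) &
                (s >= s_lo) & (s <= s_hi) &
                (v >= v_lo) & (v <= v_hi))
        return mask.reshape(rgb.shape[:2])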
A Low-Complexity Circuit for On-Sensor Concurrent A/D Conversion and Compression
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
A low-complexity circuit for on-sensor compression is presented. The proposed circuit achieves complexity savings by combining a single-slope analog-to-digital converter with a Golomb-Rice entropy encoder and by implementing a low-complexity adaptation rule. The adaptation rule monitors the output codewords and minimizes their length by incrementing or decrementing the value of the Golomb-Rice coding parameter k. Its hardware complexity is one order of magnitude lower than that of existing adaptive algorithms. The compression circuit has been fabricated in a 0.35 micrometer CMOS technology and occupies an area of 0.0918 mm2. Test measurements confirm the validity of the design.
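To make the adaptation rule concrete, the following is a minimal C sketch of Golomb-Rice codeword-length tracking with increment/decrement adaptation of the parameter k. The specific update conditions and the cap on k are illustrative assumptions; the abstract does not give the fabricated circuit's exact thresholds.

```c
#include <stdint.h>

/* Sketch of adaptive Golomb-Rice coding of non-negative residuals.
 * A Rice codeword is a unary quotient (value >> k), a terminating bit,
 * and k remainder bits. */
static unsigned rice_codeword_length(uint32_t value, unsigned k)
{
    return (value >> k) + 1 + k;
}

/* Encode one sample and adapt k. The update rule below is an assumption
 * modeled on the abstract: a long unary part means residuals are larger
 * than k assumes, so increment k; a zero quotient means remainder bits
 * are being wasted, so decrement k. */
unsigned encode_sample(uint32_t value, unsigned *k)
{
    unsigned len = rice_codeword_length(value, *k);

    if ((value >> *k) > 1 && *k < 15)
        (*k)++;
    else if ((value >> *k) == 0 && *k > 0)
        (*k)--;

    return len; /* bits emitted for this sample */
}
```

Because the rule looks only at the current codeword, it needs no multipliers or history buffers, which is consistent with the order-of-magnitude hardware saving claimed above.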
Robust Fixed-Structure Controller Synthesis
NASA Technical Reports Server (NTRS)
Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)
2000-01-01
The ability to develop an integrated control system design methodology for robust, high-performance controllers satisfying multiple design criteria and real-world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal-complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal-complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor. PMID:25237900
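As an illustration of the kind of decision the modified joint coder makes, here is a minimal C sketch that chooses joint coding only when the residuals of two channels are strongly correlated. The threshold value and the use of plain lag-zero normalized cross-correlation are assumptions; the paper's actual decision also weighs the resulting compression ratio.

```c
#include <math.h>
#include <stddef.h>

/* Normalized cross-correlation of two residual blocks (lag zero). */
double normalized_cross_correlation(const double *a, const double *b, size_t n)
{
    double sab = 0.0, saa = 0.0, sbb = 0.0;
    for (size_t i = 0; i < n; i++) {
        sab += a[i] * b[i];
        saa += a[i] * a[i];
        sbb += b[i] * b[i];
    }
    return sab / (sqrt(saa * sbb) + 1e-12); /* guard against zero energy */
}

/* Joint coding pays off only when the channels share structure. */
int use_joint_coding(const double *res_a, const double *res_b, size_t n)
{
    const double THRESHOLD = 0.6; /* assumed tuning value, not from the paper */
    return fabs(normalized_cross_correlation(res_a, res_b, n)) > THRESHOLD;
}
```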
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware, and exploits parallelism for a given application through reconfigurable and flexible hardware units; the degree of parallelism can be tuned for the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of the text collection in a sparse matrix structure is gaining attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target a Virtex-II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
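The query-processing kernel described above is an ordinary CSR sparse matrix-vector multiply; a minimal C version is sketched below (the data layout is standard CSR, the structure names are hypothetical). The outer loop over rows is the part a hardware implementation parallelizes.

```c
#include <stddef.h>

/* Term-document index stored in Compressed Sparse Row form. */
typedef struct {
    size_t rows;
    const size_t *row_ptr; /* rows + 1 entries */
    const size_t *col_idx; /* one entry per non-zero */
    const double *values;  /* one entry per non-zero */
} csr_matrix;

/* y = A * x : each row's dot product with the query vector x gives the
 * retrieval score of that document. Rows are independent, so hardware
 * can process many of them in parallel. */
void csr_spmv(const csr_matrix *A, const double *x, double *y)
{
    for (size_t i = 0; i < A->rows; i++) {
        double acc = 0.0;
        for (size_t j = A->row_ptr[i]; j < A->row_ptr[i + 1]; j++)
            acc += A->values[j] * x[A->col_idx[j]];
        y[i] = acc;
    }
}
```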
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10^-2 for logic) that cause computational bit error rates as high as 50%.
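A minimal C sketch of the SRAM error model implied above: each stored bit is flipped with probability p before being read back by the classifier. The word width and use of rand() are illustrative only.

```c
#include <stdint.h>
#include <stdlib.h>

/* Flip each bit of a stored word with probability p, emulating SRAM
 * bit-cell faults; the data-driven classifier is then expected to
 * absorb the resulting corruption. */
uint16_t inject_sram_faults(uint16_t word, double p)
{
    for (int bit = 0; bit < 16; bit++)
        if ((double)rand() / RAND_MAX < p)
            word ^= (uint16_t)(1u << bit); /* flip this bit */
    return word;
}
```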
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function; this decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than serial, architecture, targeting a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
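To illustrate the weight-computation simplification named above, here is a C sketch of a piecewise-linear stand-in for the Gaussian likelihood exp(-e^2 / (2*sigma^2)). The breakpoints below are illustrative values matching the exponential at x = 0, 1, 2, 3; a hardware design would hold them in a small table.

```c
/* Piecewise-linear approximation of exp(-x^2 / 2) on the normalized
 * error x = |err| / sigma, avoiding an exponential unit in hardware. */
double piecewise_linear_weight(double err, double sigma)
{
    double x = (err < 0 ? -err : err) / sigma;
    if (x < 1.0) return 1.00 - 0.39 * x;         /* 1.00 -> 0.61 */
    if (x < 2.0) return 0.61 - 0.47 * (x - 1.0); /* 0.61 -> 0.14 */
    if (x < 3.0) return 0.14 - 0.13 * (x - 2.0); /* 0.14 -> 0.01 */
    return 0.01;                                 /* tail clamp   */
}
```

Since particle weights are renormalized before resampling, only the shape of this function matters, which is why a coarse approximation need not degrade the results.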
NASA Astrophysics Data System (ADS)
Berdychowski, Piotr P.; Zabolotny, Wojciech M.
2010-09-01
The main goal of the C to VHDL compiler project is to make the FPGA platform more accessible to scientists and software developers. The FPGA platform offers the unique ability to configure the hardware to implement virtually any dedicated architecture, and modern devices provide a sufficient number of hardware resources to implement parallel execution platforms with complex processing units. All this makes the FPGA platform very attractive for those seeking an efficient, heterogeneous computing environment. The current industry standard in the development of digital systems on the FPGA platform is based on HDLs. Although very effective and expressive in the hands of hardware development specialists, these languages require specific knowledge and experience that is out of reach for most scientists and software programmers. The C to VHDL compiler project attempts to remedy this by creating an application that derives an initial VHDL description of a digital system (for further compilation and synthesis) from a purely algorithmic description in the C programming language. The idea itself is not new; the C to VHDL compiler combines the best approaches from existing solutions developed over many previous years with the introduction of some new, unique improvements.
Hardware-software face detection system based on multi-block local binary patterns
NASA Astrophysics Data System (ADS)
Acasandrei, Laurentiu; Barriga, Angel
2015-03-01
Face detection is an important aspect of biometrics, video surveillance, and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory, and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses deterministic bandwidth, has a low area profile, and consumes ~95 mW on a Virtex5 XC5VLX50T. The resulting implementation achieves an acceleration gain between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
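For reference, the kernel operation such an IP accelerates can be sketched in C as follows: average the pixels of nine equal blocks and threshold the eight neighbor blocks against the center to form an 8-bit code. The row-major image layout and block geometry here are assumptions, not the IP's actual interface.

```c
#include <stdint.h>

/* Mean intensity of a bw-by-bh block at (x, y) in a grayscale image. */
static uint32_t block_mean(const uint8_t *img, int stride,
                           int x, int y, int bw, int bh)
{
    uint32_t sum = 0;
    for (int r = 0; r < bh; r++)
        for (int c = 0; c < bw; c++)
            sum += img[(y + r) * stride + (x + c)];
    return sum / (uint32_t)(bw * bh);
}

/* Multi-block LBP: compare the 8 neighbor-block means with the center
 * block mean; each comparison contributes one bit of the code. */
uint8_t mb_lbp(const uint8_t *img, int stride, int x, int y, int bw, int bh)
{
    /* Neighbor-block offsets (in block units), clockwise from top-left. */
    static const int off[8][2] = { {0,0}, {1,0}, {2,0}, {2,1},
                                   {2,2}, {1,2}, {0,2}, {0,1} };
    uint32_t center = block_mean(img, stride, x + bw, y + bh, bw, bh);
    uint8_t code = 0;
    for (int i = 0; i < 8; i++) {
        uint32_t m = block_mean(img, stride,
                                x + off[i][0] * bw, y + off[i][1] * bh, bw, bh);
        if (m >= center)
            code |= (uint8_t)(1u << i);
    }
    return code;
}
```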
Process of videotape making: presentation design, software, and hardware
NASA Astrophysics Data System (ADS)
Dickinson, Robert R.; Brady, Dan R.; Bennison, Tim; Burns, Thomas; Pines, Sheldon
1991-06-01
The use of technical videotape presentations for communicating abstractions of complex data is now becoming commonplace. While the use of videotapes in the day-to-day work of scientists and engineers is still in its infancy, their use at applications-oriented conferences is now growing rapidly. Despite these advancements, there is still very little written down about the process of making technical videotapes. For printed media, different presentation styles are well known for categories such as results reports, executive summary reports, and technical papers and articles. In this paper, the authors present ideas on the topic of technical videotape presentation design in a format that is worth referring to. They have started to document the ways in which the experience of media specialists, teaching professionals, and character animators can be applied to scientific animation. Software and hardware considerations are also discussed. For this portion, distinctions are drawn between the software and hardware required for computer animation (frame-at-a-time) productions and for live recorded interaction with a computer graphics display.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-21
Recent advancements in technology scaling have shown a trend towards greater integration, with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high-performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers a large and powerful collection of configuration options, and follows object-oriented design and functional programming to make functionality extension as easy as possible.
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-03-28
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multicore processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever-larger thin-film-transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in timing characteristics of hardware commonly used in psychological research.
Streamlining Payload Integration
NASA Technical Reports Server (NTRS)
Lufkin, Susan N.
2010-01-01
Payload integration onto space transport vehicles and the International Space Station (ISS) is a complex process. Yet cargo transport is the sole reason for any space mission, be it for ferrying humans, science, or hardware. As the largest such effort in history, the ISS offers a wide variety of payload experience. However, for any payload to reach the Space Station under the current process, Payload Developers face a list of daunting tasks that go well beyond just designing the payload to the constraints of the transport vehicle and its stowage topology. Payload customers are required to prove their payload's functionality, structural integrity, and safe integration, including under less-than-nominal situations. They must also plan for or provide training, procedures, hardware labeling, ground support, and communications. In addition, they must deal with negotiating shared consumables, integrating software, obtaining video, and coordinating the return of data and hardware. All the while, they must meet export laws, launch schedules, budget limits, and the consensus of more than 12 panel and board reviews. Despite the cost and infrastructure overhead, payload proposals have increased. Just in the span from FY08 to FY09, the NASA Payload Space Station Support Office budget rose from $78M to $96M in an attempt to manage the growing manifest, but the potential number of payloads still exceeds available Payload Integration Management manpower. The growth has also increased management difficulties because payloads are more frequently added to a flight schedule late in the flow. The current standard ISS template for payload integration from concept to payload turn-over is 36 months, or 18 months if the payload already has a preliminary design. Customers increasingly require a turn-around of 3 to 6 months to meet market needs. The following paper suggests options for streamlining the current payload integration process in order to meet customer schedule needs and reduce costs for both the integration support teams and the developers, without reducing quality or compromising safety. Issues for the key integration areas of planning, training, verification, and safety are presented in a root-cause analysis study, with plausible solutions provided that involve technology and tools already available to the ISS community. Although based upon the ISS process, the payload integration techniques outlined herein also offer an integration template for any space transport endeavor.
NASA Astrophysics Data System (ADS)
Sutliff, T. J.; Otero, A. M.; Urban, D. L.
2002-01-01
The Physical Sciences Research Program of NASA has chartered a broad suite of peer-reviewed research investigating both fundamental combustion phenomena and applied combustion research topics. Fundamental research provides insights to develop accurate simulations of complex combustion processes and allows developers to improve the efficiency of combustion devices, to reduce the production of harmful emissions, and to reduce the incidence of accidental uncontrolled combustion (fires, explosions). The applied research benefits humans living and working in space through its fire safety program. The Combustion Science Discipline is implementing a structured flight research program utilizing the International Space Station (ISS) and two of its premier facilities, the Combustion Integrated Rack of the Fluids and Combustion Facility and the Microgravity Science Glovebox, to conduct this space-based research. This paper reviews the current vision of Combustion Science research planned for International Space Station implementation from 2003 through 2012. A variety of research efforts in droplets and sprays, solid-fuel combustion, and gaseous combustion have been independently selected and critiqued through a series of peer-review processes. During this period, while both the ISS carrier and its research facilities are under development, the Combustion Science Discipline has synergistically combined research efforts into sub-topical areas. To conduct this research aboard the ISS in the most cost-effective and resource-efficient manner, the sub-topic research areas are implemented via a multi-user hardware approach. This paper also summarizes the multi-user hardware approach and recaps the progress made in developing these research hardware systems. A balanced program content has been developed to maximize the production of fundamental and applied combustion research results within the current budgetary and ISS operational resource constraints. Decisions on utilizing the Combustion Integrated Rack and the Microgravity Science Glovebox are made based on facility capabilities and research requirements. To maximize research potential, additional research objectives are specified a priori, as desired goals, during the research design phase. These expanded research goals, which are designed to be achievable even with late addition of operational resources, allow additional research of a known, peer-endorsed scope to be conducted at marginal cost. Additional operational resources such as upmass, crew time, data downlink bandwidth, and stowage volume may be presented by the ISS planners late in the research mission planning process. The Combustion Discipline has put plans in place to be prepared to take full advantage of such opportunities.
A Hazardous Gas Detection System for Aerospace and Commercial Applications
NASA Technical Reports Server (NTRS)
Hunter, G. W.; Neudeck, P. G.; Chen, L. - Y.; Makel, D. B.; Liu, C. C.; Wu, Q. H.; Knight, D.
1998-01-01
The detection of explosive conditions in aerospace propulsion applications is important for safety and economic reasons. Microfabricated hydrogen, oxygen, and hydrocarbon sensors as well as the accompanying hardware and software are being developed for a range of aerospace safety applications. The development of these sensors is being done using MEMS (Micro ElectroMechanical Systems) based technology and SiC-based semiconductor technology. The hardware and software allows control and interrogation of each sensor head and reduces accompanying cabling through multiplexing. These systems are being applied on the X-33 and on an upcoming STS-95 Shuttle mission. A number of commercial applications are also being pursued. It is concluded that this MEMS-based technology has significant potential to reduce costs and increase safety in a variety of aerospace applications.
Neely, Alice N.; Sittig, Dean F.
2002-01-01
Computer technology, from the management of individual patient medical records to the tracking of epidemiologic trends, has become an essential part of all aspects of modern medicine. Consequently, computers, including bedside components, point-of-care testing equipment, and handheld computer devices, are increasingly present in patients' rooms. Recent articles have indicated that computer hardware, just like other medical equipment, may act as a reservoir for microorganisms and contribute to the transfer of pathogens to patients. This article presents basic microbiological concepts relative to infection, reviews the present literature concerning possible links between computer contamination and nosocomial colonizations and infections, discusses basic principles for the control of contamination, and provides guidelines for reducing the risk of transfer of microorganisms to susceptible patient populations. PMID:12223502
Bibliographic Utilities and the Use of Microcomputers in Libraries: Current and Projected Practices.
ERIC Educational Resources Information Center
McAninch, Glen
1986-01-01
Bibliographic utilities are marketing specially designed hardware and software that permit libraries to patch together automated cataloging, acquisitions, serials control, reference, online public catalogs, circulation, interlibrary loan, and administrative functions. The increasing complexity of the technology is making it more difficult to…
Campus Financial Systems for the Future.
ERIC Educational Resources Information Center
Jonas, Stephen; And Others
This handbook guides college and university business officers, from small liberal arts colleges to community colleges to research universities, through the complex set of decisions and actions associated with replacing financial management systems. It lists the steps necessary to evaluate an institution's current hardware, network, and software;…
Proceedings of the Symposium on Long-Life Hardware for Space
NASA Technical Reports Server (NTRS)
1970-01-01
A two-volume edition of the symposium papers is described. It is divided into six sections: parts, materials, management, system testing, component design, and system test. The material presented focuses attention on problems created by the increased complexity of technology and long-term mission requirements.
NASA Astrophysics Data System (ADS)
Sanders, Gerald B.; Larson, William E.
2015-05-01
A key aspect of enabling an affordable and sustainable program of human exploration beyond low Earth orbit is the ability to locate, extract, and harness the resources found in space to reduce what needs to be launched from Earth's deep gravity well and to minimize the risk of dependence on Earth for survival. Known as In Situ Resource Utilization, or ISRU, the ability to convert space resources into useful and mission-critical products has been shown in numerous studies to be mission and architecture enhancing or enabling. However, at the time of the release of the US Vision for Space Exploration in 2004, only concept-feasibility hardware for ISRU technologies and capabilities had been built and tested in the laboratory; no ISRU hardware had ever flown on a mission to the Moon or Mars. As a result, an ISRU development project was established with phased development of multiple generations of hardware and systems. To bridge the gap between past ISRU feasibility hardware and future hardware needed for space missions, and to give mission and architecture planners confidence that ISRU capabilities would meet exploration needs, the ISRU development project incorporated extensive ground and analog site testing to mature hardware, operations, and interconnectivity with other exploration systems linked to ISRU products. This report documents the series of analog test activities performed from 2008 to 2012, the stepwise progress achieved, and the end-to-end system and mission demonstrations accomplished in this test program.
NASA Astrophysics Data System (ADS)
Kyrkou, Christos; Theocharides, Theocharis
2016-07-01
Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms as well as the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which allows the system to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.
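The idea of the search cascade can be summarized with a small C sketch: a window reaches the expensive classifier only if cheap motion, depth, and edge cues all pass. The cue fields and threshold values are assumptions standing in for the FPGA preprocessing stages.

```c
/* Per-window cues produced by the cheap front-end stages. */
typedef struct {
    double motion_energy; /* from frame differencing       */
    double depth_m;       /* from stereo/depth computation */
    double edge_density;  /* from an edge map              */
} window_cues;

/* Early-exit gating: each stage discards most windows, so the full
 * classifier only sees a small fraction of the image data. */
int window_passes_cascade(const window_cues *w)
{
    if (w->motion_energy < 0.05) return 0;              /* static region        */
    if (w->depth_m < 0.5 || w->depth_m > 5.0) return 0; /* implausible range    */
    if (w->edge_density < 0.10) return 0;               /* too little structure */
    return 1; /* only now invoke the classification engine */
}
```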
A Low-Power All-Digital on-Chip CMOS Oscillator for a Wireless Sensor Node
Sheng, Duo; Hong, Min-Rong
2016-01-01
This paper presents an all-digital low-power oscillator for reference clocks in wireless body area network (WBAN) applications. The proposed on-chip complementary metal-oxide-semiconductor (CMOS) oscillator provides low-frequency clock signals with low power consumption, high delay resolution, and low circuit complexity. The cascade-stage structure of the proposed design simultaneously achieves high resolution and a wide frequency range. The proposed hysteresis delay cell further reduces the power consumption and hardware costs by 92.4% and 70.4%, respectively, relative to conventional designs. The proposed design is implemented in a standard performance 0.18 μm CMOS process. The measured operational frequency ranged from 7 to 155 MHz, and the power consumption was improved to 79.6 μW (@7 MHz) with a 4.6 ps resolution. The proposed design can be implemented in an all-digital manner, which is highly desirable for system-level integration. PMID:27754439
Modular, thermal bus-to-radiator integral heat exchanger design for Space Station Freedom
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Ewert, Michael
1990-01-01
The baseline concept is introduced for the 'integral heat exchanger' (IHX), which is the interface of the two-phase thermal bus with the heat-rejecting radiator panels. A direct bus-to-radiator heat-pipe integral connection replaces the present interface hardware to reduce the weight and complexity of the heat-exchange mechanism. The IHX is presented in detail and compared to the baseline system assuming certain values for heat rejection, mass per unit width, condenser capacity, contact conductance, and assembly mass. The spreadsheet comparison can be used to examine a variety of parameters such as radiator length and configuration. The IHX is shown to permit the reduction of panel size and system mass in response to better conductance and packaging efficiency. The IHX is found to be a suitable heat-rejection system for the Space Station Freedom because it uses present technology and eliminates the interface mechanisms.
A Web-Based Monitoring System for Multidisciplinary Design Projects
NASA Technical Reports Server (NTRS)
Rogers, James L.; Salas, Andrea O.; Weston, Robert P.
1998-01-01
In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes, and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
Images obtained by observing the Sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise. However, training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, the OpenMP parallel programming interface is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The program achieves a speedup of 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easily ported to other multi-core platforms.
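A minimal sketch of the data-parallel transformation described above, in C with OpenMP: independent atom updates run concurrently instead of one at a time. The update_atom callback is a stand-in for the K-SVD rank-1 refinement of a single atom; treating the updates as independent is exactly the approximation the parallel version makes.

```c
#include <omp.h>

/* Update all dictionary atoms in parallel. Under classical K-SVD the
 * atoms are refined sequentially; here multiple atoms are updated
 * simultaneously, which is the change the paper introduces. */
void update_all_atoms(double **dictionary, int n_atoms,
                      void (*update_atom)(double *atom))
{
    #pragma omp parallel for schedule(dynamic)
    for (int k = 0; k < n_atoms; k++)
        update_atom(dictionary[k]);
}
```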
Insoo Kim; Bhagat, Yusuf A
2016-08-01
The standard in noninvasive blood pressure (BP) measurement is an inflatable cuff device based on the oscillometric method, which poses several practical challenges for continuous BP monitoring. Here, we present a novel ultra-wideband RF Doppler radar sensor as a next-generation mobile interface for characterizing fluid flow speeds and, ultimately, for measuring cuffless blood flow in the human wrist. The system takes advantage of 7.1-10.5 GHz ultra-wideband signals, which can reduce transceiver complexity and power consumption overhead. Results obtained from hardware development, antenna design, human wrist modeling, and subsequent phantom development are reported. Our lab bench setup with a peristaltic pump was capable of characterizing various flow speed components during a linear velocity sweep of 5-62 cm/s. The sensor holds potential for providing estimates of heart rate and blood pressure.
Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Qiheng; Zhang, Jianlin
2011-11-01
An efficient algorithm for blind image deconvolution and its high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, covering both the algorithm structure and the numerical calculation methods. The main optimizations are: modularization of the structure for good implementation feasibility, reduction of the computation and data dependency of the 2D FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
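To illustrate the segmented look-up-table acceleration mentioned above, here is a C sketch that replaces pow(x, a) over a bounded input range with a table plus linear interpolation. The segment count and input range are assumptions; the table is filled once and then reused for every pixel.

```c
#include <math.h>

#define SEGMENTS 256
static double lut[SEGMENTS + 1];

/* Precompute pow(x, exponent) at SEGMENTS+1 evenly spaced points on
 * [0, x_max]. Done once, outside the per-pixel loop. */
void lut_init(double exponent, double x_max)
{
    for (int i = 0; i <= SEGMENTS; i++)
        lut[i] = pow(x_max * i / SEGMENTS, exponent);
}

/* Approximate pow(x, exponent) by interpolating between table entries. */
double lut_pow(double x, double x_max)
{
    if (x <= 0.0) return lut[0];
    double pos = x / x_max * SEGMENTS; /* fractional table position */
    int i = (int)pos;
    if (i >= SEGMENTS) return lut[SEGMENTS];
    double frac = pos - i;
    return lut[i] + frac * (lut[i + 1] - lut[i]);
}
```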
Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits
NASA Astrophysics Data System (ADS)
Haghparast, Majid; Monfared, Asma Taheri
2017-05-01
Multiple-valued logic is a promising approach to reducing the width of reversible or quantum circuits. Quaternary logic is considered a good choice for future quantum computing technology because it is well suited to the encoded realization of binary logic functions, grouping two bits together into each quaternary value. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we first design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have a lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All scales applied in this paper are based on nanometric area.
A novel ultrasonic phased array inspection system to NDT for offshore platform structures
NASA Astrophysics Data System (ADS)
Wang, Hua; Shan, Baohua; Wang, Xin; Ou, Jinping
2007-01-01
A novel ultrasonic phased array detection system is developed for nondestructive testing (NDT). The purpose of the system is to acquire data in real time from a 64-element ultrasonic phased-array transducer and to enable real-time processing of the acquired data. The system is composed of five main parts: a master unit, a main board, eight transmit/receive units, a 64-element transducer, and an external PC. The system can be used with 64-element transducers, exciting 32 elements and receiving and sampling echo signals from 32 elements simultaneously at 62.5 MHz with 8-bit precision. The external PC is used as the user interface, displaying real-time images, and controls overall operation of the system through a USB serial link. The use of the Universal Serial Bus (USB) improves transfer speed and reduces hardware interface complexity. The system software is written in Visual C++.NET and is platform independent.
Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees
NASA Astrophysics Data System (ADS)
Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.
2017-05-01
A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in other than the frontal position through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results showed a high detection rate for CEDT on standard-size images. The algorithm increases the area under the ROC curve by 13% compared to the standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of 5 different cascades for frontal and non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded-system and mobile-device industries because it can reduce hardware cost and extend battery life.
Pollution reduction technology program for small jet aircraft engines, phase 1
NASA Technical Reports Server (NTRS)
Bruce, T. W.; Davis, F. G.; Kuhn, T. E.; Mongia, H. C.
1977-01-01
A series of combustor pressure rig screening tests was conducted on three combustor concepts applied to the TFE731-2 turbofan engine combustion system for the purpose of evaluating their relative emissions reduction potential consistent with prescribed performance, durability, and envelope constraints. The three concepts and their modifications represented increasing potential for reducing emission levels, at the penalty of increased hardware complexity and operational risk. Concept 1 entailed advanced modifications to the present production TFE731-2 combustion system. Concept 2 was based on the incorporation of an axial air-assisted airblast fuel injection system. Concept 3 was a staged premix/prevaporizing combustion system. Significant emissions reductions were achieved with all three concepts, consistent with acceptable combustion system performance. Concepts 2 and 3 were identified as having the greatest achievable emissions reduction potential and were selected to undergo refinement in preparation for ultimate incorporation within an engine.
A Multi-Parameter Approach for Calculating Crack Instability
NASA Technical Reports Server (NTRS)
Zanganeh, M.; Forman, R. G.
2014-01-01
An accurate fracture control analysis of spacecraft pressure systems, boosters, rocket hardware, and other critical low-cycle fatigue cases where the fracture toughness strongly affects cycles to failure requires accurate knowledge of the material fracture toughness. However, the applicability of fracture toughness values measured using standard specimens, and the transferability of those values to crack instability analysis of realistically complex structures, is refutable. The commonly used single-parameter Linear Elastic Fracture Mechanics (LEFM) approach, which relies on the key assumption that the fracture toughness is a material property, would result in inaccurate crack instability predictions. In past years, extensive studies have been conducted to improve single-parameter (K-controlled) LEFM by introducing parameters accounting for geometry or in-plane constraint effects. Despite the importance of thickness (out-of-plane constraint) effects in fracture control problems, the literature is mainly limited to some empirical equations for scaling the fracture toughness data, and only a few theoretically based developments can be found. In aerospace hardware, where the structure might have only one life cycle and weight reduction is crucial, reducing the design margin of safety by decreasing the uncertainty involved in fracture toughness evaluations would result in lighter hardware. In such conditions LEFM would not suffice, and an elastic-plastic analysis would be vital. Multi-parameter elastic-plastic crack-tip field quantification, combined with statistical methods, has been shown to have the potential to be a powerful tool for tackling such problems. However, these approaches have not been comprehensively scrutinized using experimental tests. Therefore, in this paper a multi-parameter elastic-plastic approach is used to study the crack instability problem and the transferability issue by considering the effects of geometrical constraints as well as thickness. The feasibility of the approach is examined using a wide range of specimen geometries and thicknesses manufactured from 7075-T7351 aluminum alloy.
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which, when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptation of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled efficiently for execution on many nodes of a supercomputer by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, something that for systems of high complexity (i.e., most NASA codes) cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other, both of which serve as building blocks for more efficient HPC usage: first, the scheduling and dynamic execution framework, and second, scalable linear algebra libraries built directly on the former.
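A minimal C sketch of the task-graph idea described above: each task lists the tasks it depends on, and a task is dispatched once all of its dependencies are done. The structures and the naive ready-scan are illustrative only, not the project's actual API; a real runtime would keep per-device worker queues and cost models.

```c
#include <stddef.h>

typedef enum { DEV_CPU, DEV_GPU } device_t;

typedef struct task {
    void (*run)(void *arg, device_t dev);
    void *arg;
    struct task **deps; /* tasks that must finish first */
    size_t n_deps;
    int done;
} task_t;

/* Repeatedly dispatch every ready task until all are done.
 * Assumes the graph is acyclic, as a data-dependency graph must be. */
void run_graph(task_t *tasks, size_t n, device_t dev)
{
    size_t remaining = n;
    while (remaining > 0) {
        for (size_t i = 0; i < n; i++) {
            if (tasks[i].done) continue;
            int ready = 1;
            for (size_t d = 0; d < tasks[i].n_deps; d++)
                if (!tasks[i].deps[d]->done) { ready = 0; break; }
            if (ready) {
                tasks[i].run(tasks[i].arg, dev);
                tasks[i].done = 1;
                remaining--;
            }
        }
    }
}
```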
Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan
2015-04-01
Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process and requires rigorous testing to ensure proper functioning of new modules with existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments' (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to interchangeably apply current or voltage signals and measure the tissue response in a semi-parallel fashion. A novel signal-averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute a tissue's response to a pure or mixed tone signal; signal-bin data can be used for traditional applications as well as synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality and can be used to ensure proper tissue-electrode contact. Allocating these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high frame rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames/s, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error was found to be for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline-filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data.
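The bin-classification step described above reduces, per frame, to summing the energy of all FFT bins that are not at known stimulus frequencies. A minimal C sketch follows; the magnitude-spectrum input and the explicit signal-bin list are assumptions about how the FPGA block is fed.

```c
#include <stddef.h>

/* Given the magnitude spectrum of one FFT frame and the indices of the
 * stimulus ("signal") bins, accumulate the energy of every other bin.
 * The result is the noise-power metric used to judge electrode contact. */
double noise_power(const double *mag, size_t n_bins,
                   const size_t *signal_bins, size_t n_signal)
{
    double noise = 0.0;
    for (size_t i = 0; i < n_bins; i++) {
        int is_signal = 0;
        for (size_t s = 0; s < n_signal; s++)
            if (signal_bins[s] == i) { is_signal = 1; break; }
        if (!is_signal)
            noise += mag[i] * mag[i];
    }
    return noise;
}
```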
Hardware interface unit for control of shuttle RMS vibrations
NASA Technical Reports Server (NTRS)
Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran
1994-01-01
Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self-contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) is built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility in NASA's Marshall Space Flight Center in Huntsville, Alabama.
Technology advancement of an oxygen generation subsystem
NASA Technical Reports Server (NTRS)
Lee, M. K.; Burke, K. A.; Schubert, F. H.; Wynveen, R. A.
1979-01-01
An oxygen generation subsystem based on water electrolysis was developed and tested to further advance the concept and technology of the spacecraft air revitalization system. Emphasis was placed on demonstrating the subsystem integration concept and hardware maturity at a subsystem level. The integration concept of the air revitalization system was found to be feasible. Hardware and technology of the oxygen generation subsystem was demonstrated to be close to the preprototype level. Continued development of the oxygen generation technology is recommended to further reduce the total weight penalties of the oxygen generation subsystem through optimization.
Software technology insertion: A study of success factors
NASA Technical Reports Server (NTRS)
Lydon, Tom
1990-01-01
Managing software development in large organizations has become increasingly difficult due to increasing technical complexity, stricter government standards, a shortage of experienced software engineers, competitive pressure for improved productivity and quality, the need to co-develop hardware and software together, and the rapid changes in both hardware and software technology. The 'software factory' approach to software development minimizes risks while maximizing productivity and quality through standardization, automation, and training. However, in practice, this approach is relatively inflexible when adopting new software technologies. The methods that a large multi-project software engineering organization can use to increase the likelihood of successful software technology insertion (STI), especially in a standardized engineering environment, are described.
A digital matched filter for reverse time chaos.
Bailey, J Phillip; Beal, Aubrey N; Dean, Robert N; Hamilton, Michael C
2016-07-01
The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.
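Since the matched filter is realized as a direct-form FIR filter, its per-sample operation can be sketched in C as below. The coefficient values (the time-reversed basis pulse) come from the paper and are not reproduced here; the function signature is illustrative.

```c
#include <stddef.h>

/* One step of a direct-form FIR matched filter: shift the new sample
 * into the delay line and correlate it with the template coefficients.
 * A peak in the returned value marks detection of the chaotic signal. */
double fir_step(const double *coeff, double *delay_line, size_t taps,
                double sample)
{
    for (size_t i = taps - 1; i > 0; i--)
        delay_line[i] = delay_line[i - 1];
    delay_line[0] = sample;

    double acc = 0.0;
    for (size_t i = 0; i < taps; i++)
        acc += coeff[i] * delay_line[i];
    return acc;
}
```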
One year old and growing: a status report of the International Space Station and its partners
NASA Technical Reports Server (NTRS)
Bartoe, J. D.; Fortenberry, L.
2000-01-01
The International Space Station (ISS), as the largest international science and engineering program in history, features unprecedented technical, cost, scheduling, managerial, and international complexity. A number of major milestones have been accomplished to date, including the construction of major elements of flight hardware, the development of operations and sustaining engineering centers, astronaut training, and eight Space Shuttle/Mir docking missions. International partner contributions and levels of participation have been baselined, and negotiations and discussions are nearing completion regarding bartering arrangements for services and new hardware. As ISS is successfully executed, it can pave the way for more inspiring cooperative achievements in the future.
Big Science, Small-Budget Space Experiment Package Aka MISSE-5: A Hardware And Software Perspective
NASA Technical Reports Server (NTRS)
Krasowski, Michael; Greer, Lawrence; Flatico, Joseph; Jenkins, Phillip; Spina, Dan
2007-01-01
Conducting space experiments with small budgets is a fact of life for many design groups with low-visibility science programs. One major consequence is that specialized space grade electronic components are often too costly to incorporate into the design. Radiation mitigation now becomes more complex as a result of being restricted to the use of commercial off-the-shelf (COTS) parts. Unique hardware and software design techniques are required to succeed in producing a viable instrument suited for use in space. This paper highlights some of the design challenges and associated solutions encountered in the production of a highly capable, low cost space experiment package.
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.
1980-01-01
An evaluation is presented, defined as an assessment of the adequacy of the system design against known functional and performance requirements. The proposed Rockwell International AIDS 3 card, document, and data flow are presented to summarize the concepts involved and the relationships between functions. The analysis and evaluation include a study of system capability, processing rates, search requirements, and response accuracy, as well as a consideration of operational components and hardware integration. Results indicate that the AIDS 3 system concept is operationally feasible if production capacity is slightly enhanced, but that operational complexity, hardware integration, and a lack of conceptual data pertinent to some of the functions are areas of concern.
Performance of GeantV EM Physics Models
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2017-10-01
The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.
Trends in computer hardware and software.
Frankenfeld, F M
1993-04-01
Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.
Easy Handling of Sensors and Actuators over TCP/IP Networks by Open Source Hardware/Software
Mejías, Andrés; Herrera, Reyes S.; Márquez, Marco A.; Calderón, Antonio José; González, Isaías; Andújar, José Manuel
2017-01-01
There are several specific solutions for accessing sensors and actuators present in any process or system through a TCP/IP network, either local or a wide area type like the Internet. The usage of sensors and actuators of different nature and diverse interfaces (SPI, I2C, analogue, etc.) makes access to them from a network in a homogeneous and secure way more complex. A framework, including both software and hardware resources, is necessary to simplify and unify networked access to these devices. In this paper, a set of open-source software tools, specifically designed to cover the different issues concerning the access to sensors and actuators, and two proposed low-cost hardware architectures to operate with the abovementioned software tools are presented. They allow integrated and easy access to local or remote sensors and actuators. The software tools, integrated in the free authoring tool Easy Java and Javascript Simulations (EJS) solve the interaction issues between the subsystem that integrates sensors and actuators into the network, called convergence subsystem in this paper, and the Human Machine Interface (HMI)—this one designed using the intuitive graphical system of EJS—located on the user’s computer. The proposed hardware architectures and software tools are described and experimental implementations with the proposed tools are presented. PMID:28067801
The Use of HFC (CFC Free) Processes at the NASA Stennis Space Center
NASA Technical Reports Server (NTRS)
Ross, Richard H.
1997-01-01
The search for alternatives to ozone-depleting chemicals was heightened when, in 1990, the more than 65 countries that had signed the Montreal Protocol agreed to phase out these chemicals completely by the year 2000. In 1992, then-President Bush advanced this date for the United States to January 1, 1996. In 1991, it was realized that the planned phase-out and eventual elimination of ozone-depleting chemicals imposed by the Montreal Protocol and the resulting Clean Air Act (CAA) amendments would impact the cleaning and testing of aerospace hardware at the NASA Stennis Space Center. Because of this regulation, the Test & Engineering Sciences Laboratory has been working on solvent conversion studies to replace CFC-113. Aerospace hardware and test equipment used in rocket propulsion systems require extreme cleanliness levels to function and maintain their integrity. Because the cleanliness of aerospace hardware will be affected by the elimination of CFC-113, alternate cleaning technologies, including the use of fluorinated solvents, have been studied as potential replacements. Several aqueous processes have been identified for cleaning moderately sized components. However, no known aqueous alternative exists for cleaning and validating T&ME and complex-geometry hardware. This paper discusses the choices and the methodologies that were used to screen potential alternatives to CFC-113.
Superconducting Optoelectronic Circuits for Neuromorphic Computing
NASA Astrophysics Data System (ADS)
Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo
2017-03-01
Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm2 at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
Symphony: A Framework for Accurate and Holistic WSN Simulation
Riliskis, Laurynas; Osipov, Evgeny
2015-01-01
Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
Design and Evolution of a Modular Tensegrity Robot Platform
NASA Technical Reports Server (NTRS)
Bruce, Jonathan; Caluwaerts, Ken; Iscen, Atil; Sabelhaus, Andrew P.; SunSpiral, Vytas
2014-01-01
NASA Ames Research Center is developing a compliant modular tensegrity robotic platform for planetary exploration. In this paper we present the design and evolution of the platform's main hardware component, an untethered, robust tensegrity strut, with rich sensor feedback and cable actuation. Each strut is a complete robot, and multiple struts can be combined together to form a wide range of complex tensegrity robots. Our current goal for the tensegrity robotic platform is the development of SUPERball, a 6-strut icosahedron underactuated tensegrity robot aimed at dynamic locomotion for planetary exploration rovers and landers, but the aim is for the modular strut to enable a wide range of tensegrity morphologies. SUPERball is a second generation prototype, evolving from the tensegrity robot ReCTeR, which is also a modular, lightweight, highly compliant 6-strut tensegrity robot that was used to validate our physics-based NASA Tensegrity Robot Toolkit (NTRT) simulator. Many hardware design parameters of the SUPERball were driven by locomotion results obtained in our validated simulator. These evolutionary explorations helped constrain motor torque and speed parameters, along with strut and string stress. As construction of the hardware has been finalized, we have also used the same evolutionary framework to evolve controllers that respect the built hardware parameters.
Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John
2008-01-01
NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.
Massengill, L W; Mundie, D B
1992-01-01
A neural network IC based on dynamic charge injection is described. The hardware design is space and power efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed which incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.
[Soft- and hardware support for the setup for computer tracking of radiation teletherapy].
Tarutin, I G; Piliavets, V I; Strakh, A G; Minenko, V F; Golubovskiĭ, A I
1983-06-01
A hardware and software computer-assisted complex has been worked out for gamma-beam therapy. The complex included all radiotherapeutic units, including a Siemens program-controlled betatron with an energy of 42 MeV, an ES-1022 computer, a Medigraf system for processing graphic information, a Mars-256 system for monitoring the homogeneity of the dose-rate distribution over the irradiation field, and a package of mathematical programs for selecting a plan of irradiation for various tumor sites. The prospects of using such complexes in the dosimetric support of radiation therapy are discussed.
Planning and Implementing Technical Services Workstations.
ERIC Educational Resources Information Center
Kaplan, Michael, Ed.
The job of the library cataloger has grown increasingly complex. Catalogers must draw from a vast pool of dynamic information as they handle traditional and new forms of media. Technical Services Workstations (TSWs) provide catalogers the network data, application programs, and standard hardware required to catalog all types of media quickly and…
2004-04-15
Combustion Module-1 was one of the most complex and technologically sophisticated pieces of hardware ever to be included as a part of a Spacelab mission. Shown here are the two racks which comprised CM-1; the rack on the right shows the combustion chamber with the Structure Of Flame Balls at Low Lewis-numbers (SOFBALL) experiment inside.
Description of the microbial ecology evaluation device, flight equipment, and ground transporter
NASA Technical Reports Server (NTRS)
Chassay, C. E.; Taylor, G. R.
1973-01-01
Exposure of test systems in space required the fabrication of specialized hardware termed a Microbial Ecology Evaluation Device that had individual test chambers and a complex optical filter system. The characteristics of this device and the manner in which it was deployed in space are described.
Analysis of Software Systems for Specialized Computers,
computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of...purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)
Enhancing the Internet of Things Architecture with Flow Semantics
ERIC Educational Resources Information Center
DeSerranno, Allen Ronald
2017-01-01
Internet of Things ("IoT") systems are complex, asynchronous solutions often comprised of various software and hardware components developed in isolation of each other. These components function with different degrees of reliability and performance over an inherently unreliable network, the Internet. Many IoT systems are developed within…
Responding to an RFP: A Vendor's Viewpoint.
ERIC Educational Resources Information Center
Kington, Robert A.
1987-01-01
Outlines factors used by online vendors to decide whether to bid on RFPs (requests for proposals) for library automation systems, including specifications for software, hardware or performance requirements not met by the vendor; specifications based on competitors' systems; the size and complexity of the request itself; and vendors' time…
Spintronic Nanodevices for Bioinspired Computing
Grollier, Julie; Querlioz, Damien; Stiles, Mark D.
2016-01-01
Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control of biomedical prostheses. However, one of the major challenges of fabricating bioinspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von-Neumann bottleneck arising when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal–oxide–semiconductor (CMOS) bioinspired hardware. PMID:27881881
Design of efficient circularly symmetric two-dimensional variable digital FIR filters.
Bindima, Thayyil; Elias, Elizabeth
2016-05-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.
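Since the Farrow structure carries the tunability claim, a small sketch may help: fixed FIR sub-filters run in parallel and their outputs are combined by powers of a fractional parameter mu, so the response is tuned at run time without redesigning any filter. The cubic-Lagrange sub-filter set below is a common textbook example (tap-ordering conventions vary); it is not the CSD, multiplier-less coefficient set optimized in the paper.

```python
import numpy as np

def farrow_filter(x, subfilters, mu):
    """Farrow structure: y[n] = sum_k mu**k * (c_k * x)[n], where the c_k
    are fixed FIR sub-filters and mu in [0, 1) is the run-time tuning knob."""
    y = np.zeros(len(x))
    for k, c in enumerate(subfilters):
        y += (mu ** k) * np.convolve(x, c, mode="same")
    return y

# Cubic-Lagrange interpolator written in Farrow form (one common convention).
subfilters = [
    np.array([0.0, 0.0, 1.0, 0.0]),        # mu^0 branch
    np.array([-1/6, 1.0, -1/2, -1/3]),     # mu^1 branch
    np.array([0.0, 1/2, -1.0, 1/2]),       # mu^2 branch
    np.array([1/6, -1/2, 1/2, -1/6]),      # mu^3 branch
]
x = np.sin(0.1 * np.arange(64))
y = farrow_filter(x, subfilters, mu=0.3)   # ~0.3-sample fractional delay
```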
A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.
Gui, Jinsong; Zhou, Kai; Xiong, Naixue
2016-09-25
Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve the network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to enhance the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can determine adaptively not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head function and the optimization of cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.
NASA Technical Reports Server (NTRS)
Gernand, Jeremy M.
2004-01-01
Experience with the International Space Station (ISS) program demonstrates the degree to which engineering design and operational solutions must protect crewmembers from health risks due to long-term exposure to the microgravity environment. Risks to safety and health due to degradation in the microgravity environment include crew inability to complete emergency or nominal activities, increased risk of injury, and inability to complete a safe return to the ground due to reduced strength or embrittled bones. These risks without controls slowly increase in probability for the length of the mission and become more significant for increasing mission durations. Countermeasures to microgravity include hardware systems that place a crewmember's body under elevated stress to produce an effect similar to daily exposure to gravity. The ISS countermeasure system is predominately composed of customized exercise machines. Historical treatment of microgravity countermeasure systems as medical research experiments unintentionally reduced the foreseen importance and therefore the capability of the systems to function in a long-term operational role. Long-term hazardous effects and steadily increasing operational risks due to non-functional countermeasure equipment require a more rigorous design approach and incorporation of redundancy into seemingly non-mission-critical hardware systems. Variations in the rate of health degradation and responsiveness to countermeasures among the crew population drastically increase the challenge for design requirements development and verification of the appropriate risk control strategy. The long-term nature of the hazards and severe limits on logistical re-supply mass, volume, and frequency complicate assessment of hardware availability and verification of an adequate maintenance and sparing plan. Design achievement of medically defined performance requirements by microgravity countermeasure systems and incorporation of adequate failure tolerance significantly reduces these risks. Future implementation of on-site monitoring hardware for critical health parameters such as bone mineral density would allow greater responsiveness, efficiency, and optimized design of the countermeasures system.
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
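As a rough illustration of the prediction-plus-Golomb-Rice pipeline described above, here is a toy, non-adaptive sketch; the two-sample average predictor, the zigzag mapping, and the fixed parameter k are illustrative simplifications of the onboard scheme.

```python
def zigzag(e):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2 -> 0,1,2,3."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(v, k):
    """Golomb-Rice code: quotient v >> k in unary, remainder in k plain bits.
    Small residuals get short codes, which is why a good predictor matters."""
    q, r = v >> k, v & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

samples = [100, 102, 101, 105, 104]
bits = ""
for i in range(2, len(samples)):
    pred = (samples[i - 1] + samples[i - 2] + 1) // 2   # simple interpolation predictor
    bits += rice_encode(zigzag(samples[i] - pred), k=2)
```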
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, James M.; Devine, Karen Dragon; Gentile, Ann C.
2014-09-01
As computer systems grow in both size and complexity, the need for applications and run-time systems to adjust to their dynamic environment also grows. The goal of the RAAMP LDRD was to combine static architecture information and real-time system state with algorithms to conserve power, reduce communication costs, and avoid network contention. We developed new data collection and aggregation tools to extract static hardware information (e.g., node/core hierarchy, network routing) as well as real-time performance data (e.g., CPU utilization, power consumption, memory bandwidth saturation, percentage of used bandwidth, number of network stalls). We created application interfaces that allowed this data to be used easily by algorithms. Finally, we demonstrated the benefit of integrating system and application information for two use cases. The first used real-time power consumption and memory bandwidth saturation data to throttle concurrency to save power without increasing application execution time. The second used static or real-time network traffic information to reduce or avoid network congestion by remapping MPI tasks to allocated processors. Results from our work are summarized in this report; more details are available in our publications [2, 6, 14, 16, 22, 29, 38, 44, 51, 54].
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
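A concrete instance of the vertex-centric-to-linear-algebra mapping is breadth-first search written as repeated sparse matrix-vector products; SciPy stands in here for a GraphBLAS backend, and masks and semirings are simplified.

```python
import numpy as np
from scipy.sparse import csr_matrix

def bfs_levels(adj, source):
    """Level-synchronous BFS as repeated sparse matrix-vector products,
    the core idiom behind GraphBLAS-style graph analytics. `adj` is a
    CSR adjacency matrix; the visited test plays the role of the
    GraphBLAS output mask."""
    n = adj.shape[0]
    level = np.full(n, -1)
    frontier = np.zeros(n)
    frontier[source] = 1.0
    depth = 0
    while frontier.any():
        level[frontier > 0] = depth
        nxt = adj.T @ frontier                        # expand frontier one hop
        frontier = np.where(level == -1, nxt, 0.0)    # mask off visited vertices
        depth += 1
    return level

# 4-vertex path graph 0-1-2-3
rows, cols = [0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]
A = csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))
print(bfs_levels(A, 0))                               # [0 1 2 3]
```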
A complex noise reduction method for improving visualization of SD-OCT skin biomedical images
NASA Astrophysics Data System (ADS)
Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Khramov, Alexander G.
2014-05-01
In this paper we consider an original method of solving the noise reduction problem to improve the visualization quality of SD-OCT images of skin and tumors. The principal advantages of OCT are high resolution and the possibility of in vivo analysis. We propose a two-stage algorithm: 1) process the raw one-dimensional A-scans of SD-OCT, and 2) remove noise from the resulting B(C)-scans. The general mathematical methods of SD-OCT are unstable: if the noise of the CCD is 1.6% of the dynamic range, the resulting distortions already reach 25-40% of the dynamic range. At the first stage we use resampling of A-scans and simple linear filters to reduce the amount of data and remove the noise of the CCD camera. The efficiency, improved productivity, and preservation of the axial resolution with this approach are shown. At the second stage we use effective algorithms based on the Hilbert-Huang Transform to remove noise peaks more accurately. The effectiveness of the proposed approach for visualization of malignant and benign skin tumors (melanoma, BCC, etc.) and a significant improvement in SNR for different noise reduction methods are shown. We also consider a modification of this method depending on the specific hardware and software features of the OCT setup used. The basic version does not require any hardware modifications of existing equipment. The effectiveness of the proposed method for 3D visualization of tissues can simplify medical diagnosis in oncology.
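Stage 1, as described, amounts to decimating each raw A-scan and applying a cheap linear filter before the costlier Hilbert-Huang stage; a minimal sketch, with the decimation factor and kernel size as illustrative values:

```python
import numpy as np

def stage_one(ascan, factor=2, kernel=5):
    """Resample a raw A-scan and smooth it with a moving-average (linear)
    filter to suppress CCD noise ahead of the HHT-based second stage."""
    x = np.asarray(ascan, dtype=float)[::factor]   # resampling
    k = np.ones(kernel) / kernel
    return np.convolve(x, k, mode="same")          # linear smoothing
```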
Real-time control using open source RTOS
NASA Astrophysics Data System (ADS)
Irwin, Philip C.; Johnson, Richard L., Jr.
2002-12-01
Complex telescope systems such as interferometers tend to rely heavily on hard real-time operating systems (RTOS). It has been standard practice at NASA's Jet Propulsion Laboratory (JPL) and many other institutions to use costly commercial RTOSs and hardware. After developing a real-time toolkit for VxWorks on the PowerPC platform (dubbed RTC), the interferometry group at JPL is porting this code to the Real-Time Application Interface (RTAI), an open source RTOS that is essentially an extension to the Linux kernel. This port has the potential to reduce software and hardware costs for future projects, while increasing the level of performance. The goals of this paper are to briefly describe the RTC toolkit, highlight the successes and pitfalls of porting the toolkit from VxWorks to Linux-RTAI, and to discuss future enhancements that will be implemented as a direct result of this port. The first port of any body of code is always the most difficult since it uncovers the OS-specific calls and forces "red flags" into those portions of the code. For this reason, it has also been a huge benefit that the project chose a generic, platform-independent OS extension, ACE, and its CORBA counterpart, TAO. This port of RTC will pave the way for conversions to other environments, the most interesting of which is a non-real-time simulation environment, currently being considered by the Space Interferometry Mission (SIM) and the Terrestrial Planet Finder (TPF) Projects.
Driver Training Simulator for Backing Up Commercial Vehicles with Trailers
NASA Astrophysics Data System (ADS)
Berg, Uwe; Wojke, Philipp; Zöbel, Dieter
Backing up tractors with trailers is a difficult task since the kinematic behavior of articulated vehicles is complex and hard to control. Unskilled drivers in particular are overwhelmed by the complicated steering process. To learn and practice the steering behavior of articulated vehicles, we developed a 3D driving simulator. The simulator can handle different types of articulated vehicles like semi-trailers, one- and two-axle trailers, or gigaliners. The use of a driving simulator offers many advantages over the use of real vehicles. One of the main advantages is the possibility to learn the steering behavior of all vehicle types. Drivers can be given more and better driving instructions like collision warnings or steering hints. Furthermore, the driver training costs can be reduced. Moreover, mistakes of the student do not lead to real damages and costly repairs. The hardware of the simulator consists of a low-cost commercial driving stand with original truck parts, a projection of the windshield and two flat panel monitors for the left and right exterior mirrors. Standard PC hardware is used for controlling the driving stand and for generating the real-time 3D environment. Each aspect of the simulation, like realistic vehicle movements or generation of different views, is handled by a specific software module. This flexible system can be easily extended, which offers the opportunity for other uses than just driver training. Therefore, we use the simulator for the development and test of driver assistance systems.
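The steering difficulty the simulator targets comes from the hitch-angle kinematics; the standard textbook tractor/semi-trailer model below shows it in a few lines (geometry parameters are illustrative, not the simulator's vehicle data).

```python
import math

def step_tractor_trailer(state, v, steer, wheelbase=3.5, hitch_len=8.0, dt=0.02):
    """One Euler step of a kinematic tractor/semi-trailer model; v < 0 is
    reversing, where the hitch-angle dynamics become unstable."""
    x, y, theta, theta_t = state                     # tractor pose + trailer heading
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(steer) * dt  # bicycle-model tractor yaw
    theta_t += (v / hitch_len) * math.sin(theta - theta_t) * dt
    return (x, y, theta, theta_t)

# Backing up with a small fixed steering angle: the hitch angle drifts,
# which is exactly what a driver must continuously correct.
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(200):
    state = step_tractor_trailer(state, v=-1.5, steer=0.05)
hitch_angle = state[2] - state[3]
```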
A Hardware Model Validation Tool for Use in Complex Space Systems
NASA Technical Reports Server (NTRS)
Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.
2010-01-01
One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.
Reinjection laser oscillator and method
McLellan, Edward J.
1984-01-01
A UV-preionized CO2 oscillator with integral four-pass amplifier capable of providing 1 to 5 GW laser pulses with pulse widths from 0.1 to 0.5 ns full width at half-maximum (FWHM) is described. The apparatus is operated at any pressure from 1 atm to 10 atm without the necessity of complex high voltage electronics. The reinjection technique employed gives rise to a compact, efficient system that is particularly immune to alignment instabilities with a minimal amount of hardware and complexity.
Microwave techniques for measuring complex permittivity and permeability of materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guillon, P.
1995-08-01
Different materials are of fundamental importance to the aerospace, microwave, electronics and communications industries, and include for example microwave absorbing materials, antenna lenses and radomes, substrates for MMIC, and microwave components and antennas. Basic measurements for the complex permittivity and permeability of those homogeneous solid materials in the microwave spectral region are described, including hardware, instrumentation and analysis. Elevated-temperature measurements, as well as measurement intercomparisons, with a discussion of the strengths and weaknesses of each technique, are also presented.
NASA Technical Reports Server (NTRS)
Owen, Robert B.; Gyekenyesi, Andrew L.; Inman, Daniel J.; Ha, Dong S.
2011-01-01
The Integrated Vehicle Health Management (IVHM) Project, sponsored by NASA's Aeronautics Research Mission Directorate, is conducting research to advance the state of highly integrated and complex flight-critical health management technologies and systems. An effective IVHM system requires Structural Health Monitoring (SHM). The impedance method is one such SHM technique for detecting and monitoring damage in complex structures. This position paper on the impedance method presents the current state of the art, future directions, applications and possible flight test demonstrations.
Laboratory complex for simulation of navigation signals of pseudosatellites
NASA Astrophysics Data System (ADS)
Ratushniak, V. N.; Gladyshev, A. B.; Sokolovskiy, A. V.; Mikhov, E. D.
2018-05-01
In this article, the organization, structure, and formation of navigation signals of pseudosatellites for a short-range navigation system based on the National Instruments hardware-software complex are considered. A software model is presented that performs the formation and management of the pseudo-random sequence of the navigation signal and of the format of the transmitted pseudosatellite navigation information. A variant of constructing the transmitting equipment of the pseudosatellite base stations is also provided.
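Pseudolite ranging signals typically reuse GNSS-style spreading codes; a compact sketch of a Gold-code generator patterned on the GPS C/A code follows (feedback taps per the public GPS interface specification; the abstract does not state that this particular complex uses these exact codes).

```python
def gold_code(phase_taps, length=1023):
    """GPS C/A-style Gold code: XOR of two 10-stage m-sequences.
    G1 feedback: stages 3, 10; G2 feedback: stages 2, 3, 6, 8, 9, 10.
    The PRN-specific G2 output is the XOR of two 'phase' stages."""
    g1_reg, g2_reg = [1] * 10, [1] * 10
    chips = []
    for _ in range(length):
        g2_out = g2_reg[phase_taps[0] - 1] ^ g2_reg[phase_taps[1] - 1]
        chips.append(g1_reg[9] ^ g2_out)
        g1_fb = g1_reg[2] ^ g1_reg[9]
        g2_fb = g2_reg[1] ^ g2_reg[2] ^ g2_reg[5] ^ g2_reg[7] ^ g2_reg[8] ^ g2_reg[9]
        g1_reg = [g1_fb] + g1_reg[:9]
        g2_reg = [g2_fb] + g2_reg[:9]
    return chips

chips = gold_code((2, 6))   # (2, 6) are the PRN 1 phase taps in the GPS ICD
```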
Fastening hardware to honeycomb panels
NASA Technical Reports Server (NTRS)
Kenger, A.
1979-01-01
Adhesive bonding reduces the likelihood of skin failure due to excessive forces or torques by utilizing an adhesive bond to the honeycomb skin. The concept is useful in other applications of composites such as aircraft, automobiles, and home appliances.
Impact of flight systems integration on future aircraft design
NASA Technical Reports Server (NTRS)
Hood, R. V.; Dollyhigh, S. M.; Newsom, J. R.
1984-01-01
Integration trends in aircraft are discussed with an eye to their manifestation in future aircraft designs through interdisciplinary technology integration. Current practices use software changes or small hardware fixes to solve problems late in the design process, e.g., low static stability to upgrade fuel efficiency. A total energy control system has been devised to integrate autopilot and autothrottle functions, thereby eliminating hardware and reducing software, pilot workload, and cost, while improving flight efficiency and performance. Integrated active controls offer reduced weight and larger payloads for transport aircraft. The introduction of vectored thrust may eliminate horizontal and vertical stabilizers, and location of the thrust at the vehicle center of gravity can provide vertical takeoff and landing capabilities. It is suggested that further efforts will open a new discipline, aeroservoelasticity, and tests will become multidisciplinary, involving controls, aerodynamics, propulsion and structures.
The LHCb trigger and its upgrade
NASA Astrophysics Data System (ADS)
Dziurda, A.; LHCb Trigger Group
2016-07-01
The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate of 30 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the performance of the LHCb trigger system during Run I of the LHC. Special attention is given to the use of multivariate analyses in the High Level Trigger. The major bottleneck for hadronic decays is the hardware trigger. LHCb plans a major upgrade of the detector and DAQ system in the LHC shutdown of 2018, enabling a purely software-based trigger to process the full 30 MHz of inelastic collisions delivered by the LHC. We demonstrate that the planned architecture will be able to meet this challenge.
A Virtual Laboratory for Aviation and Airspace Prognostics Research
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Gorospe, George; Teubert, Christ; Quach, Cuong C.; Hogge, Edward; Darafsheh, Kaveh
2017-01-01
Integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other aviation technologies in the airspace is becoming more and more complicated, and will continue to do so in the future. Inclusion of new technology and complexity into the airspace increases the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems and systems of systems can be challenging, expensive, and at times unsafe when implementing real life scenarios. The application of prognostics to aviation and airspace management may produce new tools and insight into these problems. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. In our research, we develop a live, distributed, hardware-in-the-loop Prognostics Virtual Laboratory testbed for aviation and airspace prognostics. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. In our earlier work [1] we discussed the initial Prognostics Virtual Laboratory testbed development work and related results for milestones 1 and 2. This paper describes the design, development, and testing of the integrated testbed, which is part of milestone 3, along with our next steps for validation of this work. Through a framework consisting of software/hardware modules and associated interface clients, the distributed testbed enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. The testbed modules can be used cohesively to construct complex and relevant airspace scenarios for research. Four modules are key to this research: the virtual aircraft module, which uses the X-Plane simulator and X-PlaneConnect toolbox; the live aircraft module, which connects fielded aircraft using onboard cellular communications devices; the hardware-in-the-loop (HITL) module, which connects laboratory-based bench-top hardware testbeds; and the research module, which contains diagnostics and prognostics tools for analysis of live air traffic situations and vehicle health conditions. The testbed also features other modules for data recording and playback, information visualization, and air traffic generation. Software reliability, safety, and latency are some of the critical design considerations in the development of the testbed.
Optimal Discrete Spatial Compression for Beamspace Massive MIMO Signals
NASA Astrophysics Data System (ADS)
Jiang, Zhiyuan; Zhou, Sheng; Niu, Zhisheng
2018-05-01
Deploying a massive number of antennas at the base station side can boost the cellular system performance dramatically. However, it involves significant additional radio-frequency (RF) front-end complexity, hardware cost, and power consumption. To address this issue, the beamspace-multiple-input-multiple-output (beamspace-MIMO) based approach is considered as a promising solution. In this paper, we first show that the traditional beamspace-MIMO suffers from spatial power leakage and imperfect channel statistics estimation. A beam combination module is hence proposed, which consists of a small number (compared with the number of antenna elements) of low-resolution (possibly one-bit) digital (discrete) phase shifters after the beamspace transformation to further compress the beamspace signal dimensionality, such that the number of RF chains can be reduced beyond beamspace transformation and beam selection. The optimum discrete beam combination weights for the uplink are obtained based on the branch-and-bound (BB) approach. The key to the BB-based solution is to solve the embodied sub-problem, whose solution is derived in closed form. Based on the solution, a sequential greedy beam combination scheme with linear complexity (w.r.t. the number of beams in the beamspace) is proposed. Link-level simulation results based on realistic channel models and long-term-evolution (LTE) parameters are presented, which show that the proposed schemes can reduce the number of RF chains by up to 25% with a one-bit digital phase-shifter network.
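To make the beamspace compression concrete, here is a toy sketch: a DFT across a uniform linear array maps element space to beamspace, the strongest beams are selected, and a crude sign chooser stands in for the paper's branch-and-bound/greedy one-bit phase-shifter optimization. All names and sizes are illustrative.

```python
import numpy as np

def beamspace_select(Y, n_beams):
    """Y is (antennas x snapshots). A unitary DFT across the array gives the
    beamspace signal; keeping only the strongest beams reduces the number
    of RF chains needed downstream."""
    n_ant = Y.shape[0]
    U = np.fft.fft(np.eye(n_ant), axis=0) / np.sqrt(n_ant)  # DFT beamforming matrix
    B = U.conj().T @ Y                                      # beamspace signal
    power = np.sum(np.abs(B) ** 2, axis=1)
    keep = np.argsort(power)[-n_beams:]
    return B[keep], keep

def one_bit_combine(b1, b2):
    """Combine two beams with a one-bit (sign-only) weight, keeping the
    choice that maximizes combined power -- a tiny stand-in for the
    sequential greedy scheme."""
    plus, minus = b1 + b2, b1 - b2
    return plus if np.sum(np.abs(plus) ** 2) >= np.sum(np.abs(minus) ** 2) else minus

rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 100)) + 1j * rng.standard_normal((64, 100))
B, idx = beamspace_select(Y, n_beams=16)
combined = one_bit_combine(B[0], B[1])
```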
Implementation of Electronic Checklists in an Oncology Medical Record: Initial Clinical Experience
Albuquerque, Kevin V.; Miller, Alexis A.; Roeske, John C.
2011-01-01
Purpose: The quality of any medical treatment depends on the accurate processing of multiple complex components of information, with proper delivery to the patient. This is true for radiation oncology, in which treatment delivery is as complex as a surgical procedure but more dependent on hardware and software technology. Uncorrected errors, even if small or infrequent, can result in catastrophic consequences for the patient. We developed electronic checklists (ECLs) within the oncology electronic medical record (EMR) and evaluated their use and report on our initial clinical experience. Methods: Using the Mosaiq EMR, we developed checklists within the clinical assessment section. These checklists are based on the process flow of information from one group to another within the clinic and enable the processing, confirmation, and documentation of relevant patient information before the delivery of radiation therapy. The clinical use of the ECL was documented by means of a customized report. Results: Use of ECL has reduced the number of times that physicians were called to the treatment unit. In particular, the ECL has ensured that therapists have a better understanding of the treatment plan before the initiation of treatment. An evaluation of ECL compliance showed that, with additional staff training, > 94% of the records were completed. Conclusion: The ECL can be used to ensure standardization of procedures and documentation that the pretreatment checks have been performed before patient treatment. We believe that the implementation of ECLs will improve patient safety and reduce the likelihood of treatment errors. PMID:22043184
Use of Field Programmable Gate Array Technology in Future Space Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Tate, Robert
2005-01-01
Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produces natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provide the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised via component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.
Using technology to prevent adverse drug events in the intensive care unit.
Hassan, Erkan; Badawi, Omar; Weber, Robert J; Cohen, Henry
2010-06-01
Critically ill patients are particularly susceptible to adverse drug events (ADEs) due to their rapidly changing and unstable physiology, complex therapeutic regimens, and large percentage of medications administered intravenously. There are a wide variety of technologies that can help prevent the points of failure commonly associated with ADEs (i.e., the five "Rights": right patient; right drug; right route; right dose; right frequency). These technologies are often categorized by their degree of complexity to design and engineer and the type of error they are designed to prevent. Focusing solely on the software and hardware design of technology may over- or underestimate the degree of difficulty to avoid ADEs at the bedside. Alternatively, we propose categorizing technological solutions by identifying the factors essential for success. The two major critical success factors are: 1) the degree of clinical assessment required by the clinician to appropriately evaluate and disposition the issue identified by a technology; and 2) the complexity associated with effective implementation. This classification provides a way of determining how ADE-preventing technologies in the intensive care unit can be successfully integrated into clinical practice. Although there are limited data on the effectiveness of many technologies in reducing ADEs, we will review the technologies currently available in the intensive care unit environment. We will also discuss critical success factors for implementation, common errors made during implementation, and the potential errors using these systems.
NASA Technical Reports Server (NTRS)
Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.
1992-01-01
Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components, and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
FPGA implementation of self organizing map with digital phase locked loops.
Hikawa, Hiroomi
2005-01-01
The self-organizing map (SOM) has found applicability in a wide range of application areas. Recently, new SOM hardware with phase-modulated pulse signals and digital phase-locked loops (DPLLs) has been proposed (Hikawa, 2005). The system uses the DPLL as a computing element since the operation of the DPLL is very similar to that of the SOM's computation. The system also uses square-waveform phase to hold the value of each input vector element. This paper discusses the hardware implementation of the DPLL SOM architecture. For effective hardware implementation, some components are redesigned to reduce the circuit size. The proposed SOM architecture is described in VHDL and implemented on a field programmable gate array (FPGA). Its feasibility is verified by experiments. Results show that the proposed SOM implemented on the FPGA has a good quantization capability, and its circuit size is very small.
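For reference, the computation the DPLL array emulates is the ordinary SOM training loop; a plain software version follows (grid size, learning-rate and neighborhood schedules are illustrative, not the values of the reported hardware).

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Standard SOM: find the best-matching unit (BMU) for each input and
    pull the BMU's neighborhood toward that input. The DPLL hardware
    replaces this distance/update arithmetic with phase comparisons."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.mgrid[0:h, 0:w]).astype(float)  # neuron grid positions
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            d = np.linalg.norm(weights - x, axis=2)       # distance to every neuron
            bmu = np.unravel_index(np.argmin(d), (h, w))
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)  # neighborhood update
            step += 1
    return weights

W = train_som(np.random.default_rng(1).random((200, 3)))
```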
Automated installation methods for photovoltaic arrays
NASA Astrophysics Data System (ADS)
Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.
1982-11-01
Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reducing these costs were investigated. Installation of the photovoltaic arrays covers all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding, and installation of the panels) and concluding with termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary hardware designs for the standard installation method, the automated/mechanized method, and a mix of the two were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development, and fabrication of new installation hardware were generated.
Wearable and low-stress ambulatory blood pressure monitoring technology for hypertension diagnosis.
Altintas, Ersin; Takoh, Kimiyasu; Ohno, Yuji; Abe, Katsumi; Akagawa, Takeshi; Ariyama, Tetsuri; Kubo, Masahiro; Tsuda, Kenichiro; Tochikubo, Osamu
2015-01-01
We propose a highly wearable, upper-arm type, oscillometric blood pressure monitoring technology with low stress. The low stress is realized through new developments in both the hardware and software design. In the hardware design, the conventional armband (cuff) is almost halved in volume thanks to a flexible plastic core and a liquid bag that enhance fit and pressure uniformity over the arm. The reduced air-bag volume allows a smaller motor pump and battery, leading to a thinner, more compact, and more wearable unified device. In the software design, a new prediction algorithm makes it possible to apply less stress (and less pain) to the patient's arm. Proof-of-concept experiments on volunteers show high accuracy for both technologies. This paper mainly introduces the hardware developments. The system is promising for less painful and less stressful 24-hour blood pressure monitoring in hypertension management and related healthcare solutions.
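The abstract does not detail the prediction algorithm; for orientation, below is the classic fixed-ratio oscillometric estimate that devices of this type commonly build on. The 0.55/0.85 ratios and the assumption of a monotonically deflating cuff are illustrative, not the paper's algorithm.

```python
import numpy as np

def fixed_ratio_bp(cuff_pressure, oscillation_env, sys_ratio=0.55, dia_ratio=0.85):
    """Fixed-ratio oscillometric estimate over a deflation sweep:
    MAP is taken at the oscillation-envelope peak; SBP/DBP where the
    envelope falls to fixed fractions of the peak on either side."""
    peak = int(np.argmax(oscillation_env))
    map_ = cuff_pressure[peak]                 # mean arterial pressure
    amax = oscillation_env[peak]
    # Systolic: search toward higher pressures (earlier in deflation).
    sbp = next((cuff_pressure[i] for i in range(peak, -1, -1)
                if oscillation_env[i] <= sys_ratio * amax), cuff_pressure[0])
    # Diastolic: search toward lower pressures (later in deflation).
    dbp = next((cuff_pressure[i] for i in range(peak, len(cuff_pressure))
                if oscillation_env[i] <= dia_ratio * amax), cuff_pressure[-1])
    return sbp, map_, dbp
```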
Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian
2014-01-01
This paper presents the design and evaluation of a hardware circuit for electronic stethoscopes with heart-sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart-sound components in the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath-sound processing. We believe that such an implementation is unprecedented and a crucial step toward a truly useful, standalone medical device for outpatient clinic settings. The implementation evaluation on an Altera Cyclone II-EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were performed on a DE2-70 development board using recorded body signals obtained from online medical archives. Clear suppression of heart sounds was observed in our experiments from both the frequency-domain and time-domain perspectives.
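For reference, the adaptive line enhancer is essentially a delayed-input LMS predictor: the adaptive filter learns the correlated, quasi-periodic component (heart sounds), and the prediction error retains the broadband component (breath sounds). Below is a minimal floating-point sketch; the delay, tap count, and step size are assumed values, and an FPGA realization would use fixed-point arithmetic.

```python
import numpy as np

def adaptive_line_enhancer(x, delay=32, taps=64, mu=1e-3):
    """LMS-based ALE: predict x[i] from a delayed tap vector; the
    residual e suppresses the predictable (heart-sound) component."""
    n = len(x)
    w = np.zeros(taps)                # adaptive FIR weights
    y = np.zeros(n)                   # predicted narrowband component
    e = np.zeros(n)                   # enhancer output (broadband residual)
    for i in range(delay + taps, n):
        u = x[i - delay - taps:i - delay][::-1]  # delayed tap vector
        y[i] = w @ u                  # linear prediction
        e[i] = x[i] - y[i]            # prediction error = breath sounds
        w += 2 * mu * e[i] * u        # LMS weight update
    return e, y
```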
NASA Astrophysics Data System (ADS)
Fitzgerald, Ryan; Karanassios, Vassili
2017-05-01
There are many applications requiring chemical analysis in the field and analytical results in (near) real-time. For example, when accidental spills occur. In others, collecting samples in the field followed by analysis in a lab increases costs and introduces time-delays. In such cases, "bring part of the lab to the sample" would be ideal. Toward this ideal (and to further reduce size and weight), we developed a relatively inexpensive, battery-operated, wireless data acquisition hardware system around an Arduino nano micro-controller and a 16-bit ADC (Analog-to- Digital Converter) with a max sampling rate of 860 samples/s. The hardware communicates the acquired data using low-power Bluetooth. Software for data acquisition and data display was written in Python. Potential ways of making the hardware-software approach described here a part of the Internet-of-Things (IoT) are presented.
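On the host side, an acquisition loop in the paper's language of choice (Python) can be sketched as below. The serial port name, baud rate, line-oriented framing, and ADC full-scale values are assumptions; the abstract does not specify the wire format.

```python
import serial  # pyserial; a Bluetooth SPP link typically appears as a serial port

PORT = "/dev/rfcomm0"     # hypothetical port name; depends on OS and pairing
VREF, FSR = 2.048, 32768  # assumed 16-bit ADC full-scale range (device-specific)

def acquire(n_samples=860):
    """Read newline-terminated raw ADC counts and convert them to volts."""
    with serial.Serial(PORT, 115200, timeout=1) as link:
        samples = []
        while len(samples) < n_samples:
            line = link.readline().strip()   # one ASCII count per line (assumed)
            if line:
                samples.append(int(line) * VREF / FSR)
        return samples
```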
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods that enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of the extensive mathematical modeling that is common in other parts of flight system engineering. Formal methods research shows that, by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.
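To illustrate in the small what discrete modeling of digital behavior can offer, the toy below exhaustively checks an invariant over every state of a discrete model, something continuous mathematics cannot do exactly for digital systems. It is a toy illustration, not a production formal-methods tool.

```python
def check_invariant(step, states, invariant):
    """Exhaustive verification over a finite discrete-state model: the
    invariant must hold after stepping from every reachable state."""
    return all(invariant(step(s)) for s in states)

# Toy digital system: a modulo-4 counter; invariant: the count stays in range.
states = range(4)
step = lambda s: (s + 1) % 4
print(check_invariant(step, states, lambda s: 0 <= s < 4))  # True
```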
Implementing real-time robotic systems using CHIMERA II
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1990-01-01
A description is given of the CHIMERA II programming environment and operating system, which were developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, so application development tends to be a time-consuming task. The CHIMERA II environment is designed to reduce development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations based on one or more VME backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel that supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.
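The two scheduling disciplines the kernel offers can be contrasted in a few lines. This is a sketch of the selection rule only, with hypothetical task names and fields, not the CHIMERA II kernel API.

```python
def pick_next(ready, policy="deadline"):
    """Select the next ready task: earliest deadline first, or highest
    (numerically greatest) priority first."""
    key = (lambda t: t["deadline"]) if policy == "deadline" else (lambda t: -t["priority"])
    return min(ready, key=key)

ready = [{"name": "servo",  "deadline": 5, "priority": 9},
         {"name": "vision", "deadline": 3, "priority": 4}]
print(pick_next(ready)["name"])              # vision (earlier deadline)
print(pick_next(ready, "priority")["name"])  # servo (higher priority)
```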
Design Tools for Reconfigurable Hardware in Orbit (RHinO)
NASA Technical Reports Server (NTRS)
French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian
2004-01-01
The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space-effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions to the tool suite will also allow evolvable-algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of reconfigurable hardware in orbit via an integrated design tool suite, aiming to reduce the risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.
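A common SEU-mitigation technique that such tools insert automatically is triple modular redundancy (TMR); the bitwise majority voter below illustrates the idea in software form, whereas the FPGA tools would replicate logic and insert voters in the netlist.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies: each output bit
    is 1 when at least two of the three input bits are 1, masking a
    single-event upset in any one copy."""
    return (a & b) | (a & c) | (b & c)

# A flipped bit in one copy is outvoted by the other two:
assert tmr_vote(0b1011, 0b1011, 0b0011) == 0b1011
```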