A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
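To make the layered-compilation idea concrete, the following is a minimal, hypothetical sketch (not the authors' tool chain): a high-level operation is rewritten by decomposition rules until only gates in an assumed hardware-native set remain, which is the essence of lowering through the stack's abstraction layers.
```python
# Minimal sketch of layered lowering in a quantum compiler (hypothetical names,
# not the paper's software architecture).  High-level operations are rewritten
# rule by rule until only gates in the assumed native set remain.

NATIVE_GATES = {"H", "T", "Tdg", "CNOT"}          # assumed backend gate set

def decompose_toffoli(q0, q1, q2):
    """Standard textbook Toffoli decomposition into H, T, Tdg and CNOT."""
    return [("H", (q2,)), ("CNOT", (q1, q2)), ("Tdg", (q2,)),
            ("CNOT", (q0, q2)), ("T", (q2,)), ("CNOT", (q1, q2)),
            ("Tdg", (q2,)), ("CNOT", (q0, q2)), ("T", (q1,)), ("T", (q2,)),
            ("H", (q2,)), ("CNOT", (q0, q1)), ("T", (q0,)), ("Tdg", (q1,)),
            ("CNOT", (q0, q1))]

RULES = {"TOFFOLI": decompose_toffoli}             # rewrite-rule registry

def lower(circuit):
    """One compilation layer: replace non-native gates via rewrite rules."""
    out = []
    for gate, qubits in circuit:
        if gate in NATIVE_GATES:
            out.append((gate, qubits))
        else:
            out.extend(lower(RULES[gate](*qubits)))  # recurse until native
    return out

print(lower([("TOFFOLI", (0, 1, 2))]))
```
A real flow would add further layers (routing to the hardware's connectivity, optimization passes, scheduling), each consuming the output of the layer above it.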
Total System Design (TSD) Methodology Assessment.
1983-01-01
hardware implementation. Author: Martin-Marietta Aerospace. Title: Total System Design Methodology. Source: Martin-Marietta Technical Report MCR-79-646... systematic, rational approach to computer systems design is needed. Martin-Marietta has produced a Total System Design Methodology to support such design... gathering and ordering. The purpose of the paper is to document the existing TSD methodology at Martin-Marietta, describe the supporting tools, and
Onboard FPGA-based SAR processing for future spaceborne systems
NASA Technical Reports Server (NTRS)
Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce;
2004-01-01
We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.
Hardware proofs using EHDM and the RSRE verification methodology
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Sjogren, Jon A.
1988-01-01
Examined is a methodology for hardware verification developed by the Royal Signals and Radar Establishment (RSRE) in the context of SRI International's Enhanced Hierarchical Design Methodology (EHDM) specification/verification system. The methodology utilizes a four-level specification hierarchy with the following levels: functional level, finite automata model, block model, and circuit level. The properties of a level are proved as theorems in the level below it. This methodology is applied to a 6-bit counter problem and is critically examined. The specifications are written in EHDM's specification language, Extended Special, and recommendations are made for improving both the RSRE methodology and the EHDM system.
Lab at Home: Hardware Kits for a Digital Design Lab
ERIC Educational Resources Information Center
Oliver, J. P.; Haim, F.
2009-01-01
An innovative laboratory methodology for an introductory digital design course is presented. Instead of having traditional lab experiences, where students have to come to school classrooms, a "lab at home" concept is proposed. Students perform real experiments in their own homes, using hardware kits specially developed for this purpose. They…
Accelerating a MPEG-4 video decoder through custom software/hardware co-design
NASA Astrophysics Data System (ADS)
Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II, and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were used previously over the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition developed here. This research is part of the ARTEMI project and its main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as target technology and the design of multimedia systems for broadcasting networks as reference application.
Space Biology Initiative. Trade Studies, volume 2
NASA Technical Reports Server (NTRS)
1989-01-01
The six studies which are the subjects of this report are entitled: Design Modularity and Commonality; Modification of Existing Hardware (COTS) vs. New Hardware Build Cost Analysis; Automation Cost vs. Crew Utilization; Hardware Miniaturization versus Cost; Space Station Freedom/Spacelab Modules Compatibility vs. Cost; and Prototype Utilization in the Development of Space Hardware. The product of these six studies was intended to provide a knowledge base and methodology that enables equipment produced for the Space Biology Initiative program to meet specific design and functional requirements in the most efficient and cost effective form consistent with overall mission integration parameters. Each study promulgates rules of thumb, formulas, and matrices that serve as a handbook for the use and guidance of designers and engineers in design, development, and procurement of Space Biology Initiative (SBI) hardware and software.
Space Biology Initiative. Trade Studies, volume 1
NASA Technical Reports Server (NTRS)
1989-01-01
The six studies which are addressed are entitled: Design Modularity and Commonality; Modification of Existing Hardware (COTS) vs. New Hardware Build Cost Analysis; Automation Cost vs. Crew Utilization; Hardware Miniaturization versus Cost; Space Station Freedom/Spacelab Modules Compatibility vs. Cost; and Prototype Utilization in the Development of Space Hardware. The product of these six studies was intended to provide a knowledge base and methodology that enables equipment produced for the Space Biology Initiative program to meet specific design and functional requirements in the most efficient and cost effective form consistent with overall mission integration parameters. Each study promulgates rules of thumb, formulas, and matrices that serve as a handbook for the use and guidance of designers and engineers in design, development, and procurement of Space Biology Initiative (SBI) hardware and software.
An overview of key technology thrusts at Bell Helicopter Textron
NASA Technical Reports Server (NTRS)
Harse, James H.; Yen, Jing G.; Taylor, Rodney S.
1988-01-01
Insight is provided into several key technologies at Bell. Specific topics include the results of ongoing research and development in advanced rotors, methodology development, and new configurations. The discussion on advanced rotors highlights developments on the composite, bearingless rotor, including the development and testing of full scale flight hardware as well as some of the design support analyses and verification testing. The discussion on methodology development concentrates on analytical development in aeromechanics, including correlation studies and design application. The discussion on new configurations presents the results of some advanced configuration studies, including hardware development.
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze... allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with... There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP
A Systematic Software, Firmware, and Hardware Codesign Methodology for Digital Signal Processing
2014-03-01
Table 25. Possible optimal leaf-nodes to some design patterns for embedded system design... Software and hardware partitioning is a very difficult challenge in the field of
DRFM Cordic Processor and Sea Clutter Modeling for Enhancing Structured False Target Synthesis
2017-09-01
to achieve accuracy at 5.625°. The resulting design was implemented using the Verilog hardware description language. The second investigation concerns generating sea clutter to impose on the false target...
Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cale, James; Johnson, Brian; Dall'Anese, Emiliano
Here, this paper introduces a potential approach for mitigating the effects of communication delays between multiple, closed-loop hardware-in-the-loop experiments which are virtually connected, yet physically separated. The method consists of an analytical method for the compensation of communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.
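As a rough illustration of the delay-compensation idea (shown here as a Smith-predictor-style correction with an assumed first-order model, not the observer design of the paper), one side of a remotely coupled experiment can run a local undelayed model and correct it with the delayed measurement arriving over the network:
```python
# Sketch only: first-order model x+ = a*x + b*u is assumed; delay_steps is the
# (known, constant) network delay in simulation steps.
import collections

class DelayCompensator:
    def __init__(self, a, b, delay_steps):
        self.a, self.b = a, b
        self.x_model = 0.0
        # queue of past model outputs, prefilled so popleft() yields a value
        # delayed by exactly delay_steps steps
        self.model_history = collections.deque([0.0] * delay_steps)

    def step(self, u, y_delayed):
        """u: local control input; y_delayed: measurement received over the link."""
        self.x_model = self.a * self.x_model + self.b * u    # undelayed local model
        y_model_delayed = self.model_history.popleft()       # model output as seen delay_steps ago
        self.model_history.append(self.x_model)
        # Smith-predictor-style estimate: undelayed model + delayed mismatch
        return self.x_model + (y_delayed - y_model_delayed)

comp = DelayCompensator(a=0.95, b=0.05, delay_steps=10)
estimate = comp.step(u=1.0, y_delayed=0.0)   # called once per simulation step
```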
Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments
Cale, James; Johnson, Brian; Dall'Anese, Emiliano; ...
2018-03-30
Here, this paper introduces a potential approach for mitigating the effects of communication delays between multiple, closed-loop hardware-in-the-loop experiments which are virtually connected, yet physically separated. The method consists of an analytical method for the compensation of communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.
Electronic Design Automation: Integrating the Design and Manufacturing Functions
NASA Technical Reports Server (NTRS)
Bachnak, Rafic; Salkowski, Charles
1997-01-01
As the complexity of electronic systems grows, the traditional design practice, a sequential process, is replaced by concurrent design methodologies. A major advantage of concurrent design is that the feedback from software and manufacturing engineers can be easily incorporated into the design. The implementation of concurrent engineering methodologies is greatly facilitated by employing the latest Electronic Design Automation (EDA) tools. These tools offer integrated simulation of the electrical, mechanical, and manufacturing functions and support virtual prototyping, rapid prototyping, and hardware-software co-design. This report presents recommendations for enhancing the electronic design and manufacturing capabilities and procedures at JSC based on a concurrent design methodology that employs EDA tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ainsworth, Nathan; Hariri, Ali; Prabakar, Kumaraguru
Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation is then shown by experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabakar, Kumaraguru; Ainsworth, Nathan; Pratt, Annabelle
Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation is then shown by experiment.
MONTAGE: A Methodology for Designing Composable End-to-End Secure Distributed Systems
2012-08-01
Formal Model of Loc Separation; Static Partitions... Next, we derive five requirements (called Loc Separation, Implicit Parameter Separation, Error Signaling Separation, Conf Separation, and Next Call... hypervisors and hardware) and a real cloud (with shared hypervisors and hardware) that satisfies these requirements. Finally we study Loc Separation
Modular hardware synthesis using an HDL. [Hardware Description Language
NASA Technical Reports Server (NTRS)
Covington, J. A.; Shiva, S. G.
1981-01-01
Although hardware description languages (HDL) are becoming more and more necessary to automated design systems, their application is complicated due to the difficulty in translating the HDL description into an implementable format, nonfamiliarity of hardware designers with high-level language programming, nonuniform design methodologies, and the time and costs involved in transferring HDL design software. Digital design language (DDL) suffers from all of the above problems and in addition can only be synthesized on a complete system and not on its subparts, making it unsuitable for synthesis using standard modules or prefabricated chips such as those required in LSI or VLSI circuits. The present paper presents a method by which the DDL translator can be made to generate modular equations that will allow the system to be synthesized as an interconnection of lower-level modules. The method involves the introduction of a new language construct called a Module which provides for the separate translation of all equations bounded by it.
Composite Structures Damage Tolerance Analysis Methodologies
NASA Technical Reports Server (NTRS)
Chang, James B.; Goyal, Vinay K.; Klug, John C.; Rome, Jacob I.
2012-01-01
This report presents the results of a literature review as part of the development of composite hardware fracture control guidelines funded by the NASA Engineering and Safety Center (NESC) under contract NNL04AA09B. The objectives of the overall development tasks are to provide a broad base of information and data to the designers, analysts, and testing personnel who are engaged in space flight hardware production.
Framework for Computer Assisted Instruction Courseware: A Case Study.
ERIC Educational Resources Information Center
Betlach, Judith A.
1987-01-01
Systematically investigates, defines, and organizes variables related to production of internally designed and implemented computer assisted instruction (CAI) courseware: special needs of users; costs; identification and definition of realistic training needs; CAI definition and design methodology; hardware and software requirements; and general…
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
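A toy software analogue of that compare-at-the-end idea is sketched below; the workload, its length, and the checksum are illustrative assumptions rather than the patented method:
```python
# Illustrative sketch: run a deterministic, compute-heavy workload intended to
# keep the processor hot, then compare its digest against a reference computed
# earlier on known-good hardware.  A mismatch anywhere during the run shows up
# in the final comparison.
import hashlib

def stress_workload(iterations: int, seed: int = 12345) -> str:
    """Deterministic floating-point/integer mix meant to keep ALUs and FPUs busy."""
    acc_f, acc_i = float(seed), seed
    for i in range(1, iterations + 1):
        acc_f = (acc_f * 1.0000001 + i) % 1e9                               # FP pressure
        acc_i = (acc_i * 6364136223846793005 + 1442695040888963407) & (2**64 - 1)  # integer pressure
    return hashlib.sha256(f"{acc_f:.6f}:{acc_i}".encode()).hexdigest()

def detect_hardware_error(iterations: int, reference_digest: str) -> bool:
    """True if this run's digest differs from the known-good reference."""
    return stress_workload(iterations) != reference_digest

reference = stress_workload(1_000_000)             # golden run, assumed error-free
print("error detected:", detect_hardware_error(1_000_000, reference))
```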
Design and implementation of the tree-based fuzzy logic controller.
Liu, B D; Huang, C Y
1997-01-01
In this paper, a tree-based approach is proposed to design the fuzzy logic controller. Based on the proposed methodology, the fuzzy logic controller has the following merits: the fuzzy control rule can be extracted automatically from the input-output data of the system and the extraction process can be done in one pass; owing to the fuzzy tree inference structure, the search spaces of the fuzzy inference process are largely reduced; the operation of the inference process can be simplified as a one-dimensional matrix operation because of the fuzzy tree approach; and the controller has regular and modular properties, so it is easy to implement in hardware. Furthermore, the proposed fuzzy tree approach has been applied to the design of a color reproduction system to verify the proposed methodology. The color reproduction system is mainly used to obtain a color image through the printer that is identical to the original one. In addition to the software simulation, an FPGA is used to implement the prototype hardware system for real-time application. Experimental results show that the effect of color correction is quite good and that the prototype hardware system can operate correctly at a 30 MHz clock rate.
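To make the "one-dimensional matrix operation" remark concrete, here is a generic single-input fuzzy inference step written as two parallel 1-D tables (triangular antecedents, singleton consequents); this is ordinary table-driven fuzzy control, not the paper's tree-structured rule extraction:
```python
# Generic single-input fuzzy inference sketch (illustrative only).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# rule table as two 1-D arrays: antecedent parameters and consequent singletons
ANTECEDENTS = [(-1.0, -0.5, 0.0), (-0.5, 0.0, 0.5), (0.0, 0.5, 1.0)]
CONSEQUENTS = [0.8, 0.0, -0.8]     # control output associated with each rule

def infer(error):
    weights = [tri(error, *params) for params in ANTECEDENTS]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, CONSEQUENTS)) / total if total else 0.0

print(infer(0.2))                  # small positive error -> small negative correction
```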
System-level protection and hardware Trojan detection using weighted voting.
Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I
2014-07-01
The problem of hardware Trojans is becoming more serious, especially with the widespread use of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research is performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially since there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead when compared with a simple voting technique achieving the same degree of security.
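The voting idea can be sketched in a few lines; the weight-update rule below (reward agreement, penalize disagreement) is an illustrative assumption, not the paper's exact trust-building scheme:
```python
# Minimal sketch of weighted voting over redundant, untrusted IP-core outputs.
from collections import Counter

def weighted_vote(outputs, weights):
    """Pick the output value with the largest total weight behind it."""
    tally = Counter()
    for out, w in zip(outputs, weights):
        tally[out] += w
    return tally.most_common(1)[0][0]

def update_trust(outputs, weights, winner, reward=0.1, penalty=0.5):
    """Cores that agreed with the consensus gain trust; dissenters lose it."""
    return [w + reward if out == winner else max(0.0, w - penalty)
            for out, w in zip(outputs, weights)]

weights = [1.0, 1.0, 1.0]                 # three independent implementations
for outputs in [(0xA5, 0xA5, 0xA5), (0x3C, 0x3C, 0xFF)]:   # second vector: one Trojaned output
    winner = weighted_vote(outputs, weights)
    weights = update_trust(outputs, weights, winner)
    print(hex(winner), weights)
```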
Versatile fluid-mixing device for cell and tissue microgravity research applications.
Wilfinger, W W; Baker, C S; Kunze, E L; Phillips, A T; Hammerstedt, R H
1996-01-01
Microgravity life-science research requires hardware that can be easily adapted to a variety of experimental designs and working environments. The Biomodule is a patented, computer-controlled fluid-mixing device that can accommodate these diverse requirements. A typical shuttle payload contains eight Biomodules with a total of 64 samples, a sealed containment vessel, and a NASA refrigeration-incubation module. Each Biomodule contains eight gas-permeable Silastic T tubes that are partitioned into three fluid-filled compartments. The fluids can be mixed at any user-specified time. Multiple investigators and complex experimental designs can be easily accommodated with the hardware. During flight, the Biomodules are sealed in a vessel that provides two levels of containment (liquids and gas) and a stable, investigator-controlled experimental environment that includes regulated temperature, internal pressure, humidity, and gas composition. A cell microencapsulation methodology has also been developed to streamline launch-site sample manipulation and accelerate postflight analysis through the use of fluorescent-activated cell sorting. The Biomodule flight hardware and analytical cell encapsulation methodology are ideally suited for temporal, qualitative, or quantitative life-science investigations.
A low power biomedical signal processor ASIC based on hardware software codesign.
Nie, Z D; Wang, L; Chen, W G; Zhang, T; Zhang, Y T
2009-01-01
A low power biomedical digital signal processor ASIC based on hardware and software codesign methodology was presented in this paper. The codesign methodology was used to achieve higher system performance and design flexibility. The hardware implementation included a low power 32-bit RISC CPU ARM7TDMI, a low power AHB-compatible bus, and a scalable digital co-processor that was optimized for low power Fast Fourier Transform (FFT) calculations. The co-processor could be scaled for 8-point, 16-point and 32-point FFTs, taking approximately 50, 100 and 150 clock cycles, respectively. The complete design was intensively simulated using the ARM DSM model and was emulated on the ARM Versatile platform before being committed to silicon. The multi-million-gate ASIC was fabricated using SMIC 0.18 microm mixed-signal CMOS 1P6M technology. The die area measures 5,000 microm x 2,350 microm. The power consumption was approximately 3.6 mW at 1.8 V power supply and 1 MHz clock rate. The power consumption for FFT calculations was less than 1.5% compared with the conventional embedded software-based solution.
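For context on the scalable FFT co-processor: the radix-2 structure is what lets an 8/16/32-point design share the same datapath. The sketch below is a plain software reference model of that computation (the kind of golden model RTL can be checked against), not the fixed-point hardware mapping or its cycle counts:
```python
# Iterative radix-2 decimation-in-time FFT reference model.
import cmath

def fft_radix2(x):
    """len(x) must be a power of two (8, 16, 32, ...)."""
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    x = list(x)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # butterfly stages
    size = 2
    while size <= n:
        w_m = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                t = w * x[start + k + size // 2]
                u = x[start + k]
                x[start + k] = u + t
                x[start + k + size // 2] = u - t
                w *= w_m
        size *= 2
    return x

print(fft_radix2([1, 0, 0, 0, 0, 0, 0, 0]))   # impulse -> all-ones spectrum
```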
NASA Astrophysics Data System (ADS)
Brereton, Margot Felicity
A series of short engineering exercises and design projects was created to help students learn to apply abstract knowledge to physical experiences with hardware. The exercises involved designing machines from kits of materials and dissecting and analyzing familiar household products. Students worked in teams. During the activities students brought their knowledge of engineering fundamentals to bear. Videotape analysis was used to identify and characterize the ways in which hardware contributed to learning fundamental concepts. Structural and qualitative analyses of videotaped activities were undertaken. Structural analysis involved counting the references to theory and hardware and the extent of interleaving of references in activity. The analysis found that there was much more discussion linking fundamental concepts to hardware in some activities than in others. The analysis showed that the interleaving of references to theory and hardware in activity is observable and quantifiable. Qualitative analysis was used to investigate the dialog linking concepts and hardware. Students were found to advance their designs and their understanding of engineering fundamentals through a negotiation process in which they pitted abstract concepts against hardware behavior. Through this process students sorted out theoretical assumptions and causal relations. In addition they discovered design assumptions, functional connections and physical embodiments of abstract concepts in hardware, developing a repertoire of familiar hardware components and machines. Hardware was found to be integral to learning, affecting the course of inquiry and the dynamics of group interaction. Several case studies are presented to illustrate the processes at work. The research illustrates the importance of working across the boundary between abstractions and experiences with hardware in order to learn engineering and physical sciences. The research findings are: (a) the negotiation process by which students discover fundamental concepts in hardware (and three central causes of negotiation breakdown); (b) a characterization of the ways that material systems contribute to learning activities, (the seven roles of hardware in learning); (c) the characteristics of activities that support discovering fundamental concepts in hardware (plus several engineering exercises); (d) a research methodology to examine how students learn in practice.
2018-01-01
The objective of this effort was to: (a) develop novel and fundamental methodologies for data representation using hardware-based spike... This effort is a critical part of an overall program to develop novel and fundamental methodologies for data... to fabrication a dynamic-reservoir circuit that utilizes sensory encoding methodologies similar to those employed in biological brains. Inspired
Silicon compilation: From the circuit to the system
NASA Astrophysics Data System (ADS)
Obrien, Keven
The methodology used for the compilation of silicon from a behavioral level to a system level is presented. The aim was to link the heretofore unrelated areas of high level synthesis and system level design. This link will play an important role in the development of future design automation tools as it will allow hardware/software co-designs to be synthesized. A design methodology is presented that allows, through the use of an intermediate representation, SOLAR, a System level Design Language (SDL) to be combined with a Hardware Description Language (VHDL). Two main steps are required in order to transform this specification into a synthesizable one. Firstly, a system level synthesis step including partitioning and communication synthesis is required in order to split the model into a set of interconnected subsystems, each of which will be processed by a high level synthesis tool. For this latter step AMICAL is used, which provides powerful scheduling techniques that accept very abstract descriptions of control flow dominated circuits as input and generate interconnected RTL blocks that may feed existing logic-level synthesis tools.
Design and Implementation of Embedded Computer Vision Systems Based on Particle Filters
2010-01-01
for hardware/software implementation of multi-dimensional particle filter applications and we explore this in the third application which is a 3D... and hence multiprocessor implementation of particle filters is an important option to examine. A significant body of work exists on optimizing generic
The Art of Space Flight Exercise Hardware: Design and Implementation
NASA Technical Reports Server (NTRS)
Beyene, Nahom M.
2004-01-01
The design of space flight exercise hardware depends on experience with crew health maintenance in a microgravity environment, history in development of flight-quality exercise hardware, and a foundation for certifying proper project management and design methodology. Developed over the past 40 years, the expertise in designing exercise countermeasures hardware at the Johnson Space Center stems from these three aspects of design. The medical community has steadily pursued an understanding of physiological changes in humans in a weightless environment and methods of counteracting negative effects on the cardiovascular and musculoskeletal system. The effects of weightlessness extend to the pulmonary and neurovestibular system as well, with conditions ranging from motion sickness to loss of bone density. Results have shown losses in water weight and muscle mass in antigravity muscle groups. With the support of university-based research groups and partner space agencies, NASA has identified exercise to be the primary countermeasure for long-duration space flight. The history of exercise hardware began during the Apollo Era and leads directly to the present hardware on the International Space Station. Under the classifications of aerobic and resistive exercise, there is a clear line of development from the early devices to the countermeasures hardware used today. In support of all engineering projects, the engineering directorate has created a structured framework for project management. Engineers have identified standards and "best practices" to promote efficient and elegant design of space exercise hardware. The quality of space exercise hardware depends on how well hardware requirements are justified by exercise performance guidelines and crew health indicators. When considering the microgravity environment of the device, designers must consider performance of hardware separately from the combined human-in-hardware system. Astronauts are the caretakers of the hardware while it is deployed and conduct all sanitization, calibration, and maintenance for the devices. Thus, hardware designs must account for these issues with a goal of minimizing crew time on orbit required to complete these tasks. In the future, humans will venture to Mars and exercise countermeasures will play a critical role in allowing us to continue in our spirit of exploration. NASA will benefit from further experimentation on Earth, through the International Space Station, and with advanced biomechanical models to quantify how each device counteracts specific symptoms of weightlessness. With the continued support of international space agencies and the academic research community, we will usher in the next frontier in human space exploration.
Software Design Improvements. Part 1; Software Benefits and Limitations
NASA Technical Reports Server (NTRS)
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
1997-01-01
Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines, and the number and type of applications are growing at an exponential rate. The size of individual programs has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The question is: how can this hardware and software be made more reliable? Also, how can software quality be improved? What methodology needs to be provided on large and small software products to improve the design, and how can software be verified?
NASA Technical Reports Server (NTRS)
Miller, Darcy
2000-01-01
Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "The debris that is left in or around flight hardware, where it could cause damage to that flight hardware," (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is not a single solution to help reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible, in the form of specialty hand tools, which secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools, while performing tasks with risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task that is representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers / managers and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the difference in performance of the three participant groups.
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via the Verilog HDL fixed-point algorithm and state machine control. According to continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. In the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that, for any given chaotic equation, it can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, iterative values right shifting and ceiling, and chaotic iterative sequence output, each of which corresponds to a single state under state machine control. Compared with the Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
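The fixed-point flavor of the flow (nonlinear function evaluation, iteration, and rescaling by right shifts) can be illustrated with a software model; the Q16 format and the logistic map below are assumptions for illustration, not the paper's equations or word lengths:
```python
# Bit-accurate software sketch of fixed-point chaotic iteration (illustrative).
FRAC_BITS = 16                      # Q16 fixed point: value = integer / 2**16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def logistic_step_fixed(x_fix: int, r_fix: int) -> int:
    """One iteration of x <- r*x*(1-x) entirely in integer (fixed-point) math."""
    one_minus_x = ONE - x_fix
    prod = (x_fix * one_minus_x) >> FRAC_BITS     # rescale after the multiply
    return (r_fix * prod) >> FRAC_BITS            # second rescale ("right shifting" step)

x = to_fixed(0.123456)
r = to_fixed(3.99)                                 # chaotic regime of the logistic map
for _ in range(5):
    x = logistic_step_fixed(x, r)
    print(x / ONE)
```
In an FPGA realization each of these steps would be one state of the controlling state machine, as the abstract describes.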
NASA Technical Reports Server (NTRS)
Campbell, B. H.
1974-01-01
A methodology developed for the balanced design of spacecraft subsystems, interrelating cost, performance, safety, and schedule considerations, was refined. The methodology consists of a two-step process: the first step is one of selecting all hardware designs which satisfy the given performance and safety requirements; the second step is one of estimating the cost and schedule required to design, build, and operate each spacecraft design. Using this methodology to develop a systems cost/performance model allows the user of such a model to establish specific designs and the related costs and schedule. The user is able to determine the sensitivity of design, costs, and schedules to changes in requirements. The resulting systems cost/performance model is described and implemented as a digital computer program.
System architectures for telerobotic research
NASA Technical Reports Server (NTRS)
Harrison, F. Wallace
1989-01-01
Several activities are performed related to the definition and creation of telerobotic systems. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities which are designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the VHSIC hardware description language with analog and mixed-signal extensions).
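As a generic example of the coupled-energy, branch-constitutive style of characterization described here (a parallel-plate electrostatic transducer, not taken from the paper), the electrical and mechanical branches couple through the displacement-dependent capacitance:
```latex
% Illustrative coupled-energy branch relations (parallel-plate electrostatic
% transducer with gap g, plate area A, permittivity \varepsilon):
\[
q = C(x)\,v = \frac{\varepsilon A}{g - x}\,v,
\qquad
F_{\mathrm{elec}} = \tfrac{1}{2}\,v^{2}\,\frac{\partial C}{\partial x}
                  = \frac{\varepsilon A\,v^{2}}{2\,(g - x)^{2}},
\qquad
m\ddot{x} + b\dot{x} + kx = F_{\mathrm{elec}},
\]
% which together form the kind of simultaneous differential--algebraic system
% a VHDL-AMS model solves.
```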
ALS rocket engine combustion devices design and demonstration
NASA Technical Reports Server (NTRS)
Arreguin, Steve
1989-01-01
Work performed during Phase one is summarized and the significant technical and programmatic accomplishments occurring during this period are documented. Besides a summary of the results, methodologies, trade studies, design, fabrication, and hardware conditions, the following are included: the evolving Maintainability Plan, Reliability Program Plan, Failure Summary and Analysis Report, and the Failure Mode and Effect Analysis.
Graphical Requirements for Force Level Planning. Volume 2
1991-09-01
technology review includes graphics algorithms, computer hardware, computer software, and design methodologies. The technology can either exist today or... level graphics language. User Interface Design Tools: as user interfaces have become more sophisticated, they have become harder to develop.
Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation
NASA Technical Reports Server (NTRS)
Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress in using these two tools in tackling the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.
NASA Astrophysics Data System (ADS)
Weijers, Jan-Willem; Derudder, Veerle; Janssens, Sven; Petré, Frederik; Bourdoux, André
2006-12-01
To assess the performance of forthcoming 4th generation wireless local area networks, the algorithmic functionality is usually modelled using a high-level mathematical software package, for instance, Matlab. In order to validate the modelling assumptions against the real physical world, the high-level functional model needs to be translated into a prototype. A systematic system design methodology proves very valuable, since it avoids, or at least reduces, numerous design iterations. In this paper, we propose a novel Matlab-to-hardware design flow, which allows the algorithmic functionality to be mapped onto the target prototyping platform in a systematic and reproducible way. The proposed design flow is partly manual and partly tool assisted. It is shown that the proposed design flow allows the same testbench to be used throughout the whole design flow and avoids time-consuming and error-prone intermediate translation steps.
NASA Astrophysics Data System (ADS)
Casselman, Steve; Schewel, John
2002-07-01
Success in the marketplace may well depend upon the ability to upgrade and test hardware designs instantly around the world. An upgrade management strategy requires more than just the bitstream file, email, or a JTAG cable. A well-managed methodology, capable of transmitting bitstreams directly into targeted FPGAs over the network or internet, is an essential element for a successful FPGA-based product strategy. Virtual Computer Corporation's HOTMan Bitstream Management Environment combines a feature-rich cross-platform API with an Object Oriented Bitstream technique for Remote Upgrading of Hardware over the Internet.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
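The probabilistic-computing trick that cuts the arithmetic hardware can be illustrated in a few lines: if two values are encoded as random bitstreams, a single AND gate multiplies them. The stream length and unipolar encoding below are illustrative choices, not the paper's design:
```python
# Toy stochastic-computing multiplier: an AND gate over random bitstreams.
import random

def to_bitstream(p: float, length: int, rng: random.Random):
    """Unipolar stochastic encoding: P(bit == 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stochastic_multiply(a: float, b: float, length: int = 10_000, seed: int = 0):
    rng = random.Random(seed)
    sa = to_bitstream(a, length, rng)
    sb = to_bitstream(b, length, rng)
    anded = [x & y for x, y in zip(sa, sb)]        # the entire "multiplier"
    return sum(anded) / length                     # decode back to a probability

print(stochastic_multiply(0.6, 0.5))               # ~0.30; accuracy grows with stream length
```
The hardware cost of a multiplier thus collapses to one gate plus the bitstream generators, which is what makes dense reservoir layers affordable on an FPGA.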
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting
Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
An evaluation of the directed flow graph methodology
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Rajala, S. A.
1984-01-01
The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real-time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.
High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.
Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent
2016-08-01
Spiking neural networks (SNN) are the latest neural network generation that tries to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is more efficiently exploited. Liquid state machines (LSM) have arisen as a strategic technique to implement recurrent designs of SNN with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.
Contamination of planets by nonsterile flight hardware.
NASA Technical Reports Server (NTRS)
Wolfson, R. P.; Craven, C. W.
1971-01-01
The various factors about space missions and spacecraft involved in the study of nonsterile space flight hardware with respect to their effects on planetary quarantine are reviewed. It is shown that methodology currently exists to evaluate the various potential contamination sources and to take appropriate steps in the design of spacecraft hardware and mission parameters so that quarantine constraints are met. This work should be done for each program so that the latest knowledge pertaining to various biological questions is utilized, and so that the specific hardware designs of the program can be assessed. The general trend of specific recommendations includes: (1) biasing the launch trajectory away from the planet to assure against accidental impact of the spacecraft; (2) selecting planetary orbits that meet quarantine requirements - both for accidental impact and for minimizing contamination probabilities due to ejecta; and (3) manufacturing and handling spacecraft under cleanliness conditions assuring minimum bioload.
Problems related to the integration of fault tolerant aircraft electronic systems
NASA Technical Reports Server (NTRS)
Bannister, J. A.; Adlakha, V.; Triyedi, K.; Alspaugh, T. A., Jr.
1982-01-01
Problems related to the design of the hardware for an integrated aircraft electronic system are considered. Taxonomies of concurrent systems are reviewed and a new taxonomy is proposed. An informal methodology intended to identify feasible regions of the taxonomic design space is described. Specific tools are recommended for use in the methodology. Based on the methodology, a preliminary strawman integrated fault tolerant aircraft electronic system is proposed. Next, problems related to the programming and control of integrated aircraft electronic systems are discussed. Issues of system resource management, including the scheduling and allocation of real time periodic tasks in a multiprocessor environment, are treated in detail. The role of software design in integrated fault tolerant aircraft electronic systems is discussed. Conclusions and recommendations for further work are included.
Theorem Proving in Intel Hardware Design
NASA Technical Reports Server (NTRS)
O'Leary, John
2009-01-01
For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium(Registered TradeMark)4 and Core(TradeMark)i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, outline some future directions.
NASA Technical Reports Server (NTRS)
Jones, Corey; LaPha, Steven
2013-01-01
This presentation will focus on the modernization of design and engineering practices through the use of Model Based Definition methodology. By gathering important engineering data into one 3D digital data set, applying model annotations, and setting up model view states directly in the 3D CAD model, model-specific information can be published to Windchill and CreoView for use during the Design Review Process. This presentation will describe the methods that have been incorporated into the modeling.
A Program Manager’s Methodology for Developing Structured Design in Embedded Weapons Systems.
1983-12-01
the hardware selection. This premise has been reiterated and substantiated by numerous case studies performed in recent years, among them Barry ... measures, rules of thumb, and analysis techniques, this method with early development by De Marco is the basis for the Pressman design methodology... desired traits of a design based on the specifications generated, but does not include a procedure for realization of the design. Pressman (Ref. 5
Preloaded joint analysis methodology for space flight systems
NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1995-01-01
This report contains a compilation of some of the most basic equations governing simple preloaded joint systems and discusses the more common modes of failure associated with such hardware. It is intended to provide the mechanical designer with the tools necessary for designing a basic bolted joint. Although the information presented is intended to aid in the engineering of space flight structures, the fundamentals are equally applicable to other forms of mechanical design.
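Two of the most commonly cited relations for such joints, given here in generic textbook form (symbols and nut-factor treatment may differ from the report's notation), are the torque-preload estimate and the bolt load under an applied axial load:
```latex
% Preload developed by an installation torque T on a fastener of nominal
% diameter d with empirical nut factor K:
\[
P_0 = \frac{T}{K\,d}
\]
% Bolt load when an external axial load P is applied to the preloaded joint,
% with bolt stiffness k_b and clamped-member stiffness k_c:
\[
P_b = P_0 + \phi\,P,
\qquad
\phi = \frac{k_b}{k_b + k_c}
\]
```
Failure modes such as bolt yield, thread shear, and joint separation are then checked by comparing these loads against the relevant allowables.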
A design methodology for portable software on parallel computers
NASA Technical Reports Server (NTRS)
Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.
1993-01-01
This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks is specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.
Design and control of compliant tensegrity robots through simulation and hardware validation
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292
Elements of oxygen production systems using Martian atmosphere
NASA Technical Reports Server (NTRS)
Ash, R. L.; Huang, J.-K.; Johnson, P. B.; Sivertson, W. E., Jr.
1986-01-01
Hardware elements have been studied in terms of their applicability to Mars oxygen production systems. Various aspects of the system design are discussed and areas requiring further research are identified. Initial work on system reliability is discussed and a methodology for applying expert system technology to the oxygen processor is described.
HPC Programming on Intel Many-Integrated-Core Hardware with MAGMA Port to Xeon Phi
Dongarra, Jack; Gates, Mark; Haidar, Azzam; ...
2015-01-01
This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms for multicore with Intel Xeon Phi coprocessors. In particular, we consider algorithms for solving linear systems. Further, we give an overview of the MAGMA MIC library, an open source, high performance library, that incorporates the developments presented here and, more broadly, provides the DLA functionality equivalent to that of the popular LAPACK library while targeting heterogeneous architectures that feature a mix of multicore CPUs and coprocessors. The LAPACK-compliance simplifies the use of the MAGMA MIC library in applications, while providing them with portably performant DLA. High performance is obtained through the use of the high-performance BLAS, hardware-specific tuning, and a hybridization methodology whereby we split the algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware by minimizing data movements and mapping algorithmic requirements to the architectural strengths of the various heterogeneous hardware components. Our methodology and programming techniques are incorporated into the MAGMA MIC API, which abstracts the application developer from the specifics of the Xeon Phi architecture and is therefore applicable to algorithms beyond the scope of DLA.
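The granularity split described here (small, latency-bound panel work on the CPU; large, compute-bound trailing-matrix updates on the coprocessor) can be sketched with a blocked LU factorization; in this toy NumPy version both "sides" run on the host and pivoting is omitted (the matrix is made diagonally dominant), so only the task partitioning of the methodology is illustrated:
```python
# Toy blocked right-looking LU showing the CPU-panel / device-update split
# that hybrid DLA libraries use.
import numpy as np

def panel_factor_cpu(A, k, e):
    """Unblocked LU of the tall panel A[k:, k:e] (latency-bound, CPU-style task)."""
    for j in range(k, e):
        A[j+1:, j] /= A[j, j]
        A[j+1:, j+1:e] -= np.outer(A[j+1:, j], A[j, j+1:e])

def trailing_update_device(A, k, e):
    """Triangular solve for the U block row plus GEMM-style trailing update
    (compute-bound, coprocessor-style task)."""
    L_kk = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
    A[k:e, e:] = np.linalg.solve(L_kk, A[k:e, e:])
    A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]

def blocked_lu(A, nb=2):
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        panel_factor_cpu(A, k, e)              # small panel -> CPU
        if e < n:
            trailing_update_device(A, k, e)    # big update  -> accelerator
    return A

n = 6
rng = np.random.default_rng(0)
A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant test matrix
F = blocked_lu(A0, nb=2)
L, U = np.tril(F, -1) + np.eye(n), np.triu(F)
print(np.allclose(L @ U, A0))                      # True: the split reproduces the factorization
```
In the real library the two task types run concurrently on different devices, with the scheduler overlapping the panel for step k+1 with the trailing update of step k to hide data movement.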
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
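The unified transient/permanent fault treatment described above lends itself to a small worked example. The sketch below is not the paper's FTPP model; it is a minimal continuous-time Markov chain for a hypothetical triplex system, with assumed failure rates and coverage, solved with SciPy's matrix exponential to show how the hierarchical fault-tree/Markov style of analysis produces a failure probability over mission time.

```python
# Minimal sketch (not the paper's model): a small continuous-time Markov
# chain for a triplex system that masks transient faults by voting and
# reconfigures around permanent faults.  All rates and the coverage
# factor are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

lam_p = 1e-4   # permanent-fault rate per unit per hour (assumed)
c     = 0.99   # probability a permanent fault is correctly reconfigured

# States: 0 = three good units, 1 = two good units (after reconfiguration),
#         2 = system failure (absorbing).
Q = np.array([
    [-3 * lam_p,  3 * lam_p * c,  3 * lam_p * (1 - c)],
    [0.0,        -2 * lam_p,      2 * lam_p           ],
    [0.0,          0.0,           0.0                 ],
])

p0 = np.array([1.0, 0.0, 0.0])      # start with all units healthy
for t in (10.0, 100.0, 1000.0):     # mission times in hours
    p = p0 @ expm(Q * t)
    print(f"t = {t:6.0f} h   P(system failure) = {p[2]:.3e}")
```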
NASA Astrophysics Data System (ADS)
Kelly, Jamie S.; Bowman, Hiroshi C.; Rao, Vittal S.; Pottinger, Hardy J.
1997-06-01
Implementation issues represent an unfamiliar challenge to most control engineers, and many techniques for controller design ignore these issues outright. Consequently, the design of controllers for smart structural systems usually proceeds without regard for their eventual implementation, thus resulting either in serious performance degradation or in hardware requirements that squander power, complicate integration, and drive up cost. The level of integration assumed by the Smart Patch further exacerbates these difficulties, and any design inefficiency may render the realization of a single-package sensor-controller-actuator system infeasible. The goal of this research is to automate the controller implementation process and to relieve the design engineer of implementation concerns like quantization, computational efficiency, and device selection. We specifically target Field Programmable Gate Arrays (FPGAs) as our hardware platform because these devices are highly flexible, power efficient, and reprogrammable. The current study develops an automated implementation sequence that minimizes hardware requirements while maintaining controller performance. Beginning with a state space representation of the controller, the sequence automatically generates a configuration bitstream for a suitable FPGA implementation. MATLAB functions optimize and simulate the control algorithm before translating it into the VHSIC hardware description language. These functions improve power efficiency and simplify integration in the final implementation by performing a linear transformation that renders the controller computationally friendly. The transformation favors sparse matrices in order to reduce multiply operations and the hardware necessary to support them; simultaneously, the remaining matrix elements take on values that minimize limit cycles and parameter sensitivity. The proposed controller design methodology is implemented on a simple cantilever beam test structure using FPGA hardware. The experimental closed loop response is compared with that of an automated FPGA controller implementation. Finally, we explore the integration of FPGA based controllers into a multi-chip module, which we believe represents the next step towards the realization of the Smart Patch.
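To make the sparse-matrix transformation step concrete, here is a minimal sketch under stated assumptions: random controller matrices stand in for a real design, and a real Schur similarity transform (one possible choice, not necessarily the authors') produces a quasi-triangular state matrix with fewer nonzero multiplies while leaving the input/output behaviour unchanged.

```python
# Minimal sketch (assumed controller matrices, not the paper's tool chain):
# a similarity transformation to real Schur form reduces the number of
# nonzero multiplies in the state update while preserving I/O behaviour.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((6, 6))   # hypothetical controller state matrix
B = rng.standard_normal((6, 1))
C = rng.standard_normal((1, 6))

T, Z = schur(A, output='real')          # A = Z @ T @ Z.T with Z orthogonal
At, Bt, Ct = T, Z.T @ B, C @ Z          # equivalent, quasi-triangular realisation

def markov_parameters(A, B, C, terms=20):
    # C @ A**k @ B for k = 0..terms-1; identical for equivalent realisations
    x, seq = B.copy(), []
    for _ in range(terms):
        seq.append((C @ x).item())
        x = A @ x
    return np.array(seq)

diff = np.max(np.abs(markov_parameters(A, B, C) - markov_parameters(At, Bt, Ct)))
print("max deviation of the transformed controller:", diff)
print("nonzero entries in A :", np.count_nonzero(np.abs(A)  > 1e-12))
print("nonzero entries in A':", np.count_nonzero(np.abs(At) > 1e-12))
```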
Benchmarking hypercube hardware and software
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Reed, Daniel A.
1986-01-01
It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
An adaptable, low cost test-bed for unmanned vehicle systems research
NASA Astrophysics Data System (ADS)
Goppert, James M.
2011-12-01
An unmanned vehicle systems test-bed has been developed. The test-bed has been designed to accommodate hardware changes and various vehicle types and algorithms. The creation of this test-bed allows research teams to focus on algorithm development and employ a common well-tested experimental framework. The ArduPilotOne autopilot was developed to provide the necessary level of abstraction for multiple vehicle types. The autopilot was also designed to be highly integrated with the Mavlink protocol for Micro Air Vehicle (MAV) communication. Mavlink is the native protocol for QGroundControl, a MAV ground control program. Features were added to QGroundControl to accommodate outdoor usage. Next, the Mavsim toolbox was developed for Scicoslab to allow hardware-in-the-loop testing, control design and analysis, and estimation algorithm testing and verification. In order to obtain linear models of aircraft dynamics, the JSBSim flight dynamics engine was extended to use a probabilistic Nelder-Mead simplex method. The JSBSim aircraft dynamics were compared with wind-tunnel data collected. Finally, a structured methodology for successive loop closure control design is proposed. This methodology is demonstrated along with the rest of the test-bed tools on a quadrotor, a fixed wing RC plane, and a ground vehicle. Test results for the ground vehicle are presented.
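As an illustration of the linear-model extraction step, the following sketch fits a first-order pitch-rate model to synthetic response data with SciPy's Nelder-Mead simplex; the model, input, and noise level are assumptions and JSBSim itself is not involved.

```python
# Minimal sketch (synthetic data, not JSBSim): fit the parameters of a
# first-order linear pitch-rate model to a recorded response using the
# Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

dt = 0.02
t = np.arange(0, 5, dt)
true_a, true_b = -1.8, 3.2                       # assumed "true" dynamics
u = np.where(t < 2.0, 1.0, 0.0)                  # step-then-release input

def simulate(a, b):
    q = np.zeros_like(t)
    for k in range(len(t) - 1):
        q[k + 1] = q[k] + dt * (a * q[k] + b * u[k])   # q_dot = a*q + b*u
    return q

rng = np.random.default_rng(1)
measured = simulate(true_a, true_b) + 0.01 * rng.standard_normal(len(t))

cost = lambda p: np.sum((simulate(*p) - measured) ** 2)
result = minimize(cost, x0=[-1.0, 1.0], method='Nelder-Mead')
print("identified (a, b):", result.x)
```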
NASA Astrophysics Data System (ADS)
Zaveri, Mazad Shaheriar
The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered as an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired from the neuro/cognitive sciences. Consequently in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology - CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL based architectures is 32 to 40 times better than the TPM for a CMOS based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate the hardware implementations of cortex-scale spiking neural system, which is an approximate neural equivalent of BICM based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate to implement such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides the prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.
NASA Technical Reports Server (NTRS)
Mayer, Richard
1988-01-01
The integrated development support environment (IDSE) is a suite of integrated software tools that provide intelligent support for information modelling. These tools assist in function, information, and process modeling. Additional tools exist to assist in gathering and analyzing information to be modeled. This is a user's guide to application of the IDSE. Sections covering the requirements and design of each of the tools are presented. There are currently three integrated computer aided manufacturing definition (IDEF) modeling methodologies: IDEF0, IDEF1, and IDEF2. Also, four appendices exist to describe hardware and software requirements, installation procedures, and basic hardware usage.
NASA Technical Reports Server (NTRS)
Pang, Jackson; Liddicoat, Albert; Ralston, Jesse; Pingree, Paula
2006-01-01
The current implementation of the Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays (TRIGA) is equipped with CFDP protocol and CCSDS Telemetry and Telecommand framing schemes to replace the CPU intensive software counterpart implementation for reliable deep space communication. We present the hardware/software co-design methodology used to accomplish high data rate throughput. The hardware CFDP protocol stack implementation is then compared against the two recent flight implementations. The results from our experiments show that TRIGA offers more than 3 orders of magnitude throughput improvement with less than one-tenth of the power consumption.
DYNER: A DYNamic ClustER for Education and Research
ERIC Educational Resources Information Center
Kehagias, Dimitris; Grivas, Michael; Mamalis, Basilis; Pantziou, Grammati
2006-01-01
Purpose: The purpose of this paper is to evaluate the use of a non-expensive dynamic computing resource, consisting of a Beowulf class cluster and a NoW, as an educational and research infrastructure. Design/methodology/approach: Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, provide…
An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1993-01-01
This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.
Automated Methodologies for the Design of Flow Diagrams for Development and Maintenance Activities
NASA Astrophysics Data System (ADS)
Shivanand M., Handigund; Shweta, Bhat
The Software Requirements Specification (SRS) is a text document prepared by an organization's strategic management to capture its requirements. The requirements of the ongoing business and project development processes involve software tools, hardware devices, manual procedures, application programs, and communication commands. These components are ordered to achieve the mission of the process concerned, in both project development and ongoing business, through different flow diagrams, viz. the activity chart, workflow diagram, activity diagram, component diagram, and deployment diagram. This paper proposes two generic, automatic methodologies for the design of flow diagrams for (i) project development activities and (ii) ongoing business processes. The methodologies also resolve any ensuing deadlocks in the flow diagrams and determine the critical paths for the activity chart. Though the two methodologies are independent, each complements the other in authenticating its correctness and completeness.
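For readers unfamiliar with the critical-path computation mentioned above, a minimal sketch follows; the activity network, durations, and dependencies are hypothetical, and the code simply performs a topological ordering followed by a longest-path pass.

```python
# Minimal sketch (hypothetical activities): topological ordering and the
# critical path of an activity chart, assuming the dependency graph is acyclic.
duration = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 1}
depends  = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

# Topological order: repeatedly take activities whose predecessors are done.
order, done = [], set()
while len(order) < len(duration):
    for act, preds in depends.items():
        if act not in done and all(p in done for p in preds):
            order.append(act)
            done.add(act)

# Longest (critical) path by dynamic programming over the topological order.
earliest_finish, critical_pred = {}, {}
for act in order:
    start = max((earliest_finish[p] for p in depends[act]), default=0)
    earliest_finish[act] = start + duration[act]
    critical_pred[act] = (max(depends[act], key=lambda p: earliest_finish[p])
                          if depends[act] else None)

end = max(earliest_finish, key=earliest_finish.get)
path = [end]
while critical_pred[path[-1]]:
    path.append(critical_pred[path[-1]])
print("critical path:", " -> ".join(reversed(path)), "  duration:", earliest_finish[end])
```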
An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis
NASA Technical Reports Server (NTRS)
Tsow, Alex
2008-01-01
Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. Based on the digital design derivation (DDD) system a designer-guided synthesis technique that applies correctness preserving transformations to synchronous data flow specifications expressed as co- recursive stream equations Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, garbage collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD s developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER-NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream-interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.
A methodology for identification and control of electro-mechanical actuators
Tutunji, Tarek A.; Saleem, Ashraf
2015-01-01
Mechatronic systems are fully-integrated engineering systems that are composed of mechanical, electronic, and computer control sub-systems. These integrated systems use electro-mechanical actuators to cause the required motion. Therefore, the design of appropriate controllers for these actuators is an essential step in mechatronic system design. In this paper, a three-stage methodology for real-time identification and control of electro-mechanical actuator plants is presented, tested, and validated. First, identification models are constructed from experimental data to approximate the plants' response. Second, the identified model is used in a simulation environment for the purpose of designing a suitable controller. Finally, the designed controller is applied and tested on the real plant through a Hardware-in-the-Loop (HIL) environment. The described three-stage methodology provides the following practical contributions: • Establishes an easy-to-follow methodology for controller design of electro-mechanical actuators. • Combines off-line and on-line controller design for practical performance. • Modifies the HIL concept by using physical plants with computer control (rather than virtual plants with physical controllers). Simulated and experimental results for two case studies, an induction motor and a vehicle drive system, are presented in order to validate the proposed methodology. These results showed that electromechanical actuators can be identified and controlled using an easy-to-duplicate and flexible procedure. PMID:26150992
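A compact sketch of the three-stage flow, under simplifying assumptions (a synthetic first-order actuator model, least-squares identification, and a brute-force PI search), is given below; the hardware-in-the-loop stage is represented only by a comment, since it requires the physical plant.

```python
# Minimal sketch of the three-stage idea under simplifying assumptions:
# (1) identify a first-order discrete model from input/output data by least
# squares, (2) tune a PI controller against the identified model in simulation,
# (3) the HIL stage with the physical plant is only indicated by a comment.
import numpy as np

# Stage 1: identification from (assumed) recorded data y[k+1] = a*y[k] + b*u[k]
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = 0.92 * y[k] + 0.35 * u[k] + 0.01 * rng.standard_normal()
phi = np.column_stack([y[:-1], u])
a_hat, b_hat = np.linalg.lstsq(phi, y[1:], rcond=None)[0]
print("identified a, b:", a_hat, b_hat)

# Stage 2: evaluate candidate PI gains against the identified model.
def closed_loop_error(kp, ki, steps=200, ref=1.0):
    yk, integ, err_sum = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = ref - yk
        integ += e
        uk = kp * e + ki * integ
        yk = a_hat * yk + b_hat * uk
        err_sum += e * e
    return err_sum

best = min(((kp, ki) for kp in np.linspace(0.1, 2, 20) for ki in np.linspace(0.01, 0.5, 20)),
           key=lambda g: closed_loop_error(*g))
print("PI gains chosen in simulation:", best)
# Stage 3 (not shown): download the gains to the real-time target and repeat
# the test against the physical actuator in the HIL loop.
```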
A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto
2017-10-01
Hyperspectral imaging systems provide images in which single pixels have information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between different materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices, such as Field Programmable Gate Arrays (FPGAs), are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies which can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level synthesis design language can be used to efficiently develop applications on multiple, different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
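The linear-unmixing core of such designs can be illustrated in a few lines of NumPy/SciPy rather than OpenCL; the sketch below uses synthetic endmembers and pixels and estimates non-negative abundances with nnls, which is a stand-in for, not a reproduction of, the FUN algorithm.

```python
# Minimal sketch (synthetic data, plain NumPy/SciPy rather than an OpenCL
# kernel): estimate per-pixel abundances of known endmember signatures by
# non-negative least squares, i.e. the linear-unmixing operation that the
# FPGA design above accelerates.
import numpy as np
from scipy.optimize import nnls

bands, n_endmembers, n_pixels = 50, 3, 4
rng = np.random.default_rng(2)
E = np.abs(rng.standard_normal((bands, n_endmembers)))   # endmember spectra (assumed known)

true_abund = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)   # non-negative, sum to one
pixels = true_abund @ E.T + 0.01 * rng.standard_normal((n_pixels, bands))

for i, px in enumerate(pixels):
    abund, _ = nnls(E, px)                                # non-negative abundance estimate
    print(f"pixel {i}: estimated {np.round(abund, 2)}  true {np.round(true_abund[i], 2)}")
```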
F-16XL-2 Supersonic Laminar Flow Control Flight Test Experiment
NASA Technical Reports Server (NTRS)
Anders, Scott G.; Fischer, Michael C.
1999-01-01
The F-16XL-2 Supersonic Laminar Flow Control Flight Test Experiment was part of the NASA High-Speed Research Program. The goal of the experiment was to demonstrate extensive laminar flow, to validate computational fluid dynamics (CFD) codes and design methodology, and to establish laminar flow control design criteria. Topics include the flight test hardware and design, airplane modification, the pressure and suction distributions achieved, the laminar flow achieved, and the data analysis and code correlation.
NASA Technical Reports Server (NTRS)
1981-01-01
The software package evaluation was designed to analyze commercially available, field-proven, production control or manufacturing resource planning management technology and software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.
Software requirements flow-down and preliminary software design for the G-CLEF spectrograph
NASA Astrophysics Data System (ADS)
Evans, Ian N.; Budynkiewicz, Jamie A.; DePonte Evans, Janet; Miller, Joseph B.; Onyuksel, Cem; Paxson, Charles; Plummer, David A.
2016-08-01
The Giant Magellan Telescope (GMT)-Consortium Large Earth Finder (G-CLEF) is a fiber-fed, precision radial velocity (PRV) optical echelle spectrograph that will be the first light instrument on the GMT. The G-CLEF instrument device control subsystem (IDCS) provides software control of the instrument hardware, including the active feedback loops that are required to meet the G-CLEF PRV stability requirements. The IDCS is also tasked with providing operational support packages that include data reduction pipelines and proposal preparation tools. A formal, but ultimately pragmatic approach is being used to establish a complete and correct set of requirements for both the G-CLEF device control and operational support packages. The device control packages must integrate tightly with the state-machine driven software and controls reference architecture designed by the GMT Organization. A model-based systems engineering methodology is being used to develop a preliminary design that meets these requirements. Through this process we have identified some lessons that have general applicability to the development of software for ground-based instrumentation. For example, tasking an individual with overall responsibility for science/software/hardware integration is a key step to ensuring effective integration between these elements. An operational concept document that includes detailed routine and non-routine operational sequences should be prepared in parallel with the hardware design process to tie together these elements and identify any gaps. Appropriate time-phasing of the hardware and software design phases is important, but revisions to driving requirements that impact software requirements and preliminary design are inevitable. Such revisions must be carefully managed to ensure efficient use of resources.
Multirate flutter suppression system design for the Benchmark Active Controls Technology Wing
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1994-01-01
To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies will be applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing (also called the PAPA wing). Eventually, the designs will be implemented in hardware and tested on the BACT wing in a wind tunnel. This report describes a project at the University of Washington to design a multirate flutter suppression system for the BACT wing. The objective of the project was twofold. First, to develop a methodology for designing robust multirate compensators, and second, to demonstrate the methodology by applying it to the design of a multirate flutter suppression system for the BACT wing. The contributions of this project are (1) development of an algorithm for synthesizing robust low order multirate control laws (the algorithm is capable of synthesizing a single compensator which stabilizes both the nominal plant and multiple plant perturbations); (2) development of a multirate design methodology, and supporting software, for modeling, analyzing and synthesizing multirate compensators; and (3) design of a multirate flutter suppression system for NASA's BACT wing which satisfies the specified design criteria. This report describes each of these contributions in detail. Section 2.0 discusses our design methodology. Section 3.0 details the results of our multirate flutter suppression system design for the BACT wing. Finally, Section 4.0 presents our conclusions and suggestions for future research. The body of the report focuses primarily on the results. The associated theoretical background appears in the three technical papers that are included as Attachments 1-3. Attachment 4 is a user's manual for the software that is key to our design methodology.
Design and control of compliant tensegrity robots through simulation and hardware validation.
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-09-06
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ('tensile-integrity') structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Allen, Cheryl L.
1991-01-01
Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than that which is currently achievable through the use of a standard data base system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.
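A toy version of rule-based equipment selection, with a hypothetical catalogue, constraints, and weights that are not taken from the SDCM or KES, might look as follows.

```python
# Minimal sketch (hypothetical catalogue and rules): filter a hardware
# catalogue against subsystem constraints, then rank the surviving candidates.
candidates = [
    {"name": "IMU-A", "mass_kg": 4.2, "power_w": 18, "accuracy": 0.05, "cost": 1.0},
    {"name": "IMU-B", "mass_kg": 2.8, "power_w": 25, "accuracy": 0.02, "cost": 1.6},
    {"name": "IMU-C", "mass_kg": 6.0, "power_w": 12, "accuracy": 0.10, "cost": 0.7},
]

rules = [
    ("mass within budget",      lambda c: c["mass_kg"] <= 5.0),
    ("power within budget",     lambda c: c["power_w"] <= 30),
    ("meets pointing accuracy", lambda c: c["accuracy"] <= 0.06),
]

def admissible(c):
    return all(check(c) for _, check in rules)

def score(c):
    # Lower is better: weighted mass, power and cost (weights are assumptions).
    return 0.4 * c["mass_kg"] + 0.3 * c["power_w"] / 10 + 0.3 * c["cost"]

survivors = sorted((c for c in candidates if admissible(c)), key=score)
for c in survivors:
    print(c["name"], round(score(c), 3))
```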
Defining Exercise Performance Metrics for Flight Hardware Development
NASA Technical Reports Server (NTRS)
Beyene, Nahon M.
2004-01-01
The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth's gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans towards exploration of Mars and beyond acknowledge the necessity of exercise. Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space exploration.
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
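A minimal illustration of a simplicial piecewise-affine direct virtual sensor, built here from synthetic data with Delaunay-based linear interpolation (one convenient PWA realisation, not the authors' toolchain), is sketched below.

```python
# Minimal sketch (toy data): a direct virtual sensor as a simplicial
# piecewise-affine map from measured quantities to an unmeasured variable,
# using Delaunay-based linear interpolation as the PWA function class.
# The nonlinear relation below is synthetic, not vehicle dynamics.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(400, 2))             # measured quantities
z = np.sin(2 * X[:, 0]) * X[:, 1] + 0.02 * rng.standard_normal(400)   # "unmeasured" variable

sensor = LinearNDInterpolator(X, z)               # simplicial PWA virtual sensor

query = np.array([[0.3, -0.5], [0.1, 0.8]])
print("estimates:", sensor(query))
```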
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming environment Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
Horton, Emily L; Renganathan, Ramkesh; Toth, Bryan N; Cohen, Alexa J; Bajcsy, Andrea V; Bateman, Amelia; Jennings, Mathew C; Khattar, Anish; Kuo, Ryan S; Lee, Felix A; Lim, Meilin K; Migasiuk, Laura W; Zhang, Amy; Zhao, Oliver K; Oliveira, Marcio A
2017-01-01
To lay the groundwork for devising, improving, and implementing new technologies to meet the needs of individuals with visual impairments, a systematic literature review was conducted to: a) describe hardware platforms used in assistive devices, b) identify their various applications, and c) summarize practices in user testing conducted with these devices. A search in relevant EBSCO databases for articles published between 1980 and 2014 with terminology related to visual impairment, technology, and tactile sensory adaptation yielded 62 articles that met the inclusion criteria for final review. It was found that while earlier hardware development focused on pin matrices, the emphasis then shifted toward force feedback haptics and accessible touch screens. The inclusion of interactive and multimodal features has become increasingly prevalent. The quantity and consistency of research on navigation, education, and computer accessibility suggest that these are pertinent areas of need for the visually impaired community. Methodologies for usability testing ranged from case studies to larger cross-sectional studies. Many studies used blindfolded sighted users to draw conclusions about design principles and usability. Altogether, the findings presented in this review provide insight on effective design strategies and user testing methodologies for future research on assistive technology for individuals with visual impairments.
NASA Astrophysics Data System (ADS)
Benkrid, K.; Belkacemi, S.; Sukhsawas, S.
2005-06-01
This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.
Optical interferometer testbed
NASA Technical Reports Server (NTRS)
Blackwood, Gary H.
1991-01-01
Viewgraphs on optical interferometer testbed presented at the MIT Space Research Engineering Center 3rd Annual Symposium are included. Topics covered include: space-based optical interferometer; optical metrology; sensors and actuators; real time control hardware; controlled structures technology (CST) design methodology; identification for MIMO control; FEM/ID correlation for the naked truss; disturbance modeling; disturbance source implementation; structure design: passive damping; low authority control; active isolation of lightweight mirrors on flexible structures; open loop transfer function of mirror; and global/high authority control.
Rocketdyne PSAM: In-house enhancement/application
NASA Technical Reports Server (NTRS)
Newell, J. F.; Rajagopal, K. R.; Ohara, K.
1991-01-01
Development of the Probabilistic Design Analysis (PDA) process for rocket engines was initiated. This will give engineers a quantitative assessment of calculated reliability during the design process. The PDA will help choose better designs, make them more robust, and help decide on critical tests to demonstrate key reliability issues, thereby improving confidence in the engine's capabilities. Rocketdyne's involvement with the Composite Loads Spectra (CLS) and Probabilistic Structural Analysis Methodology (PSAM) contracts started this effort, and these are key elements in the ongoing developments. Internal development efforts and hardware applications complement and extend the CLS and PSAM efforts. The completion of the CLS option work and the follow-on PSAM developments will also be integral parts of this methodology. A brief summary of these efforts is presented.
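As a flavour of what a probabilistic design analysis computes, the following Monte Carlo sketch estimates a failure probability from assumed stress and strength distributions; the numbers are purely illustrative and unrelated to any engine hardware.

```python
# Minimal sketch (illustrative distributions, not Rocketdyne data): a Monte
# Carlo estimate of failure probability for a stress-vs-strength margin, the
# basic calculation a probabilistic design analysis performs during design.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
stress   = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # MPa, assumed load scatter
strength = rng.normal(loc=420.0, scale=25.0, size=n)               # MPa, assumed material scatter

margin = strength - stress
p_fail = np.mean(margin <= 0.0)
print(f"estimated failure probability: {p_fail:.2e}")
print(f"estimated reliability:         {1.0 - p_fail:.6f}")
```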
NASA Technical Reports Server (NTRS)
Padilla, Peter A.
1991-01-01
An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.
NASA Technical Reports Server (NTRS)
Boulanger, Richard; Overland, David
2004-01-01
Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies, and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates of as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware to software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.
Pérez Suárez, Santiago T.; Travieso González, Carlos M.; Alonso Hernández, Jesús B.
2013-01-01
This article presents a design methodology for designing an artificial neural network as an equalizer for a binary signal. Firstly, the system is modelled in floating point format using Matlab. Afterward, the design is described for a Field Programmable Gate Array (FPGA) using fixed point format. The FPGA design is based on the System Generator from Xilinx, which is a design tool over Simulink of Matlab. System Generator allows one to design in a fast and flexible way. It uses low level details of the circuits and the functionality of the system can be fully tested. System Generator can be used to check the architecture and to analyse the effect of the number of bits on the system performance. Finally the System Generator design is compiled for the Xilinx Integrated System Environment (ISE) and the system is described using a hardware description language. In ISE the circuits are managed with high level details and physical performances are obtained. In the Conclusions section, some modifications are proposed to improve the methodology and to ensure portability across FPGA manufacturers.
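The effect of the number of bits can be illustrated without an FPGA at all; in the sketch below a least-squares FIR equalizer stands in for the neural network, its coefficients are rounded to a given number of fractional bits, and the resulting bit-error rates are compared. The channel, signal lengths, and bit widths are assumptions.

```python
# Minimal sketch of the fixed-point question studied above, with a linear
# FIR equalizer standing in for the neural network: train in floating point,
# quantize the coefficients, and compare bit-error rates.
import numpy as np

rng = np.random.default_rng(5)
bits = rng.integers(0, 2, 5000) * 2 - 1                     # binary +/-1 signal
channel = np.array([1.0, 0.45, 0.2])                        # assumed dispersive channel
received = np.convolve(bits, channel, mode='full')[:len(bits)]
received = received + 0.05 * rng.standard_normal(len(bits))

# Floating-point "training": least-squares fit of a 7-tap equalizer.
taps = 7
X = np.column_stack([np.roll(received, k) for k in range(taps)])
w_float = np.linalg.lstsq(X[taps:], bits[taps:], rcond=None)[0]

def quantize(w, frac_bits):
    scale = 2 ** frac_bits
    return np.round(w * scale) / scale                      # fixed-point rounding

def ber(w):
    decisions = np.sign(X @ w)
    return np.mean(decisions[taps:] != bits[taps:])

print("float  BER:", ber(w_float))
for fb in (8, 6, 4, 2):
    print(f"{fb}-bit  BER:", ber(quantize(w_float, fb)))
```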
Contamination Control for Thermal Engineers
NASA Technical Reports Server (NTRS)
Rivera, Rachel B.
2015-01-01
The presentation will be given at the 26th Annual Thermal Fluids Analysis Workshop (TFAWS 2015) hosted by the Goddard Spaceflight Center (GSFC) Thermal Engineering Branch (Code 545). This course will cover the basics of Contamination Control, including contamination control related failures, the effects of contamination on Flight Hardware, what contamination requirements translate to, design methodology, and implementing contamination control into Integration, Testing and Launch.
MDO can help resolve the designer's dilemma. [multidisciplinary design optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.
1991-01-01
Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from advances in computer software and hardware. Expected benefits from this methodology are a rational, mathematically consistent approach to hypersonic aircraft designs, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.
NASA Technical Reports Server (NTRS)
Madrid, G. A.; Westmoreland, P. T.
1983-01-01
A progress report is presented on a program to upgrade the existing NASA Deep Space Network in terms of a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system and network level synthesis and testing, and system verification and validation. The software has been implemented thus far to a 65 percent completion level, and the methodology being used to effect the changes, which will permit enhanced tracking and communication with spacecraft, has been concluded to feature effective techniques.
Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian
2014-01-01
This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using the DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppression was observed in our experiments from both the frequency domain and time domain perspectives.
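The adaptive line enhancer itself is easy to prototype in software before committing it to FPGA fabric; the sketch below runs an LMS-based ALE on synthetic heart/breath signals, with the delay, filter length, and step size chosen purely for illustration.

```python
# Minimal sketch (synthetic signals, NumPy rather than FPGA logic) of the
# adaptive line enhancer idea: a delayed copy of the input drives an LMS
# filter that predicts the quasi-periodic heart-sound component, and the
# prediction error that remains is the enhanced breath sound.
import numpy as np

fs = 4000
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(6)
heart  = 0.8 * np.sin(2 * np.pi * 1.2 * t) * np.sin(2 * np.pi * 60 * t)   # quasi-periodic interference
breath = 0.3 * rng.standard_normal(len(t))                                # broadband "breath" signal
x = heart + breath

delay, taps, mu = 40, 64, 1e-3      # decorrelation delay, filter length, step size (assumed)
w = np.zeros(taps)
enhanced = np.zeros_like(x)
for n in range(delay + taps, len(x)):
    ref = x[n - delay - taps + 1 : n - delay + 1][::-1]   # delayed reference vector
    y_hat = w @ ref                                       # predicted (correlated) component
    e = x[n] - y_hat                                      # prediction error = enhanced output
    w += 2 * mu * e * ref                                 # LMS update
    enhanced[n] = e

print("input power    :", np.mean(x[fs:] ** 2))
print("enhanced power :", np.mean(enhanced[fs:] ** 2))
```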
Satellite servicing mission preliminary cost estimation model
NASA Technical Reports Server (NTRS)
1987-01-01
The cost model presented is a preliminary methodology for determining a rough order-of-magnitude cost for implementing a satellite servicing mission. Mission implementation, in this context, encompasses all activities associated with mission design and planning, including both flight and ground crew training and systems integration (payload processing) of servicing hardware with the Shuttle. A basic assumption made in developing this cost model is that a generic set of servicing hardware was developed and flight tested, is inventoried, and is maintained by NASA. This implies that all hardware physical and functional interfaces are well known and therefore recurring CITE testing is not required. The development of the cost model algorithms and examples of their use are discussed.
Robust Fixed-Structure Controller Synthesis
NASA Technical Reports Server (NTRS)
Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)
2000-01-01
The ability to develop an integrated control system design methodology for robust high performance controllers satisfying multiple design criteria and real world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.
Hardware and software reliability estimation using simulations
NASA Technical Reports Server (NTRS)
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
Methodology for object-oriented real-time systems analysis and design: Software engineering
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1991-01-01
Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation while the original specification, and perhaps the high-level design, is non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects where the operations or methods of the objects correspond to processes in the data flow diagrams, and then design in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the logical models of object-oriented real-time systems analysis through the physical architectural models to the high-level design stages. The methodology is appropriate to the overall specification, including hardware and software modules. In software modules, the systems analysis objects are transformed into software objects.
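A minimal, hypothetical example of such a systems-analysis object, an entity defined by its states and state-transition rules that can later become a software object unchanged, is sketched below.

```python
# Minimal sketch (hypothetical example) of a "real-time systems-analysis
# object": an entity with its own states and state-transition rules that can
# later be carried unchanged into an object-oriented design.
class AnalysisObject:
    def __init__(self, name, initial, transitions):
        # transitions: {(state, event): next_state}
        self.name, self.state, self.transitions = name, initial, transitions

    def on_event(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

valve = AnalysisObject(
    "coolant_valve", "closed",
    {("closed", "open_cmd"): "opening",
     ("opening", "limit_switch"): "open",
     ("open", "close_cmd"): "closing",
     ("closing", "limit_switch"): "closed"})

for ev in ("open_cmd", "limit_switch", "close_cmd", "limit_switch"):
    print(ev, "->", valve.on_event(ev))
```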
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems, along with computer-based tools that assist users in verifying their systems, were developed. These tools were applied to verify, in part, the SIFT ultrareliable aircraft computer. Topics covered include: the STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, another part of my research is focused on designing specific hardware based on reconfigurable computing techniques in order to accelerate AGENT computations. This is the first time an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified. Moreover, the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the acceleration design on FPGA achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus making it possible to extend the application range of neutron transport analysis in both industrial engineering and academic research.
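The pay-off of accelerating only the dominant kernel is bounded by Amdahl's law, which the short sketch below evaluates for a few assumed runtime fractions and kernel speedups; the numbers are illustrative and not taken from the AGENT measurements.

```python
# Minimal sketch (illustrative numbers): an Amdahl's-law estimate of the
# end-to-end gain from accelerating the dominant transport sweep in hardware,
# the kind of bound that guides which kernel to move onto the FPGA.
def overall_speedup(accelerated_fraction, kernel_speedup):
    serial = 1.0 - accelerated_fraction
    return 1.0 / (serial + accelerated_fraction / kernel_speedup)

for frac in (0.80, 0.90, 0.95):          # share of runtime in the accelerated kernel (assumed)
    for s in (10, 20, 50):               # raw speedup of that kernel in hardware (assumed)
        print(f"fraction={frac:.2f} kernel x{s:>2}: overall x{overall_speedup(frac, s):.1f}")
```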
Robotic system construction with mechatronic components inverted pendulum: humanoid robot
NASA Astrophysics Data System (ADS)
Sandru, Lucian Alexandru; Crainic, Marius Florin; Savu, Diana; Moldovan, Cristian; Dolga, Valer; Preitl, Stefan
2017-03-01
Mechatronics is a new methodology used to achieve an optimal design of an electromechanical product. This methodology is a collection of practices, procedures and rules used by those who work in a particular branch of knowledge or discipline. Education in mechatronics at the Polytechnic University Timisoara is organized on three levels: bachelor, master and PhD studies. These activities also concern the design of mechatronic systems. In this context the design, implementation and experimental study of a family of mechatronic demonstrators occupies an important place. In this paper, a variant of a mechatronic demonstrator based on the combination of electrical and mechanical components is proposed. The demonstrator, named the humanoid robot, is equivalent to an inverted pendulum. An analysis of the components for the associated functions of the humanoid robot is presented. This type of mechatronic system development, combining hardware and software, offers the opportunity to build optimal solutions.
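A textbook-style sketch of the underlying control problem, an inverted pendulum stabilised by a PD law acting through the pivot acceleration, is given below; the parameters are assumptions and the model is not the Timisoara demonstrator.

```python
# Minimal sketch (textbook model, not the hardware above): simulate an
# inverted pendulum and stabilise it with a PD law on the pendulum angle;
# all parameters are assumptions chosen for illustration.
import numpy as np

g, L = 9.81, 0.5          # gravity (m/s^2), pendulum length (m)
dt, T = 0.002, 3.0
kp, kd = 40.0, 6.0        # PD gains on the angle (hand-tuned assumption)

theta, omega = 0.15, 0.0  # initial tilt (rad) and angular rate
history = []
for _ in range(int(T / dt)):
    u = -(kp * theta + kd * omega)                        # pivot acceleration command
    alpha = (g / L) * np.sin(theta) + (u / L) * np.cos(theta)   # sign convention of this model
    omega += alpha * dt
    theta += omega * dt
    history.append(theta)

print("initial angle: 0.150 rad   final angle: %.4f rad" % history[-1])
```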
A Nonlinear Digital Control Solution for a DC/DC Power Converter
NASA Technical Reports Server (NTRS)
Zhu, Minshao
2002-01-01
A digital Nonlinear Proportional-Integral-Derivative (NPID) control algorithm was proposed to control a 1-kW PWM DC/DC switching power converter. The NPID methodology is introduced and a practical hardware control solution is obtained. The design of the controller was completed using Matlab (trademark) Simulink, while hardware-in-the-loop testing was performed using both the dSPACE (trademark) rapid prototyping system and a stand-alone Texas Instruments (trademark) Digital Signal Processor (DSP)-based system. The final nonlinear digital control algorithm was implemented and tested using the ED408043-1 Westinghouse DC-DC switching power converter. The NPID test results are discussed and compared to the results of a standard Proportional-Integral (PI) controller.
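The abstract does not give the actual NPID control law, so the following is only a hedged sketch of one common form, in which a nonlinear, error-dependent gain scales an otherwise conventional PID term. All gains, the gain shape, and the duty-cycle limits are illustrative assumptions.

```python
# Hedged sketch of a nonlinear PID (NPID) voltage loop for a DC/DC converter.
# The error-dependent gain g(e) = 1 + k1*|e|**alpha is one typical NPID choice,
# not necessarily the law used in the referenced work.

class NPID:
    def __init__(self, kp, ki, kd, k1=0.5, alpha=0.5, dt=1e-4):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.k1, self.alpha, self.dt = k1, alpha, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, v_ref, v_out):
        e = v_ref - v_out
        gain = 1.0 + self.k1 * abs(e) ** self.alpha   # nonlinear error scaling
        self.integral += e * self.dt
        deriv = (e - self.prev_error) / self.dt
        self.prev_error = e
        duty = gain * (self.kp * e + self.ki * self.integral + self.kd * deriv)
        return min(max(duty, 0.0), 0.95)              # clamp to a PWM range
```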
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who prefer the schematic approach. The proposed visual hardware behavioral language can graphically express design information using visual parallel models (blocks), visual sequential models (processes), and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify concurrent and sequential hardware functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) for hardware designers. This allows hardware designers to capture design ideas easily and explicitly.
Modeling and Analysis of Space Based Transceivers
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Liebetreu, John; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben
2005-01-01
This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.
Modeling and Analysis of Space Based Transceivers
NASA Technical Reports Server (NTRS)
Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.
2007-01-01
This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.
A Framework for Assessing the Reusability of Hardware (Reusable Rocket Engines)
NASA Technical Reports Server (NTRS)
Childress-Thompson, Rhonda; Farrington, Philip; Thomas, Dale
2016-01-01
Within the space flight community, reusability has taken center stage as the new buzzword. In order for reusable hardware to be competitive with its expendable counterpart, two major elements must be closely scrutinized. First, recovery and refurbishment costs must be lower than the development and acquisition costs. Additionally, the reliability for reused hardware must remain the same (or nearly the same) as "first use" hardware. Therefore, it is imperative that a systematic approach be established to enhance the development of reusable systems. However, before the decision can be made on whether it is more beneficial to reuse hardware or to replace it, the parameters that are needed to deem hardware worthy of reuse must be identified. For reusable hardware to be successful, the factors that must be considered are reliability (integrity, life, number of uses), operability (maintenance, accessibility), and cost (procurement, retrieval, refurbishment). These three factors are essential to the successful implementation of reusability while enabling the ability to meet performance goals. Past and present strategies and attempts at reuse within the space industry will be examined to identify important attributes of reusability that can be used to evaluate hardware when contemplating reusable versus expendable options. This paper will examine why reuse must be stated as an initial requirement rather than included as an afterthought in the final design. Late in the process, changes in the overall objective/purpose of components typically have adverse effects that potentially negate the benefits. A methodology for assessing the viability of reusing hardware will be presented by using the Space Shuttle Main Engine (SSME) to validate the approach. Because reliability, operability, and costs are key drivers in making this critical decision, they will be used to assess requirements for reuse as applied to components of the SSME.
An Approach for Performance Based Glove Mobility Requirements
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay; Benson, Elizabeth; England, Scott
2016-01-01
The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for exploration missions. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Space Technology Mission Directorate's Game-Changing Development Program provided start-up funding for the High Performance EVA Glove (HPEG) Element as part of the Next Generation Life Support (NGLS) Project in the fall of 2013. The overarching goal of the HPEG Element is to develop a robust glove design that increases human performance during EVA and creates a pathway for implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability in non-pristine environments, and decreasing the potential of gloves to cause injury during use. The HPEG Element focused initial efforts on developing quantifiable and repeatable methodologies for assessing glove performance with respect to mobility, injury potential, thermal conductivity, and abrasion resistance. The team used these methodologies to establish requirements against which emerging technologies and glove designs can be assessed at both the component and assembly levels. The mobility performance testing methodology was an early focus for the HPEG team as it stems from collaborations between the SSA Development team and the JSC Anthropometry and Biomechanics Facility (ABF) that began investigating new methods for suited mobility and fit early in the Constellation Program. The combined HPEG and ABF team used lessons learned from the previous efforts as well as additional reviews of methodologies in physical and occupational therapy arenas to develop a protocol that assesses gloved range of motion, strength, dexterity, tactility, and fit in comparative quantitative terms and also provides qualitative insight to direct hardware design iterations. The protocol was evaluated using five experienced test subjects wearing the EMU pressurized to 4.3 psid with three different glove configurations. The results of the testing are presented to illustrate where the protocol is and is not valid for benchmark comparisons. The process for requirements development based upon the results is also presented along with suggested performance values for the High Performance EVA Gloves currently in development.
An Approach for Performance Based Glove Mobility Requirements
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay; Benson, Elizabeth; England, Scott
2015-01-01
The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for exploration missions. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Space Technology Mission Directorate's Game-Changing Development Program provided start-up funding for the High Performance EVA Glove (HPEG) Element as part of the Next Generation Life Support (NGLS) Project in the fall of 2013. The overarching goal of the HPEG Element is to develop a robust glove design that increases human performance during EVA and creates a pathway for implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability in non-pristine environments, and decreasing the potential of gloves to cause injury during use. The HPEG Element focused initial efforts on developing quantifiable and repeatable methodologies for assessing glove performance with respect to mobility, injury potential, thermal conductivity, and abrasion resistance. The team used these methodologies to establish requirements against which emerging technologies and glove designs can be assessed at both the component and assembly levels. The mobility performance testing methodology was an early focus for the HPEG team as it stems from collaborations between the SSA Development team and the JSC Anthropometry and Biomechanics Facility (ABF) that began investigating new methods for suited mobility and fit early in the Constellation Program. The combined HPEG and ABF team used lessons learned from the previous efforts as well as additional reviews of methodologies in physical and occupational therapy arenas to develop a protocol that assesses gloved range of motion, strength, dexterity, tactility, and fit in comparative quantitative terms and also provides qualitative insight to direct hardware design iterations. The protocol was evaluated using five experienced test subjects wearing the EMU pressurized to 4.3 psid with three different glove configurations. The results of the testing are presented to illustrate where the protocol is and is not valid for benchmark comparisons. The process for requirements development based upon the results is also presented along with suggested performance values for the High Performance EVA Gloves to be procured in fiscal year 2015.
Chao, Chun-Tang
2014-01-01
This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using the DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives. PMID:24790573
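For readers unfamiliar with the adaptive line enhancer, the floating-point sketch below shows the underlying LMS structure: a delayed copy of the input feeds an adaptive predictor, whose output tracks the quasi-periodic (heart sound) component while the prediction error retains the broadband breath sounds. The tap count, delay, and step size are illustrative assumptions; the FPGA design described above would use fixed-point arithmetic and its own parameters.

```python
import numpy as np

# Minimal ALE sketch using the LMS update; parameters are illustrative only.
def ale(x, taps=32, delay=16, mu=0.01):
    x = np.asarray(x, dtype=float)
    w = np.zeros(taps)
    enhanced = np.zeros_like(x)   # predicted quasi-periodic component (heart)
    residual = np.zeros_like(x)   # broadband residual (breath sounds)
    for n in range(taps + delay, len(x)):
        ref = x[n - delay - taps:n - delay][::-1]  # delayed input as reference
        y = w @ ref                                 # predictor output
        e = x[n] - y                                # prediction error
        w += 2 * mu * e * ref                       # LMS weight update
        enhanced[n], residual[n] = y, e
    return enhanced, residual
```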
Space Station man-machine automation trade-off analysis
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Bard, J.; Feinberg, A.
1985-01-01
The man-machine automation tradeoff methodology presented here is one of four research tasks comprising the Autonomous Spacecraft System Technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example spacecraft system requiring a certain level of autonomous control, a system-level man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it is still sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration/decision capabilities over equivalent hardware/software where appropriate.
NASA Technical Reports Server (NTRS)
1972-01-01
Guidelines for the design, development, and fabrication of electronic components and circuits for use in spacecraft construction are presented. The discussion covers quality control procedures and test methodology for the following: (1) monolithic integrated circuits, (2) hybrid integrated circuits, (3) transistors, (4) diodes, (5) tantalum capacitors, (6) electromechanical relays, (7) switches and circuit breakers, and (8) electronic packaging.
Streamlined design and self reliant hardware for active control of precision space structures
NASA Technical Reports Server (NTRS)
Hyland, David C.; King, James A.; Phillips, Douglas J.
1994-01-01
Precision space structures may require active vibration control to satisfy critical performance requirements relating to line-of-sight pointing accuracy and the maintenance of precise internal alignments. In order for vibration control concepts to become operational, it is necessary that their benefits be practically demonstrated in large-scale ground-based experiments. A unique opportunity to carry out such demonstrations on a wide variety of experimental testbeds was provided by the NASA Control-Structure Integration (CSI) Guest Investigator (GI) Program. This report surveys the experimental results achieved by the Harris Corporation GI team on both Phases 1 and 2 of the program and provides a detailed description of Phase 2 activities. The Phase 1 results illustrated the effectiveness of active vibration control for space structures and demonstrated a systematic methodology for control design, implementation, and test. In Phase 2, this methodology was significantly streamlined to yield an on-site, single-session design/test capability. Moreover, the Phase 2 research on adaptive neural control techniques made significant progress toward fully automated, self-reliant space structure control systems. As a further thrust toward productized, self-contained vibration control systems, the Harris Phase 2 activity concluded with experimental demonstration of new vibration isolation hardware suitable for a wide range of space-flight and ground-based commercial applications. The CSI GI Program Phase 1 activity was conducted under contract NASA1-18872, and the Phase 2 activity was conducted under NASA1-19372.
Methodology for creating dedicated machine and algorithm on sunflower counting
NASA Astrophysics Data System (ADS)
Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand
2007-09-01
In order to sell grain lots in European countries, seed industries need a government certification. This certification requires purity testing, seed counting (to quantify specified seed species and other impurities in lots), and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently, these different analyses are still performed manually by skilled operators. Previous work has shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and several identification algorithms have been presented. Until now, most of the work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine intended to identify and sort seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), and a PC hosting the graphical user interface of the system. Its operation relies on the stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the whole vision chain (image acquisition, feature extraction, identification) in the Matlab environment. In order to port this processing to dedicated hardware, all the algorithms were developed without the use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
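A minimal software analogue of a "distance between groups" identification rule is sketched below: each seed's feature vector is assigned to the class whose centroid lies nearest. This is only an illustration of the general idea; the feature values, class names, and the choice of Euclidean distance are assumptions, and the actual machine computes its 110 morphology/colour/texture features in FPGA/DSP hardware.

```python
import numpy as np

# Nearest-centroid classifier: a toy stand-in for distance-to-group identification.
class CentroidClassifier:
    def fit(self, X, labels):
        labels = np.asarray(labels)
        self.classes = sorted(set(labels.tolist()))
        self.centroids = {c: X[labels == c].mean(axis=0) for c in self.classes}
        return self

    def predict(self, x):
        # assign the seed to the nearest class centroid (Euclidean distance)
        return min(self.classes,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Hypothetical 3-feature example (area, elongation, mean hue), invented data.
X = np.array([[10.0, 1.2, 0.3], [11.0, 1.1, 0.28], [4.0, 2.5, 0.6], [4.5, 2.4, 0.62]])
clf = CentroidClassifier().fit(X, ["sunflower", "sunflower", "weed", "weed"])
print(clf.predict(np.array([10.5, 1.15, 0.29])))   # -> 'sunflower'
```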
Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.
1994-01-01
This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses and tackles these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, routing algorithms, and other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms, and hybrid simulation to reduce simulation time is introduced.
Analysis of Solar Census Remote Solar Access Value Calculation Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nangle, J.; Dean, J.; Van Geet, O.
2015-03-01
The costs of photovoltaic (PV) system hardware (PV panels, inverters, racking, etc.) have fallen dramatically over the past few years. Non-hardware (soft) costs, however, have failed to keep pace with the decrease in hardware costs, and soft costs have become a major driver of U.S. PV system prices. Upfront or 'sunk' customer acquisition costs make up a portion of an installation's soft costs and can be addressed through software solutions that aim to streamline the sales and system design aspects of customer acquisition. One of the key soft costs associated with sales and system design is collecting information on solar access for a particular site. Solar access, reported in solar access values (SAVs), is a measurement of the available clear sky over a site and is used to characterize the impacts of local shading objects. Historically, onsite shading studies have been required to characterize the SAV of the proposed array and determine the potential energy production of a photovoltaic system.
SCOS 2: A distributed architecture for ground system control
NASA Astrophysics Data System (ADS)
Keyte, Karl P.
The current generation of spacecraft ground control systems in use at the European Space Agency/European Space Operations Centre (ESA/ESOC) is based on SCOS 1. Such systems have become difficult to manage in both functional and financial terms. The next generation of spacecraft demands more flexibility in the use, configuration, and distribution of control facilities, as well as functional requirements capable of matching those being planned for future missions. SCOS 2 is more than a successor to SCOS 1. Many of the shortcomings of the existing system were carefully analyzed by the user and technical communities, and a complete redesign was made. Different technologies are used in many areas, including the hardware platform, network architecture, user interfaces, and implementation techniques, methodologies, and language. As far as possible, a flexible design approach has been taken, using popular industry standards to provide vendor independence in both hardware and software. This paper describes many of the new approaches taken in the architectural design of SCOS 2.
ISS Microgravity Research Payload Training Methodology
NASA Technical Reports Server (NTRS)
Schlagheck, Ronald; Geveden, Rex (Technical Monitor)
2001-01-01
The NASA Microgravity Research Discipline has multiple categories of science payloads that are being planned and are currently under development to operate on various ISS on-orbit increments. The current program includes six subdisciplines: Materials Science, Fluid Physics, Combustion Science, Fundamental Physics, Cellular Biology, and Macromolecular Biotechnology. All of these experiment payloads will require varying degrees of crew interaction and science observation from the astronauts. With the current programs planning to build various facility-class science racks, the crew will need to be trained on basic core operations as well as science background. In addition, many disciplines will use the Express Rack and the Microgravity Science Glovebox (MSG) to take advantage of the accommodations these facilities provide for smaller and less complex hardware. The Microgravity disciplines will be responsible for a training program designed to maximize experiment and hardware throughput and to prepare for contingencies, both anomalies and unexpected experiment observations. Crewmembers will need various levels of training, from simple tasks such as power-on and activation, to extensive training on hardware mode change-out, to observing the growth of various types of tissue cultures. Sample replacement will be required for furnaces and combustion-type modules. The Fundamental Physics program will need crew EVA support for module change-out of experiments. Training will take place at various research centers and hardware development locations. Onboard training is expected through various methods and video/digital technology, as well as limited telecommunication interaction. Since hardware will be designed to operate for periods ranging from a few weeks to multiple research increments, flexibility must be planned into the training approach and procedure skills to optimize output as well as equipment maintainability. Early increment lessons learned will be addressed.
Statechart-based design controllers for FPGA partial reconfiguration
NASA Astrophysics Data System (ADS)
Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo
2015-09-01
Statechart diagrams and the UML technique can be a vital part of early conceptual modeling. At present there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between the imprecise UML model and a formal HDL description. The key concept in the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map parts of the behavior into reprogrammable logic by means of groups of states that form sequential automata. The whole process is illustrated by an example with experimental results.
Systems report for payload G-652: Project origins
NASA Technical Reports Server (NTRS)
Bellina, J.; Muckerheide, M. C.; Clark, J.; Petry, M.; Seeley, D.; Sportiello, R.; Sprecher, R.; Theiler, M.
1988-01-01
Experiments conducted to investigate possible hardware configurations and methodologies for a Get Away Special payload designated G-652 are discussed. Test data collected from the operation of a free electron laser wiggler using a simulated ram glow phenomenon are described. Results of an experiment to synthesize organic compounds within a primordial atmosphere using a laser-induced plasma are discussed. An experiment is described which utilized neutron bombardment to assess the risk of genetic alterations in embryos in space.
VHDL Simulation of the Implementation of a Costfunction Circuit.
1990-09-01
the characteristic delays for each component. At this point in time, it is not necessary for the VHDL code to implement the exact hardware... A partition algorithm is used here as an example to test the VHDL design methodology. Subroutines or statements in the software can be implemented into...
Towards composition of verified hardware devices
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, G. C.
1991-01-01
Computers are being used where no affordable level of testing is adequate. Safety- and life-critical systems must find a replacement for exhaustive testing to guarantee their correctness through a mathematical proof. Hardware verification research has focused on device verification and has largely ignored system composition verification. To address these deficiencies, we examine how the current hardware verification methodology can be extended to verify complete systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Todd M.; Benjamin, Jacob S.; Wright, Virginia L.
This paper will describe a practical methodology for understanding the cyber risk of a digital asset. This research attempts to gain a greater understanding of the cyber risk posed by a hardware-based computer asset by considering it as a sum of its hardware and software based sub-components.
A methodology of SiP testing based on boundary scan
NASA Astrophysics Data System (ADS)
Qin, He; Quan, Haiyang; Han, Yifei; Zhu, Tianrui; Zheng, Tuo
2017-10-01
Systems in Package (SiP) play an important role in portable, aerospace, and military electronics owing to their microminiaturization, light weight, high density, and high reliability. As system scale increases exponentially, SiP system testing encounters problems of system complexity and fault location. For SiP systems, this paper proposes a testing methodology and testing process based on boundary scan technology. Combining the characteristics of SiP systems and drawing on the boundary scan theory of PCB circuits and embedded core testing, a specific testing methodology and process are proposed. The hardware requirements of the SiP system under test are provided, and the hardware platform for the testing has been constructed. The proposed testing methodology offers high test efficiency and accurate fault location.
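To make the boundary-scan idea concrete, the toy model below drives a pattern from one die's output cells, captures it on another die's input cells, and flags mismatching nets, which is the essence of interconnect testing and fault location in a scan chain. The net names, cell counts, and patterns are invented; a real SiP test would drive these values through the TAP/JTAG instructions of the actual devices.

```python
# Toy boundary-scan interconnect test: compare driven vs. captured bits.
def interconnect_test(driven, captured, net_names):
    """Return the names of nets whose captured value differs from the driven value."""
    return [net for net, d, c in zip(net_names, driven, captured) if d != c]

if __name__ == "__main__":
    nets = ["dq0", "dq1", "dq2", "dq3"]      # hypothetical inter-die nets
    driven   = [1, 0, 1, 1]                   # pattern shifted into output cells
    captured = [1, 0, 0, 1]                   # values latched at the input cells
    print(interconnect_test(driven, captured, nets))   # ['dq2'] -> open or stuck net
```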
Design and Verification Guidelines for Vibroacoustic and Transient Environments
NASA Technical Reports Server (NTRS)
1986-01-01
Design and verification guidelines for vibroacoustic and transient environments contain many basic methods that are common throughout the aerospace industry. However, there are some significant differences in methodology between NASA/MSFC and others - both government agencies and contractors. The purpose of this document is to provide the general guidelines used by the Component Analysis Branch, ED23, at MSFC for the application of vibroacoustic and transient technology to all launch vehicle and payload components and experiments managed by NASA/MSFC. This document is intended as a tool to be utilized by MSFC program management and their contractors as a guide for the design and verification of flight hardware.
A Model-Based Approach to Developing Your Mission Operations System
NASA Technical Reports Server (NTRS)
Smith, Robert R.; Schimmels, Kathryn A.; Lock, Patricia D; Valerio, Charlene P.
2014-01-01
Model-Based System Engineering (MBSE) is an increasingly popular methodology for designing complex engineering systems. As the use of MBSE has grown, it has begun to be applied to systems that are less hardware-based and more people- and process-based. We describe our approach to incorporating MBSE as a way to streamline development, and how to build a model consisting of core resources, such as requirements and interfaces, that can be adapted and used by new and upcoming projects. By comparing traditional Mission Operations System (MOS) system engineering with an MOS designed via a model, we will demonstrate the benefits to be obtained by incorporating MBSE in system engineering design processes.
Research in software allocation for advanced manned mission communications and tracking systems
NASA Technical Reports Server (NTRS)
Warnagiris, Tom; Wolff, Bill; Kusmanoff, Antone
1990-01-01
An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of Space Station Freedom (SSF) was performed. The intent of the assessment was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment process was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted from the assessment. It was concluded that the assessment process would not be completely successful for a system with changing requirements. It was also concluded that memory requirements and hardware requirements were being modified to fit as a consequence of the change process, and although throughput could not be quantified, potential problem areas could be identified. Finally, inherent flexibility of the system design was essential for the success of a system design with changing requirements. Recommendations resulting from the assessment included development of common software for some embedded controller functions, reduction of embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improvement in communications between software development personnel to enhance the integration process. Lastly, a critical observation was made that the software integration tasks did not appear to be addressed in the design process to the degree necessary for successful satisfaction of the system requirements.
A methodology for testing fault-tolerant software
NASA Technical Reports Server (NTRS)
Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.
1985-01-01
A methodology for testing fault-tolerant software is presented. There are problems associated with testing fault-tolerant software because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software, namely, to move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
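The sketch below illustrates the "disable the redundancy during testing" strategy in miniature: a three-channel majority voter masks single-channel faults in normal operation, but a test hook exposes the raw channel outputs so an injected error is observable rather than silently outvoted. The channel functions and the test hook are illustrative assumptions, not part of the cited methodology's tooling.

```python
# Illustrative 3-channel voted computation with a redundancy-disable test hook.
def majority(a, b, c):
    """Bitwise/boolean 2-of-3 majority vote."""
    return (a & b) | (a & c) | (b & c)

def run_channels(inputs, channels, redundancy_enabled=True):
    raw = [ch(inputs) for ch in channels]     # run each redundant channel
    if redundancy_enabled:
        return majority(*raw)                 # faults in one channel are masked
    return raw                                # test mode: observe channels directly

# Example: a deliberately faulty third channel is visible only in test mode.
channels = [lambda x: x + 1, lambda x: x + 1, lambda x: x + 2]
print(run_channels(3, channels))                             # 4 (fault masked)
print(run_channels(3, channels, redundancy_enabled=False))   # [4, 4, 5] (fault exposed)
```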
The scheme machine: A case study in progress in design derivation at system levels
NASA Technical Reports Server (NTRS)
Johnson, Steven D.
1995-01-01
The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980's, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes--processor, collector, allocator, memory--with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is an opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability and weight-before committing to building hardware.
Autonomous Power System intelligent diagnosis and control
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony
1991-01-01
The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the abilities of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution hardware. Knowledge-based software provides a robust method of control for highly complex space-based power systems that conventional methods do not allow. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis and control, the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space based power system. The operation of the Autonomous Power System as a whole is described and the responsibilities of the three elements - APEX, AIPS, and Brassboard - are characterized. A discussion of the methodologies used in each element is provided. Future plans are discussed for the growth of the Autonomous Power System.
Autonomous power system intelligent diagnosis and control
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony
1991-01-01
The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the abilities of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution hardware. Knowledge-based software provides a robust method of control for highly complex space-based power systems that conventional methods do not allow. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis and control, the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space based power system. The operation of the Autonomous Power System as a whole is described and the responsibilities of the three elements - APEX, AIPS, and Brassboard - are characterized. A discussion of the methodologies used in each element is provided. Future plans are discussed for the growth of the Autonomous Power System.
Planned Environmental Microbiology Aspects of Future Lunar and Mars Missions
NASA Technical Reports Server (NTRS)
Ott, C. Mark; Castro, Victoria A.; Pierson, Duane L.
2006-01-01
With the establishment of the Constellation Program, NASA has initiated efforts designed similar to the Apollo Program to return to the moon and subsequently travel to Mars. Early lunar sorties will take 4 crewmembers to the moon for 4 to 7 days. Later missions will increase in duration up to 6 months as a lunar habitat is constructed. These missions and vehicle designs are the forerunners of further missions destined for human exploration of Mars. Throughout the planning and design process, lessons learned from the International Space Station (ISS) and past programs will be implemented toward future exploration goals. The standards and requirements for these missions will vary depending on life support systems, mission duration, crew activities, and payloads. From a microbiological perspective, preventative measures will remain the primary techniques to mitigate microbial risk. Thus, most of the effort will focus on stringent preflight monitoring requirements and engineering controls designed into the vehicle, such as HEPA air filters. Due to volume constraints in the CEV, in-flight monitoring will be limited for short-duration missions to the measurement of biocide concentration for water potability. Once long-duration habitation begins on the lunar surface, a more extensive environmental monitoring plan will be initiated. However, limited in-flight volume constraints and the inability to return samples to Earth will increase the need for crew capabilities in determining the nature of contamination problems and method of remediation. In addition, limited shelf life of current monitoring hardware consumables and limited capabilities to dispose of biohazardous trash will drive flight hardware toward non-culture based methodologies, such as hardware that rapidly distinguishes biotic versus abiotic surface contamination. As missions progress to Mars, environmental systems will depend heavily on regeneration of air and water and biological waste remediation and regeneration systems, increasing the need for environmental monitoring. Almost complete crew autonomy will be needed for assessment and remediation of contamination problems. Cabin capacity will be limited; thus, current methods of microbial monitoring will be inadequate. Future methodology must limit consumables, and these consumables must have a shelf life of over three years. In summary, missions to the moon and Mars will require a practical design that prudently uses available resources to mitigate microbial risk to the crew.
Insertion of GaAs MMICs into EW systems
NASA Astrophysics Data System (ADS)
Schineller, E. R.; Pospishil, A.; Grzyb, J.
1989-09-01
Development activities on a microwave/mm-wave monolithic IC (MIMIC) program are described, as well as the methodology for inserting these GaAs IC chips into several EW systems. The generic EW chip set developed on the MIMIC program consists of 23 broadband chip types, including amplifiers, oscillators, mixers, switches, variable attenuators, power dividers, and power combiners. These chips are being designed for fabrication using the multifunction self-aligned gate process. The benefits from GaAs IC insertion are quantified by a comparison of hardware units fabricated with existing MIC and digital ECL technology and the same units manufactured with monolithic technology. It is found that major improvements in cost, reliability, size, weight, and performance can be realized. Examples illustrating the methodology for technology insertion are presented.
Hardware acceleration and verification of systems designed with hardware description languages (HDL)
NASA Astrophysics Data System (ADS)
Wisniewski, Remigiusz; Wegrzyn, Marek
2005-02-01
Hardware description languages (HDLs) allow ever larger designs to be created. The size of prototyped systems now very often exceeds a million gates, so the verification process for such designs takes several hours or even days. This problem can be addressed by hardware acceleration of simulation.
Software for Probabilistic Risk Reduction
NASA Technical Reports Server (NTRS)
Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto
2004-01-01
A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.
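A rough flavor of the risk-based planning idea can be conveyed with a small optimization over mitigation choices: each candidate action reduces the probability of some failure modes at a cost, and the plan that minimizes expected loss within a budget is selected. The failure modes, probability reductions, costs, and exhaustive search below are purely illustrative assumptions and are not the DDP tool's actual model or data.

```python
from itertools import combinations

# Hypothetical failure modes: name -> (probability, impact if it occurs)
failure_modes = {"timing_bug": (0.30, 100.0), "radiation_upset": (0.10, 500.0)}
# Hypothetical mitigations: name -> (cost, {mode: probability reduction factor})
mitigations = {
    "code_review":   (10.0, {"timing_bug": 0.5}),
    "rad_hard_part": (40.0, {"radiation_upset": 0.2}),
}

def expected_loss(chosen):
    total = 0.0
    for mode, (p, impact) in failure_modes.items():
        for m in chosen:
            p *= mitigations[m][1].get(mode, 1.0)   # apply each chosen mitigation
        total += p * impact
    return total

def best_plan(budget):
    names = list(mitigations)
    options = [set(c) for r in range(len(names) + 1)
               for c in combinations(names, r)
               if sum(mitigations[m][0] for m in c) <= budget]
    return min(options, key=expected_loss)

plan = best_plan(50.0)
print(plan, round(expected_loss(plan), 1))   # both mitigations fit; loss drops 80 -> 25
```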
Ball, David A; Lux, Matthew W; Graef, Russell R; Peterson, Matthew W; Valenti, Jane D; Dileo, John; Peccoud, Jean
2010-01-01
The concept of co-design is common in engineering, where it is necessary, for example, to determine the optimal partitioning between the hardware and software implementation of a system's features. Here we propose to adapt co-design methodologies for synthetic biology. As a test case, we have designed an environmental sensing device that detects the presence of three chemicals and returns an output only if at least two of the three chemicals are present. We show that the logical operations can be implemented in three different design domains: (1) the transcriptional domain using synthetically designed hybrid promoters, (2) the protein domain using bi-molecular fluorescence complementation, and (3) the fluorescence domain using spectral unmixing and relying on electronic processing. We discuss how these heterogeneous design strategies could be formalized to develop co-design algorithms capable of identifying optimal designs meeting user specifications.
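The device's decision function itself is simply 2-of-3 majority logic, shown below independently of which design domain (promoter, protein, or fluorescence/electronic) realizes it. The chemical names in the example are placeholders.

```python
# 2-of-3 detection logic: output is true only if at least two chemicals are present.
def at_least_two_of_three(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

# Hypothetical readings for three analytes.
present = {"chem_A": True, "chem_B": True, "chem_C": False}
assert at_least_two_of_three(*present.values())        # two present -> report
assert not at_least_two_of_three(True, False, False)   # one present -> stay silent
```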
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
NASA Technical Reports Server (NTRS)
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives to free the main processor from work and improve overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections, and data burst transfers have been used.
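For reference, the operation being accelerated has a very simple software form: the per-pixel Euclidean distance between each spectral vector and a target signature. The band count and random data below are illustrative; the FPGA architectures described above compute the same quantity in fixed-point, streaming form.

```python
import numpy as np

def msed(cube, signature):
    """Multi-spectral Euclidean distance.
    cube: (rows, cols, bands) image; signature: (bands,) target spectrum."""
    diff = cube - signature                 # broadcast over every pixel
    return np.sqrt((diff * diff).sum(axis=-1))

cube = np.random.rand(4, 4, 8)              # tiny 8-band test image
sig = np.random.rand(8)
print(msed(cube, sig).shape)                # (4, 4) distance map
```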
DDL: Digital systems design language
NASA Technical Reports Server (NTRS)
Shival, S. G.
1980-01-01
Hardware description languages are valuable tools in such applications as hardware design, system documentation, and logic design training. DDL is a convenient medium for inputting design details into a hardware-design automation system. It is suitable for describing digital systems at the gate, register transfer, and major combinational block levels.
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.
2016-01-01
This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.
2016-01-01
This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.
Evolving Reliability and Maintainability Allocations for NASA Ground Systems
NASA Technical Reports Server (NTRS)
Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.
2016-01-01
This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.
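One common way to perform such an allocation, sketched below, apportions a system-level failure-rate budget to series subsystems in proportion to complexity weights, then recomputes as designs mature and hardware data arrive. The subsystem names, weights, and reliability target are invented for illustration and are not GSDO data or necessarily the weighting scheme the GSDO team used.

```python
import math

def allocate(system_reliability_target, mission_hours, weights):
    """Apportion the system failure-rate budget (exponential model) among
    series subsystems in proportion to complexity weights."""
    lam_sys = -math.log(system_reliability_target) / mission_hours
    total = sum(weights.values())
    return {name: lam_sys * w / total for name, w in weights.items()}

# Hypothetical subsystems and weights; 0.99 reliability over a 100-hour window.
weights = {"power": 3.0, "fluids": 2.0, "comm": 1.0}
alloc = allocate(0.99, 100.0, weights)          # allocated failure rates per hour
print({k: f"{v:.2e}" for k, v in alloc.items()})
```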
NASA Technical Reports Server (NTRS)
Newman, Brett; Yu, Si-bok; Rhew, Ray D. (Technical Monitor)
2003-01-01
Modern experimental and test activities demand innovative and adaptable procedures to maximize data content and quality while working within severely constrained budgetary and facility resource environments. This report describes the development of a high-accuracy angular measurement capability for NASA Langley Research Center hypersonic wind tunnel facilities to overcome these deficiencies. Specifically, the utilization of micro-electro-mechanical sensors, including accelerometers and gyros, coupled with software-driven data acquisition hardware, integrated within a prototype measurement system, is considered. The development methodology addresses basic design requirements formulated from wind tunnel facility constraints and current operating procedures, as well as engineering and scientific test objectives. A description of the analytical framework governing the relationships between time-dependent multi-axis acceleration and angular rate sensor data and the desired three-dimensional Eulerian angular state of the test model is given. Calibration procedures for identifying and estimating critical parameters in the sensor hardware are also addressed.
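The kinematic core of such a framework is the standard relationship between body-axis angular rates and Euler angle rates, sketched below for a 3-2-1 (yaw-pitch-roll) sequence. Sensor bias handling and the accelerometer-based gravity correction that a real system would apply are omitted; the function names and simple forward-Euler propagation are assumptions for illustration only.

```python
import numpy as np

def euler_rates(phi, theta, p, q, r):
    """3-2-1 Euler kinematics: angle rates from body rates p, q, r (rad/s)."""
    phi_dot = p + np.tan(theta) * (q * np.sin(phi) + r * np.cos(phi))
    theta_dot = q * np.cos(phi) - r * np.sin(phi)
    psi_dot = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
    return phi_dot, theta_dot, psi_dot

def propagate(angles, rates, dt):
    """One forward-Euler step of the model attitude (phi, theta, psi)."""
    phi, theta, _ = angles
    derivs = euler_rates(phi, theta, *rates)
    return tuple(a + d * dt for a, d in zip(angles, derivs))

print(propagate((0.0, 0.1, 0.0), (0.01, 0.02, 0.0), dt=0.001))
```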
Data Quality Assurance for Supersonic Jet Noise Measurements
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Henderson, Brenda S.; Bridges, James E.
2010-01-01
The noise created by a supersonic aircraft is a primary concern in the design of future high-speed planes. The jet noise reduction technologies required on these aircraft will be developed using scale-models mounted to experimental jet rigs designed to simulate the exhaust gases from a full-scale jet engine. The jet noise data collected in these experiments must accurately predict the noise levels produced by the full-scale hardware in order to be a useful development tool. A methodology has been adopted at the NASA Glenn Research Center s Aero-Acoustic Propulsion Laboratory to insure the quality of the supersonic jet noise data acquired from the facility s High Flow Jet Exit Rig so that it can be used to develop future nozzle technologies that reduce supersonic jet noise. The methodology relies on mitigating extraneous noise sources, examining the impact of measurement location on the acoustic results, and investigating the facility independence of the measurements. The methodology is documented here as a basis for validating future improvements and its limitations are noted so that they do not affect the data analysis. Maintaining a high quality jet noise laboratory is an ongoing process. By carefully examining the data produced and continually following this methodology, data quality can be maintained and improved over time.
An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics
Gupta, Priti; Markan, C. M.
2014-01-01
The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real world applications. In this paper we use a novel time-staggered Winner Take All circuit that exploits the adaptation dynamics of floating gate transistors to model an adaptive cortical cell that demonstrates Orientation Selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling different oriented bars, becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, response to spatial frequency and periodic patterns, reveal close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively, exhibiting cluster formation and making way for adaptively building orientation selective maps in silicon. PMID:24765062
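A software abstraction of the competitive-learning behavior described above is sketched here: the winning unit's weight vector is nudged toward the input pattern, so repeated presentation of one oriented bar makes that unit selective to it. This is only an algorithmic analogue; the paper's circuit realizes the adaptation in floating-gate transistor dynamics rather than explicit weight arithmetic, and the unit count, learning rate, and patterns below are assumptions.

```python
import numpy as np

def train(patterns, n_units=4, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning over a set of input patterns."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, patterns.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patterns:
            winner = np.argmax(w @ x)             # winner-take-all selection
            w[winner] += lr * (x - w[winner])      # move winner toward the input
            w[winner] /= np.linalg.norm(w[winner]) # keep weights normalized
    return w

# Toy "oriented bar" patterns on a 2x2 grid, flattened to length-4 vectors.
bars = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]], float)
print(train(bars).round(2))   # each unit ends up tuned to one orientation
```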
Processor design optimization methodology for synthetic vision systems
NASA Astrophysics Data System (ADS)
Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.
1997-06-01
Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.
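For readers unfamiliar with the optimization approach mentioned here, the following sketch runs a bare-bones genetic algorithm over a made-up architecture parameterization (processor count, memory size, clock rate) with an invented cost/throughput fitness. The actual tool, parameters, and merit function used in the paper are not described in the abstract and are not reproduced here.

```python
import random

random.seed(1)

# Hypothetical architecture parameters: (num_processors, mem_mb, clock_mhz)
BOUNDS = [(1, 8), (16, 256), (50, 400)]

def fitness(arch):
    """Illustrative merit: reward throughput, penalize cost; not the paper's actual model."""
    procs, mem, clk = arch
    throughput = procs * clk * min(1.0, mem / 128.0)
    cost = 50 * procs + 0.5 * mem + 0.2 * clk
    return throughput - 2.0 * cost

def random_arch():
    return [random.randint(lo, hi) for lo, hi in BOUNDS]

def mutate(arch, rate=0.3):
    return [random.randint(lo, hi) if random.random() < rate else gene
            for gene, (lo, hi) in zip(arch, BOUNDS)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [random_arch() for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

print("best architecture found:", max(pop, key=fitness))
```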
Ubiquitous Computing for Remote Cardiac Patient Monitoring: A Survey
Kumar, Sunil; Kambhatla, Kashyap; Hu, Fei; Lifson, Mark; Xiao, Yang
2008-01-01
New wireless technologies, such as wireless LAN and sensor networks, for telecardiology purposes give new possibilities for monitoring vital parameters with wearable biomedical sensors, and give patients the freedom to be mobile and still be under continuous monitoring, thereby improving the quality of patient care. This paper will detail the architecture and quality-of-service (QoS) characteristics of integrated wireless telecardiology platforms. It will also discuss the current promising hardware/software platforms for wireless cardiac monitoring. The design methodology and challenges are provided for realistic implementation. PMID:18604301
Welding torch trajectory generation for hull joining using autonomous welding mobile robot
NASA Astrophysics Data System (ADS)
Hascoet, J. Y.; Hamilton, K.; Carabin, G.; Rauch, M.; Alonso, M.; Ares, E.
2012-04-01
Shipbuilding processes involve highly dangerous manual welding operations. Welding of ship hulls presents a hazardous environment for workers. This paper describes a new robotic system, developed by the SHIPWELD consortium, that moves autonomously on the hull and automatically executes the required welding processes. Specific focus is placed on the trajectory control of such a system, which forms the basis for the discussion in this paper. It includes a description of the robotic hardware design as well as the methodology used to establish the torch trajectory control.
Ubiquitous computing for remote cardiac patient monitoring: a survey.
Kumar, Sunil; Kambhatla, Kashyap; Hu, Fei; Lifson, Mark; Xiao, Yang
2008-01-01
New wireless technologies, such as wireless LAN and sensor networks, for telecardiology purposes give new possibilities for monitoring vital parameters with wearable biomedical sensors, and give patients the freedom to be mobile and still be under continuous monitoring, thereby improving the quality of patient care. This paper will detail the architecture and quality-of-service (QoS) characteristics of integrated wireless telecardiology platforms. It will also discuss the current promising hardware/software platforms for wireless cardiac monitoring. The design methodology and challenges are provided for realistic implementation.
A tensor approach to modeling of nonhomogeneous nonlinear systems
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Sain, M.
1980-01-01
Model following control methodology plays a key role in numerous application areas. Cases in point include flight control systems and gas turbine engine control systems. Typical uses of such a design strategy involve the determination of nonlinear models which generate requested control and response trajectories for various commands. Linear multivariable techniques provide trim about these motions; and protection logic is added to secure the hardware from excursions beyond the specification range. This paper reports upon experience in developing a general class of such nonlinear models based upon the idea of the algebraic tensor product.
Open-source hardware for medical devices
2016-01-01
Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528
Open-source hardware for medical devices.
Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold
2016-04-01
Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis "top-down" approach and a functional fault-modes-and-effects analysis "bottom-up" approach. Via this process, the mitigation and recovery strategies per Fault Containment Region scope (width versus depth) the FP architecture.
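A rough feel for the Monitor/Symptom/Alarm/Response bookkeeping can be had from the toy sketch below. The table contents, persistence gate, and tier selection rule are invented for illustration; only the vocabulary (RawOpinion, RaiseSymptom, ClearSymptom, tiered responses) follows the abstract.

```python
from collections import defaultdict

# Illustrative tables: which symptoms a monitor can raise, and which tiered
# responses handle an alarm (names follow the abstract; the mappings are made up).
MONITOR_TO_SYMPTOMS = {"wheel_speed_monitor": ["wheel_speed_low"]}
SYMPTOM_TO_ALARM = {"wheel_speed_low": "reaction_wheel_fault"}
ALARM_TO_RESPONSES = {
    "reaction_wheel_fault": ["retry_wheel_cmd",        # tier 1: local response
                             "swap_to_backup_wheel",   # tier 2
                             "enter_safe_mode"],       # tier 3: system-level response
}

symptom_counts = defaultdict(int)
PERSISTENCE = 3   # simple timing gate: a symptom must persist before it becomes an alarm

def evaluate_monitor(name, raw_opinion):
    """RawOpinion -> Opinion -> Symptom -> Alarm -> tiered Response (illustrative only)."""
    opinion = "unacceptable" if raw_opinion > 1.0 else "acceptable"
    if opinion != "unacceptable":
        for s in MONITOR_TO_SYMPTOMS[name]:
            symptom_counts[s] = 0          # ClearSymptom
        return None
    for s in MONITOR_TO_SYMPTOMS[name]:    # RaiseSymptom
        symptom_counts[s] += 1
        if symptom_counts[s] >= PERSISTENCE:
            alarm = SYMPTOM_TO_ALARM[s]
            tier = min(symptom_counts[s] - PERSISTENCE, len(ALARM_TO_RESPONSES[alarm]) - 1)
            return ALARM_TO_RESPONSES[alarm][tier]
    return None

# A persistent out-of-range reading escalates through the response tiers.
for step, reading in enumerate([0.2, 1.5, 1.6, 1.7, 1.8, 1.9]):
    print(step, evaluate_monitor("wheel_speed_monitor", reading))
```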
Long-Term Durability Analysis of a 100,000+ Hr Stirling Power Convertor Heater Head
NASA Technical Reports Server (NTRS)
Bartolotta, Paul A.; Bowman, Randy R.; Krause, David L.; Halford, Gary R.
2000-01-01
DOE and NASA have identified Stirling Radioisotope Power Systems (SRPS) as the power supply for deep space exploration missions, the Europa Orbiter and Solar Probe. As a part of this effort, NASA has initiated a long-term durability project for critical hot-section components of the Stirling power convertor to qualify flight hardware. This project will develop a life prediction methodology that utilizes short-term (t < 20,000 hr) test data to verify long-term (t > 100,000 hr) design life. The project consists of generating a materials database for the specific heat of the alloy, evaluation of critical hermetically sealed joints, life model characterization, and model verification. This paper will describe the qualification methodology being developed and provide a status for this effort.
Eshraghian, Jason K; Baek, Seungbum; Kim, Jun-Ho; Iannella, Nicolangelo; Cho, Kyoungrok; Goo, Yong Sook; Iu, Herbert H C; Kang, Sung-Mo; Eshraghian, Kamran
2018-02-13
Existing computational models of the retina often compromise between biophysical accuracy and a hardware-adaptable methodology of implementation. When compared to the current modes of vision restoration, algorithmic models often contain a greater correlation between stimuli and the affected neural network, but lack physical hardware practicality. Thus, if the present processing methods are adapted to complement very-large-scale circuit design techniques, it is anticipated that this will engender a more feasible approach to the physical construction of the artificial retina. The computational model presented in this research serves to provide a fast and accurate predictive model of the retina, a deeper understanding of neural responses to visual stimulation, and an architecture that can realistically be transformed into a hardware device. Traditionally, implicit (or semi-implicit) ordinary differential equation (ODE) solvers have been used for optimal speed and accuracy. We present a novel approach that requires the effective integration of different dynamical time scales within a unified framework of neural responses, where the rod, cone, amacrine, bipolar, and ganglion cells correspond to the implemented pathways. Furthermore, we show that adopting numerical integration can both accelerate retinal pathway simulations by more than 50% when compared with traditional ODE solvers in some cases, and prove to be a more realizable solution for the hardware implementation of predictive retinal models.
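To make the time-scale point concrete, here is a minimal sketch that integrates an invented two-time-scale cascade (a fast stage driving a slow stage) with fixed explicit steps; the actual retinal pathway equations, cell types, and solver choices of the paper are not reproduced.

```python
import numpy as np

def simulate(stimulus, dt=1e-4, t_end=0.5):
    """Toy two-time-scale cascade: a fast 'photoreceptor-like' stage drives a slow
    'downstream' stage; both are advanced with explicit (forward Euler) steps."""
    tau_fast, tau_slow = 5e-3, 50e-3          # illustrative time constants [s]
    fast, slow = 0.0, 0.0
    trace = []
    for t in np.arange(0.0, t_end, dt):
        drive = stimulus(t)
        fast += dt * (-fast + drive) / tau_fast
        slow += dt * (-slow + fast) / tau_slow
        trace.append(slow)
    return np.array(trace)

# A brief flash of light between 100 ms and 200 ms
flash = lambda t: 1.0 if 0.1 <= t < 0.2 else 0.0
response = simulate(flash)
print("peak of slow-stage response:", response.max())
```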
NASA Astrophysics Data System (ADS)
Moffitt, Blake Almy
Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is capable with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development with fuel cell gravimetric and volumetric power density nearly doubling every 2--3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all electric aircraft subsystems. In addition, fuel cell design and performance data is closely protected which makes validation difficult and uncertainty significant. Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are problematic for design space exploration. To begin addressing the current gaps in fuel cell aircraft development, a methodology has been developed to explore and characterize the near-term performance of fuel cell powered UAVs. The first step of the methodology is the development of a valid MDA. This is accomplished by using propagated uncertainty estimates to guide the decomposition of a MDA into key contributing analyses (CAs) that can be individually refined and validated to increase the overall accuracy of the MDA. To assist in MDA development, a flexible framework for simultaneously solving the CAs is specified. This enables the MDA to be easily adapted to changes in technology and the changes in data that occur throughout a design process. Various CAs that model a polymer electrolyte membrane fuel cell (PEMFC) UAV are developed, validated, and shown to be in agreement with hardware-in-the-loop simulations of a fully developed fuel cell propulsion system. After creating a valid MDA, the final step of the methodology is the synthesis of the MDA with an uncertainty propagation analysis, an optimization routine, and a chance constrained problem formulation. This synthesis allows an efficient calculation of the probabilistic constraint boundaries and Pareto frontiers that will govern the design space and influence design decisions relating to optimization and uncertainty mitigation. A key element of the methodology is uncertainty propagation. 
The methodology uses Systems Sensitivity Analysis (SSA) to estimate the uncertainty of key performance metrics due to uncertainties in design variables and uncertainties in the accuracy of the CAs. A summary of SSA is given, and key rules for properly decomposing an MDA for use with SSA are provided. Verification of SSA uncertainty estimates via Monte Carlo simulations is provided for both an example problem and a detailed MDA of a fuel cell UAV. Implementation of the methodology was performed on a small fuel cell UAV designed to carry a 2.2 kg payload with 24 hours of endurance. Uncertainty distributions for both the design variables and the CAs were estimated based on experimental results and were found to dominate the design space. To reduce uncertainty and test the flexibility of the MDA framework, CAs were replaced with either empirical or semi-empirical relationships during the optimization process. The final design was validated via a hardware-in-the-loop simulation. Finally, the fuel cell UAV probabilistic design space was studied. A graphical representation of the design space was generated, and the optima due to deterministic and probabilistic constraints were identified. The methodology was used to identify Pareto frontiers of the design space, which were shown on contour plots of the design space. Unanticipated discontinuities of the Pareto fronts were observed as different constraints became active, providing useful information on which to base design and development decisions.
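As a generic illustration of checking a first-order (sensitivity-based) uncertainty estimate against Monte Carlo sampling, the sketch below propagates assumed input uncertainties through a toy endurance model. The model, nominal values, and standard deviations are invented and are not those of the UAV study.

```python
import numpy as np

rng = np.random.default_rng(42)

def endurance_hr(fuel_kg, efficiency, cruise_power_w):
    """Toy performance metric (not the paper's MDA): flight hours from stored hydrogen,
    assuming roughly 33 kWh/kg lower heating value."""
    usable_energy_wh = fuel_kg * 33_000 * efficiency
    return usable_energy_wh / cruise_power_w

# Nominal values and assumed standard deviations for the uncertain inputs
nominal = np.array([1.0, 0.45, 180.0])
sigma   = np.array([0.05, 0.03, 15.0])

# Monte Carlo propagation of the input uncertainty
samples = rng.normal(nominal, sigma, size=(20_000, 3))
mc = np.array([endurance_hr(*s) for s in samples])

# First-order estimate from finite-difference sensitivities, for comparison
grad = np.zeros(3)
for i in range(3):
    step = np.zeros(3)
    step[i] = 1e-3 * nominal[i]
    grad[i] = (endurance_hr(*(nominal + step)) - endurance_hr(*(nominal - step))) / (2 * step[i])
first_order_sigma = np.sqrt(np.sum((grad * sigma) ** 2))

print(f"Monte Carlo:  mean = {mc.mean():.1f} h, sigma = {mc.std():.2f} h")
print(f"First-order sigma estimate: {first_order_sigma:.2f} h")
```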
Comparison of Requirements for Composite Structures for Aircraft and Space Applications
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Elliott, Kenny B.; Hampton, Roy W.; Knight, Norman F., Jr.; Aggarwal, Pravin; Engelstad, Stephen P.; Chang, James B.
2010-01-01
In this paper, the aircraft and space vehicle requirements for composite structures are compared. It is a valuable exercise to study composite structural design approaches used in the airframe industry, and to adopt methodology that is applicable for space vehicles. The missions, environments, analysis methods, analysis validation approaches, testing programs, build quantities, inspection, and maintenance procedures used by the airframe industry, in general, are not transferable to spaceflight hardware. Therefore, while the application of composite design approaches from other industries is appealing, many aspects cannot be directly utilized. Nevertheless, experiences and research for composite aircraft structures may be of use in unexpected arenas as space exploration technology develops, and so continued technology exchanges are encouraged.
General Principles for Brain Design
NASA Astrophysics Data System (ADS)
Josephson, Brian D.
2006-06-01
The task of understanding how the brain works has met with only limited success since important design concepts are not as yet incorporated in the analysis. Relevant concepts can be uncovered by studying the powerful methodologies that have evolved in the context of computer programming, raising the question of how the concepts involved there can be realised in neural hardware. Insights can be gained in regard to such issues through the study of the role played by models and representation. These insights lead on to an appreciation of the mechanisms underlying subtle capacities such as those concerned with the use of language. A precise, essentially mathematical account of such capacities is in prospect for the future.
Verification Failures: What to Do When Things Go Wrong
NASA Astrophysics Data System (ADS)
Bertacco, Valeria
Every integrated circuit is released with latent bugs. The damage and risk implied by an escaped bug range from almost imperceptible to potential tragedy; unfortunately, it is impossible to discern where within this range a bug falls before it has been exposed and analyzed. While the past few decades have witnessed significant efforts to improve verification methodology for hardware systems, these efforts have been far outstripped by the massive complexity of modern digital designs, leading to product releases for which an ever smaller fraction of the system's states has been verified. The news of escaped bugs in large-market designs and/or safety-critical domains is alarming because of safety and cost implications (due to replacements, lawsuits, etc.).
Statistical Design Model (SDM) of satellite thermal control subsystem
NASA Astrophysics Data System (ADS)
Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi
2016-07-01
Satellite thermal control is the subsystem whose main task is keeping the satellite components within their survival and operating temperature ranges. The capability of the thermal control subsystem plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is part of satellite design. On the other hand, due to the lack of information released by companies and designers, this fundamental subsystem still lacks a well-defined design process. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem by using the SDM design method, which analyses statistical data with a particular procedure. To implement the SDM method, a complete database is required. Therefore, we first collect spacecraft data and create a database, then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. To this end, the thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is surveyed. Next, different statistical models are presented and briefly compared. Finally, the paper's particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified through a case study: comparisons between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results show the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware
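A minimal example of the kind of statistical design model described here, a power-law fit of thermal-control-subsystem mass against total satellite mass, is sketched below with invented data; the paper's actual database, inputs (mass, mission, lifetime), and extracted models are not reproduced.

```python
import numpy as np

# Illustrative (invented) database: total satellite mass [kg] vs. thermal subsystem mass [kg]
sat_mass = np.array([120, 250, 480, 900, 1500, 2600], dtype=float)
tcs_mass = np.array([4.1, 7.8, 13.5, 22.0, 33.0, 52.0])

# Fit a power law  m_tcs = a * m_sat^b  by linear regression in log-log space
b, log_a = np.polyfit(np.log(sat_mass), np.log(tcs_mass), 1)
a = np.exp(log_a)
print(f"statistical model: m_tcs = {a:.3f} * m_sat^{b:.3f}")

# Use the model in conceptual design for a hypothetical 700 kg satellite
print("estimated TCS mass for 700 kg satellite:", round(a * 700 ** b, 1), "kg")
```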
Minimum Control Requirements for Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Boulange, Richard; Jones, Harry
2002-01-01
Advanced control technologies are not necessary for the safe, reliable and continuous operation of Advanced Life Support (ALS) systems. ALS systems can and are adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these systems are as reliable as the hardware they control, there is no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous safe reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".
Adaptive Neuromorphic Circuit for Stereoscopic Disparity Using Ocular Dominance Map
Sharma, Sheena; Gupta, Priti; Markan, C. M.
2016-01-01
Stereopsis or depth perception is a critical aspect of information processing in the brain and is computed from the positional shift or disparity between the images seen by the two eyes. Various algorithms and their hardware implementation that compute disparity in real time have been proposed; however, most of them compute disparity through complex mathematical calculations that are difficult to realize in hardware and are biologically unrealistic. The brain presumably uses simpler methods to extract depth information from the environment and hence newer methodologies that could perform stereopsis with brain like elegance need to be explored. This paper proposes an innovative aVLSI design that leverages the columnar organization of ocular dominance in the brain and uses time-staggered Winner Take All (ts-WTA) to adaptively create disparity tuned cells. Physiological findings support the presence of disparity cells in the visual cortex and show that these cells surface as a result of binocular stimulation received after birth. Therefore, creating in hardware cells that can learn different disparities with experience not only is novel but also is biologically more realistic. These disparity cells, when allowed to interact diffusively on a larger scale, can be used to adaptively create stable topological disparity maps in silicon. PMID:27243029
Space biology initiative program definition review. Trade study 4: Design modularity and commonality
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The relative cost impacts (up or down) of developing Space Biology hardware using design modularity and commonality are studied. Recommendations are provided for how hardware development should be accomplished to meet optimum design modularity requirements for Life Science investigation hardware. In addition, the relative cost impacts of implementing commonality across all Space Biology hardware are defined. Cost analysis and supporting recommendations for levels of modularity and commonality are presented. A mathematical or statistical cost analysis method is provided, capable of supporting the development of production design modularity and commonality impacts for parametric cost analysis.
Hardware design for the Autonomous Visibility Monitoring (AVM) observatory
NASA Technical Reports Server (NTRS)
Cowles, K.
1993-01-01
The hardware for the three Autonomous Visibility Monitoring (AVM) observatories was redesigned. Changes in hardware design include electronics components, weather sensors, and the telescope drive system. Operation of the new hardware is discussed, as well as some of its features. The redesign will allow reliable automated operation.
Using object-oriented analysis to design a multi-mission ground data system
NASA Technical Reports Server (NTRS)
Shames, Peter
1995-01-01
This paper describes an analytical approach and descriptive methodology that is adapted from Object-Oriented Analysis (OOA) techniques. The technique is described and then used to communicate key issues of system logical architecture. The essence of the approach is to limit the analysis to only service objects, with the idea of providing a direct mapping from the design to a client-server implementation. Key perspectives on the system, such as user interaction, data flow and management, service interfaces, hardware configuration, and system and data integrity are covered. A significant advantage of this service-oriented approach is that it permits mapping all of these different perspectives on the system onto a single common substrate. This services substrate is readily represented diagrammatically, thus making details of the overall design much more accessible.
NASA Technical Reports Server (NTRS)
Hanna, Stephen G.; Jones, David L.; Creech, Stephen D.; Lawrence, Thomas D.
2012-01-01
In support of the National Aeronautics and Space Administration's (NASA) Human Exploration and Operations Mission Directorate (HEOMD), the Space Launch System (SLS) is being designed for safe, affordable, and sustainable human and scientific exploration missions beyond Earth's orbit (BEO). The SLS Team is tasked with developing a system capable of safely and repeatedly lofting a new fleet of spaceflight vehicles beyond Earth orbit. The Cryogenic Propulsion Stage (CPS) is a key enabler for evolving the SLS capability for BEO missions. This paper reports on the methodology and initial recommendations relative to the CPS, giving a brief retrospective of early studies on this promising propulsion hardware. This paper provides an overview of the requirements development and CPS configuration in support of NASA's multiple Design Reference Missions (DRMs).
Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven; Ring, Robert
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide one example for assigning environmental factor uncertainty when translating between operating environments for microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and the software application, and it builds on parameterized hardware description research. The establishment of a stable link between hardware and software, as the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.
New Ways Of Doing Business (NWODB) cost quantification analysis
NASA Technical Reports Server (NTRS)
Hamaker, Joseph W.; Rosmait, Russell L.
1992-01-01
The cost of designing, producing, and operating typical aerospace flight hardware is necessarily more expensive than most other human endeavors. Because of the more stringent environment of space, hardware designed to operate there will probably always be more expensive than similar hardware which is designed for less taxing environments. It is the thesis of this study that there are very significant improvements that can be made in the cost of aerospace flight hardware.
Hardware description languages
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.
1994-01-01
Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top-down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASICs). However, VHDL is rapidly gaining in popularity.
NASA Technical Reports Server (NTRS)
1990-01-01
This hardware catalog covers that hardware proposed under the Biomedical Monitoring and Countermeasures Development Program supported by the Johnson Space Center. The hardware items are listed separately by item, and are in alphabetical order. Each hardware item specification consists of four pages. The first page describes background information with an illustration, definition and a history/design status. The second page identifies the general specifications, performance, rack interface requirements, problems, issues, concerns, physical description, and functional description. The level of hardware design reliability is also identified under the maintainability and reliability category. The third page specifies the mechanical design guidelines and assumptions. Described are the material types and weights, modules, and construction methods. Also described is an estimation of percentage of construction which utilizes a particular method, and the percentage of required new mechanical design is documented. The fourth page analyzes the electronics, the scope of design effort, and the software requirements. Electronics are described by percentages of component types and new design. The design effort, as well as, the software requirements are identified and categorized.
COTD: Reference-free Hardware Trojan Detection in Gate-level Netlist
2017-03-01
...modern designs, the constraint of the time-to-market window, and the cost restriction of the final product highly drive the horizontal design process. ... Third-party intellectual properties (3PIPs) are widely used, while they expose a design to hardware Trojans (HTs) that may tamper with the design and ... activated. Some works have investigated hardware Trojans in early design stages, and several techniques have been proposed to study the switching ...
Improvements in analysis techniques for segmented mirror arrays
NASA Astrophysics Data System (ADS)
Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.
2016-08-01
The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement vector deformation of collections of points allows analysis of mechanical disturbance predictions at assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in assembly of the hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Pratt and Whitney Overview and Advanced Health Management Program
NASA Technical Reports Server (NTRS)
Inabinett, Calvin
2008-01-01
Hardware Development Activity: Design and test custom multi-layer circuit boards for use in the Fault Emulation Unit; logic design performed using VHDL; lay out the power system for lab hardware; work lab issues with software developers and software testers; interface with Engine Systems personnel on the performance of engine hardware components; perform off-nominal testing with new engine hardware.
Universal Verification Methodology Based Register Test Automation Flow.
Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu
2016-05-01
In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate the models in a test-bench environment because it requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe the register specification in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time to verify the functionality of registers.
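A highly simplified sketch of the spreadsheet-to-IP-XACT step is shown below: register rows in CSV form are converted to a skeletal addressBlock/register XML fragment. The column names and the XML subset are assumptions for illustration; a production flow would target the full IEEE 1685 schema and feed the result to a commercial register-model generator.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Spreadsheet-style register description (columns are assumed, not the paper's template)
SHEET = """name,offset,width,access,reset
CTRL,0x00,32,read-write,0x00000000
STATUS,0x04,32,read-only,0x00000001
IRQ_EN,0x08,32,read-write,0x00000000
"""

def sheet_to_ipxact(csv_text, block_name="regs"):
    """Translate spreadsheet rows into a minimal IP-XACT-style addressBlock element."""
    ns = "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009"
    ET.register_namespace("spirit", ns)
    block = ET.Element(f"{{{ns}}}addressBlock")
    ET.SubElement(block, f"{{{ns}}}name").text = block_name
    for row in csv.DictReader(io.StringIO(csv_text)):
        reg = ET.SubElement(block, f"{{{ns}}}register")
        ET.SubElement(reg, f"{{{ns}}}name").text = row["name"]
        ET.SubElement(reg, f"{{{ns}}}addressOffset").text = row["offset"]
        ET.SubElement(reg, f"{{{ns}}}size").text = row["width"]
        ET.SubElement(reg, f"{{{ns}}}access").text = row["access"]
        reset = ET.SubElement(reg, f"{{{ns}}}reset")
        ET.SubElement(reset, f"{{{ns}}}value").text = row["reset"]
    return ET.tostring(block, encoding="unicode")

print(sheet_to_ipxact(SHEET))
```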
Comparison of Requirements for Composite Structures for Aircraft and Space Applications
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Elliot, Kenny B.; Hampton, Roy W.; Knight, Norman F., Jr.; Aggarwal, Pravin; Engelstad, Stephen P.; Chang, James B.
2010-01-01
In this report, the aircraft and space vehicle requirements for composite structures are compared. It is a valuable exercise to study composite structural design approaches used in the airframe industry and to adopt methodology that is applicable for space vehicles. The missions, environments, analysis methods, analysis validation approaches, testing programs, build quantities, inspection, and maintenance procedures used by the airframe industry, in general, are not transferable to spaceflight hardware. Therefore, while the application of composite design approaches from aircraft and other industries is appealing, many aspects cannot be directly utilized. Nevertheless, experiences and research for composite aircraft structures may be of use in unexpected arenas as space exploration technology develops, and so continued technology exchanges are encouraged.
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.
2014-01-01
This is the presentation to follow the conference paper of the same name. The adaptive compliant trailing edge (ACTE) flap experiment's safety of flight requires that the flap-to-wing interface loads be sensed and monitored in real time to ensure that the wing structural load limits are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces will be monitored, and each contains four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads for the ACTE experiment.
NASA Technical Reports Server (NTRS)
Gwaltney, David A.; Ferguson, Michael I.
2003-01-01
Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a second generation Field Programmable Transistor Array (FPTA2). The performance of an evolved controller is compared to that of a conventional proportional-integral (PI) controller. It is shown that hardware evolution is able to create a compact design that provides good performance, while using considerably less functional electronic components than the conventional design. Additionally, the use of hardware evolution to provide fault tolerance by reconfiguring the design is explored. Experimental results are presented showing that significant recovery of capability can be made in the face of damaging induced faults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, John
Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.
Supportability Issues and Approaches for Exploration Missions
NASA Technical Reports Server (NTRS)
Watson, J. K.; Ivins, M. S.; Cunningham, R. A.
2006-01-01
Maintaining and repairing spacecraft systems hardware to achieve required levels of operational availability during long-duration exploration missions will be challenged by limited resupply opportunities, constraints on the mass and volume available for spares and other maintenance-related provisions, and extended communications times. These factors will force the adoption of new approaches to the integrated logistics support of spacecraft systems hardware. For missions beyond the Moon, all spares, equipment, and supplies must either be prepositioned prior to departure from Earth of human crews or carried with the crews. The mass and volume of spares must be minimized by enabling repair at the lowest hardware levels, imposing commonality and standardization across all mission elements at all hardware levels, and providing the capability to fabricate structural and mechanical spares as required. Long round-trip communications times will require increasing levels of autonomy by the crews for most operations including spacecraft maintenance. Effective implementation of these approaches will only be possible when their need is recognized at the earliest stages of the program, when they are incorporated in operational concepts and programmatic requirements, and when diligence is applied in enforcing these requirements throughout system design in an integrated way across all contractors and suppliers. These approaches will be essential for the success of missions to Mars. Although limited duration lunar missions may be successfully accomplished with more traditional approaches to supportability, those missions will offer an opportunity to refine these concepts, associated technologies, and programmatic implementation methodologies so that they can be most effectively applied to later missions.
Usability: Human Research Program - Space Human Factors and Habitability
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Holden, Kritina L.
2009-01-01
The Usability project addresses the need for research in the area of metrics and methodologies used in hardware and software usability testing in order to define quantifiable and verifiable usability requirements. A usability test is a human-in-the-loop evaluation where a participant works through a realistic set of representative tasks using the hardware/software under investigation. The purpose of this research is to define metrics and methodologies for measuring and verifying usability in the aerospace domain in accordance with FY09 focus on errors, consistency, and mobility/maneuverability. Usability metrics must be predictive of success with the interfaces, must be easy to obtain and/or calculate, and must meet the intent of current Human Systems Integration Requirements (HSIR). Methodologies must work within the constraints of the aerospace domain, be cost and time efficient, and be able to be applied without extensive specialized training.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
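As an illustration of the Markov-model portion of such an evaluation, the sketch below computes the reliability of a hypothetical duplex system with repair by integrating the state probabilities forward in time; the states, failure rate, and repair rate are invented and unrelated to the AIPS models.

```python
import numpy as np

# States: 0 = both units up, 1 = one unit up, 2 = system failed (absorbing)
lam = 1e-4     # per-unit failure rate [1/h] (illustrative)
mu  = 1e-1     # repair rate [1/h]           (illustrative)

# Generator matrix Q of the continuous-time Markov chain (rows sum to zero)
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,          0.0,  0.0],
])

def reliability(t_hours, steps=10_000):
    """Probability the system has not failed by time t, via forward Euler on dp/dt = p Q."""
    p = np.array([1.0, 0.0, 0.0])
    dt = t_hours / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return 1.0 - p[2]

print("R(1000 h) =", reliability(1000.0))
```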
Energy efficiency of task allocation for embedded JPEG systems.
Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu
2014-01-01
Embedded systems work everywhere, repeatedly performing a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, the development methodology of embedded systems has been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more an embedded system works, the more energy it consumes. This study presents hyperrectangle technology (HT) for embedded systems to obtain energy savings. The HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it has a great benefit in quickly exploring the energy consumption of various embedded systems. The effects are presented by assessing a JPEG benchmark. Experimental results demonstrate that the HT achieves energy savings of 29.84%, 2.07%, and 68.80% on average relative to GA, GHO, and Lin, respectively.
Mobile Videoconferencing Apps for Telemedicine
Liu, Wei-Li; Locatis, Craig; Ackerman, Michael
2016-01-01
Introduction: The quality and performance of several videoconferencing applications (apps) tested on iOS (Apple, Cupertino, CA) and Android™ (Google, Mountain View, CA) mobile platforms using Wi-Fi (802.11), third-generation (3G), and fourth-generation (4G) cellular networks are described. Materials and Methods: The tests were done to determine how well apps perform compared with videoconferencing software installed on computers or with more traditional videoconferencing using dedicated hardware. The rationale for app assessment and the testing methodology are described. Results: Findings are discussed in relation to operating system platform (iOS or Android) for which the apps were designed and the type of network (Wi-Fi, 3G, or 4G) used. The platform, network, and apps interact, and it is impossible to discuss videoconferencing experienced on mobile devices in relation to one of these factors without referencing the others. Conclusions: Apps for mobile devices can vary significantly from other videoconferencing software or hardware. App performance increased over the testing period due to improvements in network infrastructure and how apps manage bandwidth. PMID:26204322
Mobile Videoconferencing Apps for Telemedicine.
Zhang, Kai; Liu, Wei-Li; Locatis, Craig; Ackerman, Michael
2016-01-01
The quality and performance of several videoconferencing applications (apps) tested on iOS (Apple, Cupertino, CA) and Android (Google, Mountain View, CA) mobile platforms using Wi-Fi (802.11), third-generation (3G), and fourth-generation (4G) cellular networks are described. The tests were done to determine how well apps perform compared with videoconferencing software installed on computers or with more traditional videoconferencing using dedicated hardware. The rationale for app assessment and the testing methodology are described. Findings are discussed in relation to operating system platform (iOS or Android) for which the apps were designed and the type of network (Wi-Fi, 3G, or 4G) used. The platform, network, and apps interact, and it is impossible to discuss videoconferencing experienced on mobile devices in relation to one of these factors without referencing the others. Apps for mobile devices can vary significantly from other videoconferencing software or hardware. App performance increased over the testing period due to improvements in network infrastructure and how apps manage bandwidth.
Energy Efficiency of Task Allocation for Embedded JPEG Systems
2014-01-01
Embedded systems work everywhere, repeatedly performing a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, the development methodology of embedded systems has been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more an embedded system works, the more energy it consumes. This study presents hyperrectangle technology (HT) for embedded systems to obtain energy savings. The HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it has a great benefit in quickly exploring the energy consumption of various embedded systems. The effects are presented by assessing a JPEG benchmark. Experimental results demonstrate that the HT achieves energy savings of 29.84%, 2.07%, and 68.80% on average relative to GA, GHO, and Lin, respectively. PMID:24982983
NASA Technical Reports Server (NTRS)
Yau, M.; Guarro, S.; Apostolakis, G.
1993-01-01
Dynamic Flowgraph Methodology (DFM) is a new approach developed to integrate the modeling and analysis of the hardware and software components of an embedded system. The objective is to complement the traditional approaches which generally follow the philosophy of separating out the hardware and software portions of the assurance analysis. In this paper, the DFM approach is demonstrated using the Titan 2 Space Launch Vehicle Digital Flight Control System. The hardware and software portions of this embedded system are modeled in an integrated framework. In addition, the time dependent behavior and the switching logic can be captured by this DFM model. In the modeling process, it is found that constructing decision tables for software subroutines is very time consuming. A possible solution is suggested. This approach makes use of a well-known numerical method, the Newton-Raphson method, to solve the equations implemented in the subroutines in reverse. Convergence can be achieved in a few steps.
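The reverse-solving idea can be illustrated with a one-variable sketch: given a target output of a stand-in subroutine, Newton-Raphson with a numerical derivative searches for the input that produces it. The example function, tolerances, and starting point are arbitrary and not taken from the DFM study.

```python
def subroutine(x):
    """Stand-in for a software subroutine whose output we want to trace back to an input."""
    return x ** 3 + 2.0 * x - 5.0

def solve_reverse(target, x0=1.0, tol=1e-10, max_iter=50, h=1e-6):
    """Find x such that subroutine(x) == target, via Newton-Raphson with a numeric derivative."""
    x = x0
    for _ in range(max_iter):
        residual = subroutine(x) - target
        if abs(residual) < tol:
            return x
        dfdx = (subroutine(x + h) - subroutine(x - h)) / (2.0 * h)
        x -= residual / dfdx           # Newton-Raphson update
    raise RuntimeError("Newton-Raphson did not converge")

# Which input drives the subroutine output to 10?  (converges in a few iterations)
print(solve_reverse(10.0))
```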
NASA-STD-(I)-6016, Standard Materials and Processes Requirements for Spacecraft
NASA Technical Reports Server (NTRS)
Pedley, Michael; Griffin, Dennis
2006-01-01
This document is directed toward Materials and Processes (M&P) used in the design, fabrication, and testing of flight components for all NASA manned, unmanned, robotic, launch vehicle, lander, in-space and surface systems, and spacecraft program/project hardware elements. All flight hardware is covered by the M&P requirements of this document, including vendor designed, off-the-shelf, and vendor furnished items. Materials and processes used in interfacing ground support equipment (GSE); test equipment; hardware processing equipment; hardware packaging; and hardware shipment shall be controlled to prevent damage to or contamination of flight hardware.
Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.
Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey
2012-06-01
Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.
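The following sketch gives a flavor of compiling a desired connectivity matrix into per-time-slot switch states, in the spirit of synaptic time multiplexing; the matrix, the one-connection-per-slot constraint, and the encoding are invented and far simpler than the hardware described.

```python
import numpy as np

# Desired neural connectivity: connections[i, j] = 1 means neuron j drives neuron i
connections = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

def compile_to_time_slots(conn, fanin_per_slot=1):
    """Split the connection matrix into time slots so that only a limited number of
    physical switches per neuron is closed in any slot (illustrative STM-style encoding)."""
    slots = []
    remaining = conn.copy()
    while remaining.any():
        slot = np.zeros_like(conn)
        for i in range(conn.shape[0]):
            srcs = np.flatnonzero(remaining[i])[:fanin_per_slot]
            slot[i, srcs] = 1
            remaining[i, srcs] = 0
        slots.append(slot)
    return slots   # these switch-state frames would be stored in digital memory

frames = compile_to_time_slots(connections)
print(f"{len(frames)} time slots needed")
# Cycling through all slots realizes the full desired network
assert sum(frames).tolist() == connections.tolist()
```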
Design Process of Flight Vehicle Structures for a Common Bulkhead and an MPCV Spacecraft Adapter
NASA Technical Reports Server (NTRS)
Aggarwal, Pravin; Hull, Patrick V.
2015-01-01
Designing and manufacturing space flight vehicle structures is a skill set that has grown considerably at NASA during the last several years. Beginning with the Ares program and followed by the Space Launch System (SLS), in-house designs were produced for both the Upper Stage and the SLS Multipurpose Crew Vehicle (MPCV) spacecraft adapter. Specifically, critical design review (CDR) level analysis and flight production drawings were produced for the above-mentioned hardware. In particular, the experience of this in-house design work led to increased manufacturing infrastructure for both Marshall Space Flight Center (MSFC) and the Michoud Assembly Facility (MAF), improved skill sets in both analysis and design, and hands-on experience in building and testing (MSA) full-scale hardware. The hardware design and development processes from initiation to CDR and finally flight resulted in many challenges and experiences that produced valuable lessons. This paper builds on these experiences of NASA in recent years on designing and fabricating flight hardware and examines the design/development processes used, as well as the challenges and lessons learned, i.e., from the initial design, loads estimation, and mass constraints to structural optimization/affordability to release of production drawings to hardware manufacturing. While there are many documented design processes which a design engineer can follow, these unique experiences can offer insight into designing hardware in current program environments and present solutions to many of the challenges experienced by the engineering team.
Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics
NASA Technical Reports Server (NTRS)
Kratz, Jonathan L.; Culley, Dennis E.; Chapman, Jeffryes W.
2017-01-01
The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications on temperature constraints for engine casing mounted electronics is discussed.
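For orientation, the sketch below approximates transient casing temperature with a one-dimensional explicit finite-difference conduction model driven by an assumed gas-temperature profile; the paper's model is two-dimensional and coupled to C-MAPSS40k, and all properties and boundary conditions here are illustrative.

```python
import numpy as np

# Simplified 1-D transient conduction along the casing length (the paper uses a 2-D model);
# material properties, dimensions, and temperatures below are illustrative only.
n, length_m = 50, 1.0
dx = length_m / (n - 1)
alpha = 4e-6                       # thermal diffusivity [m^2/s]
h_eff = 0.01                       # lumped convective coupling to local gas temperature [1/s]
dt = 0.2 * dx * dx / alpha         # explicit stability limit with margin

x = np.linspace(0.0, length_m, n)
gas_temp = 400.0 + 500.0 * x / length_m       # hotter toward the aft end [K]
casing = np.full(n, 300.0)                    # initial casing temperature [K]

t, t_end = 0.0, 600.0
while t < t_end:
    lap = np.zeros(n)
    lap[1:-1] = (casing[2:] - 2 * casing[1:-1] + casing[:-2]) / dx**2
    casing = casing + dt * (alpha * lap + h_eff * (gas_temp - casing))
    casing[0], casing[-1] = casing[1], casing[-2]      # insulated ends
    t += dt

print("max casing temperature after 10 min: %.1f K" % casing.max())
```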
Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics
NASA Technical Reports Server (NTRS)
Kratz, Jonathan; Culley, Dennis; Chapman, Jeffryes
2016-01-01
The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications on temperature constraints for engine casing mounted electronics is discussed.
Exascale Hardware Architectures Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmert, S; Ang, J; Chiang, P
2011-03-15
The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale due to the unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is a clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is relatively immediate, as there is only a small window of opportunity to influence hardware design for 2018 machines. Given the short timeline, a firm co-design methodology with vendors is of prime importance.
Loads and Structural Dynamics Requirements for Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Schultz, Kenneth P.
2011-01-01
The purpose of this document is to establish requirements relating to the loads and structural dynamics technical discipline for NASA and commercial spaceflight launch vehicle and spacecraft hardware. Requirements are defined for the development of structural design loads, and recommendations regarding methodologies and practices for the conduct of load analyses are provided. As such, this document represents an implementation of NASA STD-5002. Requirements are also defined for structural mathematical model development and verification to ensure sufficient accuracy of predicted responses. Finally, requirements for model/data delivery and exchange are specified to facilitate interactions between Launch Vehicle Providers (LVPs), Spacecraft Providers (SCPs), and the NASA Technical Authority (TA) providing insight/oversight and serving in the Independent Verification and Validation role. In addition to the analysis-related requirements described above, a set of requirements is established concerning coupling phenomena or other interaction between structural dynamics and aerodynamic environments or control or propulsion system elements. Such requirements may reasonably be considered structure or control system design criteria, since good engineering practice dictates consideration of and/or elimination of the identified conditions in the development of those subsystems. The requirements are included here, however, to ensure that such considerations are captured in the design space for launch vehicles (LV), spacecraft (SC) and the Launch Abort Vehicle (LAV). The requirements in this document are focused on analyses to be performed to develop data needed to support structural verification. As described in JSC 65828, Structural Design Requirements and Factors of Safety for Spaceflight Hardware, implementation of the structural verification requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The requirement for and expected contents of the SVP are defined in JSC 65828. The SVP may also document unique verifications that meet or exceed these requirements with Technical Authority approval.
Aerodynamics and Control of Quadrotors
NASA Astrophysics Data System (ADS)
Bangura, Moses
Quadrotors are aerial vehicles with a four motor-rotor assembly for generating lift and controllability. Their light weight, ease of design and simple dynamics have increased their use in aerial robotics research. There are many quadrotors that are commercially available or under development. Commercial off-the-shelf quadrotors usually lack the ability to be reprogrammed and are unsuitable for use as research platforms. The open-source code developed in this thesis differs from other open-source systems by focusing on the key performance road blocks in implementing high performance experimental quadrotor platforms for research: motor-rotor control for thrust regulation, velocity and attitude estimation, and control for position regulation and trajectory tracking. In all three of these fundamental subsystems, code sub-modules for implementation on commonly available hardware are provided. In addition, the thesis provides guidance on scoping and commissioning open-source hardware components to build a custom quadrotor. A key contribution of the thesis is then a design methodology for the development of experimental quadrotor platforms from open-source or commercial off-the-shelf software and hardware components that have active community support. Quadrotors built following the methodology allow the user access to the operation of the subsystems; in particular, the user can tune the gains of the observers and controllers in order to push the overall system to its performance limits. This enables the quadrotor framework to be used for a variety of applications such as heavy lifting and high performance aggressive manoeuvres by both the hobby and academic communities. To address the question of thrust control, momentum and blade element theories are used to develop aerodynamic models for rotor blades specific to quadrotors. With the aerodynamic models, a novel thrust estimation and control scheme that improves on existing RPM (revolutions per minute) control of rotors is proposed. The approach taken uses the measured electrical power into the rotors, compensated for electrical losses, to estimate changing aerodynamic conditions around a rotor as well as the aerodynamic thrust force. The resulting control algorithms are implemented in real-time on the embedded electronic speed controller (ESC) hardware. Using the estimates of the aerodynamic conditions around the rotor at this level improves the dynamic response to gusts, as the low-level thrust control is the fastest dynamic level on the vehicle. The aerodynamic estimation scheme enables the vehicle to react almost instantaneously to aerodynamic changes in the environment without affecting the overall dynamic performance of the vehicle. (Abstract shortened by ProQuest.)
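The thesis's exact estimation scheme is not reproduced in the abstract; the sketch below only illustrates the underlying momentum-theory relationship between aerodynamic power and hover thrust, with electrical power corrected for resistive losses. The winding resistance, drivetrain efficiency, and example operating point are assumed values.

```python
import math

def estimate_thrust(voltage_V, current_A, prop_radius_m,
                    winding_resistance_ohm=0.10,   # assumed motor winding resistance
                    drivetrain_efficiency=0.85,    # assumed combined motor/ESC efficiency
                    rho=1.225):                    # air density at sea level, kg/m^3
    """Rough hover-thrust estimate from measured electrical power.

    Uses the momentum-theory relation P_induced = T**1.5 / sqrt(2*rho*A),
    after subtracting resistive (I^2 R) losses from the electrical power.
    All parameter values are illustrative, not taken from the cited thesis.
    """
    p_elec = voltage_V * current_A
    p_aero = drivetrain_efficiency * (p_elec - current_A ** 2 * winding_resistance_ohm)
    if p_aero <= 0.0:
        return 0.0
    disc_area = math.pi * prop_radius_m ** 2
    return (2.0 * rho * disc_area * p_aero ** 2) ** (1.0 / 3.0)   # thrust, N

# Example: one rotor drawing 5 A at 11.1 V with a 0.127 m (5 in) radius propeller
print(round(estimate_thrust(11.1, 5.0, 0.127), 2), "N")
```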
Hardware architecture design of image restoration based on time-frequency domain computation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve commonality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability and high efficiency.
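The abstract identifies the two-dimensional FFT/IFFT and complex-number arithmetic as the costly modules. As a point of reference, the sketch below shows a representative time-frequency-domain restoration (Wiener deconvolution); it is not claimed to be the paper's algorithm, and the noise-to-signal constant and toy point-spread function are assumptions.

```python
import numpy as np

def wiener_restore(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution, a representative TFDC restoration.

    The 2-D FFT/IFFT and the element-wise complex arithmetic below are exactly
    the operations such a hardware architecture would need to accelerate.
    """
    H = np.fft.fft2(psf, s=blurred.shape)          # transfer function of the blur
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)          # Wiener filter, k = noise-to-signal ratio
    return np.real(np.fft.ifft2(W * G))

# toy example: blur a ramp image with a centred 5x5 box PSF, then restore it
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
psf = np.zeros_like(img)
psf[:5, :5] = 1.0 / 25.0
psf = np.roll(psf, (-2, -2), axis=(0, 1))          # centre the box PSF at the origin
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_restore(blurred, psf)
print("mean restoration error:", float(np.abs(restored - img).mean()))
```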
Application-specific coarse-grained reconfigurable array: architecture and design methodology
NASA Astrophysics Data System (ADS)
Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu
2015-06-01
Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, a gap remains between the CGRA and the application-specific integrated circuit (ASIC). Some application domains, such as software-defined radios (SDRs), require flexibility as performance demands increase, so more effective CGRA architectures are expected to be developed. Customisation of a CGRA according to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented. A mapping algorithm based on ant colony optimisation is provided. Experimental results on the SDR target domain show that, compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for given applications.
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-08-04
Wireless Sensor and Actuator Networks (WSANs) constitute one of the most challenging technologies, with tremendous socio-economic impact, for the next decade. Functionally and energy-optimized hardware systems and development tools may be the most critical facet of this technology for the achievement of such prospects. Especially in the area of agriculture, where the hostile operating environment adds to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, first, on an abstraction approach in the functional requirements context and, second, on the standardization of the subsystems connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable and robust hardware system. The SensoTube implementation reference model, together with its encapsulation design and installation, is analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to the SensoTube architecture.
Development and characteristics of the hardware for Skylab experiment S015
NASA Technical Reports Server (NTRS)
Thirolf, R. G.
1975-01-01
Details are given regarding the hardware for the Skylab S015 experiment, which was designed to detect the effects of zero gravity on cell growth rates. Experience gained in hardware-related considerations is presented for use by researchers concerned with future research of this type and further study of the S015 results. Brief descriptions are given of the experiment hardware, the hardware configuration for the critical design review, the major configuration changes, the final configuration, and the postflight review and analysis. An appendix describes pertinent documentation, film, and hardware that are available to qualified researchers; sources for additional or special information are given.
A Case Study of Measuring Process Risk for Early Insights into Software Safety
NASA Technical Reports Server (NTRS)
Layman, Lucas; Basili, Victor; Zelkowitz, Marvin V.; Fisher, Karen L.
2011-01-01
In this case study, we examine software safety risk in three flight hardware systems in NASA's Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that for 49-70% of the 154 hazardous conditions, software was a potential cause or was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2,013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk.
Tracking and Control of Gas Turbine Engine Component Damage/Life
NASA Technical Reports Server (NTRS)
Jaw, Link C.; Wu, Dong N.; Bryg, David J.
2003-01-01
This paper describes damage mechanisms and methods of controlling damage to extend the on-wing life of critical gas turbine engine components. In particular, two types of damage mechanisms are discussed: creep/rupture and thermo-mechanical fatigue. To control these damage mechanisms and extend the life of engine hot-section components, we have investigated two methodologies to be implemented as additional control logic for the on-board electronic control unit. This new logic, the life-extending control (LEC), interacts with the engine control and monitoring unit and modifies the fuel flow to reduce component damage during a flight mission. The LEC methodologies were demonstrated in a real-time, hardware-in-the-loop simulation. The results show that LEC is not only a new paradigm for engine control design, but also a promising technology for extending the service life of engine components, hence reducing the life cycle cost of the engine.
Source Data Applicability Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven D.; Ring, Robert W.
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system where it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for assigning uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide a case study example by translating Ground Benign (GB) and Ground Mobile (GM) to the Airborne Uninhabited Fighter (AUF) environment for three electronic components often found in space launch vehicle control systems. The classification method will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce uncertainty.
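As a rough illustration of translating a failure rate between operating environments and attaching an epistemic uncertainty band, the sketch below scales a generic rate by a ratio of environmental factors and reports lognormal bounds. The factor values and error factor are placeholders for illustration, not figures from MIL-HDBK-217F or from the paper.

```python
# Hypothetical environmental factors in the spirit of MIL-HDBK-217F pi_E tables.
# The numbers below are placeholders, not handbook values.
PI_E = {"GB": 1.0, "GM": 4.0, "AUF": 9.0}

def translate_failure_rate(lam_source, source_env, target_env):
    """Scale a failure rate (failures per 10^6 h) from one environment to another."""
    return lam_source * PI_E[target_env] / PI_E[source_env]

def lognormal_bounds(median, error_factor=3.0):
    """Approximate 5th/95th percentile bounds for a lognormal uncertainty assignment."""
    return median / error_factor, median * error_factor

lam_gb = 0.05                                        # assumed Ground Benign rate, per 10^6 h
lam_auf = translate_failure_rate(lam_gb, "GB", "AUF")
print("AUF point estimate:", lam_auf, "5th/95th bounds:", lognormal_bounds(lam_auf))
```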
Functional Mobility Testing: A Novel Method to Establish Human System Interface Design Requirements
NASA Technical Reports Server (NTRS)
England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar
2008-01-01
Across all fields of human-system interface design, it is vital to possess a sound methodology dictating the constraints on the system based on the capabilities of the human user. These limitations may be based on strength, mobility, dexterity, cognitive ability, etc., and combinations thereof. Data collected in an isolated environment to determine, for example, maximal strength or maximal range of motion would indeed be adequate for establishing not-to-exceed design limitations; however, these constraints on the system may be excessive relative to what is minimally needed. Resources may potentially be saved by having a technique to determine the minimum measurements a system must accommodate. This paper specifically deals with the creation of a novel methodology for establishing mobility requirements for a new generation of space suit design concepts. Historically, the Space Shuttle and International Space Station vehicle and space hardware design requirements documents, such as the Man-Systems Integration Standards and the International Space Station Flight Crew Integration Standard, explicitly stated that designers should strive to provide the maximum joint range of motion capabilities exhibited by a minimally clothed human subject. In the course of developing the Human-Systems Integration Requirements (HSIR) for the new space exploration initiative (Constellation), an effort was made to redefine the mobility requirements in the interest of safety and cost. Systems designed for manned space exploration can receive compounded gains from simplified designs that are both initially less expensive to produce and lighter, and thereby cheaper to launch.
Hardware synthesis from DDL. [Digital Design Language for computer aided design and test of LSI
NASA Technical Reports Server (NTRS)
Shah, A. M.; Shiva, S. G.
1981-01-01
The details of digital systems can be conveniently input into the design automation system by means of Hardware Description Languages (HDL). The Computer Aided Design and Test (CADAT) system at NASA MSFC is used for LSI design. The Digital Design Language (DDL) has been selected as the HDL for the CADAT system. DDL translator output can be used for the hardware implementation of the digital design. This paper addresses problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system.
Safe to Fly: Certifying COTS Hardware for Spaceflight
NASA Technical Reports Server (NTRS)
Fichuk, Jessica L.
2011-01-01
Providing hardware for the astronauts to use on board the Space Shuttle or International Space Station (ISS) involves a certification process that entails evaluating hardware safety, weighing risks, providing mitigation, and verifying requirements. Upon completion of this certification process, the hardware is deemed safe to fly. This process, from start to finish, can be completed in as little as one week or can take several years, depending on the complexity of the hardware and whether the item is a unique custom design. One way NASA achieves cost and schedule savings is by buying Commercial Off-The-Shelf (COTS) hardware and certifying it for human spaceflight as safe to fly. By utilizing commercial hardware, NASA saves the time of developing, designing, and building the hardware from scratch, as well as time in the certification process. With COTS hardware, the current detailed certification process can be simplified, which results in schedule savings. Cost savings is another important benefit of flying COTS hardware. Procuring COTS hardware for space use can be more economical than custom building the hardware. This paper will investigate the cost savings associated with certifying COTS hardware to NASA's standards rather than performing a custom build.
Network Design of the Spaceport Command and Control System
NASA Technical Reports Server (NTRS)
Teijeiro, Antonio
2017-01-01
I helped the Launch Control System (LCS) hardware team sustain the network design of the Spaceport Command and Control System. I wrote the procedure that will be used to satisfy an official hardware test for the hardware carrying data from the Launch Vehicle. I installed hardware and updated design documents in support of the ongoing development of the Spaceport Command and Control System and applied firewall experience I gained during my spring 2017 semester to inspect and create firewall security policies as requested. Finally, I completed several online courses concerning networking fundamentals and Unix operating systems.
Neuromorphic Silicon Neuron Circuits
Indiveri, Giacomo; Linares-Barranco, Bernabé; Hamilton, Tara Julia; van Schaik, André; Etienne-Cummings, Ralph; Delbruck, Tobi; Liu, Shih-Chii; Dudek, Piotr; Häfliger, Philipp; Renaud, Sylvie; Schemmel, Johannes; Cauwenberghs, Gert; Arthur, John; Hynna, Kai; Folowosele, Fopefolu; Saighi, Sylvain; Serrano-Gotarredona, Teresa; Wijekoon, Jayawan; Wang, Yingxue; Boahen, Kwabena
2011-01-01
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems and bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips. PMID:21747754
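For readers unfamiliar with the model family the survey covers, the following is a minimal software sketch of a two-variable adaptive leaky integrate-and-fire neuron, the kind of dynamics many of the surveyed silicon neurons emulate; all time constants, conductances, and input values are illustrative assumptions.

```python
import numpy as np

# Minimal adaptive leaky integrate-and-fire neuron with two state variables:
# membrane potential v and adaptation current w. All parameters are illustrative.
dt, T = 1e-4, 0.5                 # time step and duration, s
tau_m, tau_w = 20e-3, 100e-3      # membrane and adaptation time constants, s
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -65e-3   # volts
a, b = 2e-9, 5e-11                # subthreshold coupling (S) and spike-triggered increment (A)
R, I_in = 1e8, 3e-10              # membrane resistance (ohm) and constant input current (A)

v, w, spikes = v_rest, 0.0, []
for k in range(int(T / dt)):
    v += dt / tau_m * (-(v - v_rest) + R * (I_in - w))   # leaky integration of input minus adaptation
    w += dt / tau_w * (a * (v - v_rest) - w)             # slow adaptation variable
    if v >= v_thresh:                                    # emit a spike, reset, and adapt
        spikes.append(k * dt)
        v = v_reset
        w += b
print(f"{len(spikes)} spikes in {T} s; the inter-spike interval grows as w accumulates")
```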
Agile development approach for the observatory control software of the DAG 4m telescope
NASA Astrophysics Data System (ADS)
Güçsav, B. Bülent; Çoker, Deniz; Yeşilyaprak, Cahit; Keskin, Onur; Zago, Lorenzo; Yerli, Sinan K.
2016-08-01
Observatory Control Software for the upcoming 4m infrared telescope of DAG (Eastern Anatolian Observatory, from its Turkish name) is at the beginning of its lifecycle. After the process of elicitation and validation of the initial requirements, we have focused on preparing a rapid conceptual design, not only to see the big picture of the system but also to clarify the further development methodology. The existing preliminary designs for both software (including TCS and the active optics control system) and hardware shall be presented here in brief to expose the challenges the DAG software team has been facing. The potential benefits of an agile approach for the development will be discussed, depending on the published experience of the community and on the resources available to us.
NASA Astrophysics Data System (ADS)
Pang, Zhibo; Zheng, Lirong; Tian, Junzhe; Kao-Walter, Sharon; Dubrova, Elena; Chen, Qiang
2015-01-01
In-home health care services based on the Internet-of-Things are promising to resolve the challenges caused by the ageing of the population. But the existing research is rather scattered and shows a lack of interoperability. In this article, a business-technology co-design methodology is proposed for cross-boundary integration of in-home health care devices and services. In this framework, three key elements of a solution (business model, device and service integration architecture, and information system integration architecture) are organically integrated and aligned. In particular, a cooperative Health-IoT ecosystem is formulated, and the information systems of all stakeholders are integrated in a cooperative health cloud as well as extended to patients' homes through the in-home health care station (IHHS). Design principles of the IHHS include the reuse of the 3C platform, certification of the Health Extension, interoperability and extendibility, convenient and trusted software distribution, standardised and secure electronic health care record handling, effective service composition, and efficient data fusion. These principles are applied to the design of an IHHS solution called iMedBox. A detailed device and service integration architecture and hardware and software architecture are presented and verified by an implemented prototype. The quantitative performance analysis and field trials have confirmed the feasibility of the proposed design methodology and solution.
Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices, with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. In order to ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
Fault Modeling of Extreme Scale Applications Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.
Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: given a multi-bit fault in main memory, will it result in an application error, in which case a recovery algorithm should be invoked, or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications, NWChem, LULESH and SVM, as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
Fault Modeling of Extreme Scale Applications Using Machine Learning
Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; ...
2016-05-01
Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: given a multi-bit fault in main memory, will it result in an application error, in which case a recovery algorithm should be invoked, or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications, NWChem, LULESH and SVM, as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
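The paper's fault-signature attributes and chosen learners are not listed in the abstract, so the sketch below only demonstrates the general workflow: train a supervised classifier on fault signatures and predict whether a multi-bit memory fault leads to an application error. The attributes, synthetic data, and random-forest choice are assumptions, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for fault signatures: each row holds attributes of the system and
# application state at fault-injection time, each label says whether the run ended in an
# application error. Attribute names and the ground-truth rule are invented for illustration.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 2, n),        # fault hit a "live" data structure?
    rng.uniform(0.0, 1.0, n),     # fraction of application run time elapsed
    rng.integers(1, 64, n),       # number of corrupted bits
    rng.uniform(0.0, 1.0, n),     # recent access frequency of the afflicted page
])
y = ((X[:, 0] == 1) & (X[:, 3] > 0.4)).astype(int)   # toy rule: live, hot data tends to fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```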
Co-design of software and hardware to implement remote sensing algorithms
NASA Astrophysics Data System (ADS)
Theiler, James P.; Frigo, Janette R.; Gokhale, Maya; Szymanski, John J.
2002-01-01
Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an "inner loop" with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit applicability of this approach, but the development of new design tools is making this approach more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.
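As an example of the kind of "inner loop" the paper describes, the sketch below classifies each pixel of a toy hyperspectral cube against candidate endmember spectra using the spectral angle. This is a stand-in rather than the algorithm from the paper, but its repetitive per-pixel arithmetic is exactly the sort of kernel that benefits from hardware/software co-design.

```python
import numpy as np

def spectral_angle_map(cube, endmembers):
    """Classify each pixel by the endmember with the smallest spectral angle.

    The per-pixel dot products and norms below are the simple, highly repetitive
    operations that a specialized hardware design would pipeline; the vectorised
    NumPy form here just stands in for that inner loop.
    """
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    e = endmembers.astype(np.float64)                     # shape (n_endmembers, bands)
    cos = (pixels @ e.T) / (
        np.linalg.norm(pixels, axis=1, keepdims=True) * np.linalg.norm(e, axis=1) + 1e-12)
    return np.argmin(np.arccos(np.clip(cos, -1.0, 1.0)), axis=1).reshape(h, w)

# toy example: 64x64 scene, 8 bands, 3 candidate endmember spectra
rng = np.random.default_rng(1)
cube = rng.random((64, 64, 8))
endmembers = rng.random((3, 8))
labels = spectral_angle_map(cube, endmembers)
print("pixels assigned per endmember:", np.bincount(labels.ravel()))
```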
Modified ACES Portable Life Support Integration, Design, and Testing for Exploration Missions
NASA Technical Reports Server (NTRS)
Kelly, Cody
2014-01-01
NASA's next generation of exploration missions provide a unique challenge to designers of EVA life support equipment, especially in a fiscally constrained environment. In order to take the next steps of manned space exploration, NASA is currently evaluating the use of the Modified ACES (MACES) suit in conjunction with the Advanced Portable Life Support System (PLSS) currently under development. This paper will detail the analysis and integration of the PLSS thermal and ventilation subsystems into the MACES pressure garment, design of prototype hardware, and hardware-in-the-loop testing during the spring 2014 timeframe. Prototype hardware was designed with a minimal impact philosophy in order to mitigate design constraints becoming levied on either the advanced PLSS or MACES subsystems. Among challenges faced by engineers were incorporation of life support thermal water systems into the pressure garment cavity, operational concept definition between vehicle/portable life support system hardware, and structural attachment mechanisms, while still enabling maximum EVA efficiency from a crew member's perspective. Analysis was completed in late summer 2013 to 'bound' hardware development, with iterative analysis cycles throughout the hardware development process. The design effort will culminate in the first-ever manned integration of NASA's advanced PLSS system with a pressure garment originally intended primarily for use in a contingency survival scenario.
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-01-01
Wireless Sensor and Actuator Networks (WSANs) constitute one of the most challenging technologies, with tremendous socio-economic impact, for the next decade. Functionally and energy-optimized hardware systems and development tools may be the most critical facet of this technology for the achievement of such prospects. Especially in the area of agriculture, where the hostile operating environment adds to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, first, on an abstraction approach in the functional requirements context and, second, on the standardization of the subsystems connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable and robust hardware system. The SensoTube implementation reference model, together with its encapsulation design and installation, is analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to the SensoTube architecture. PMID:27527180
A Real-Time Telemetry Simulator of the IUS Spacecraft
NASA Technical Reports Server (NTRS)
Drews, Michael E.; Forman, Douglas A.; Baker, Damon M.; Khazoyan, Louis B.; Viazzo, Danilo
1998-01-01
A real-time telemetry simulator of the IUS spacecraft has recently entered operation to train Flight Control Teams for the launch of the AXAF telescope from the Shuttle. The simulator has proven to be a successful higher fidelity implementation of its predecessor, while affirming the rapid development methodology used in its design. Although composed of COTS hardware and software, the system simulates the full breadth of the mission: Launch, Pre-Deployment-Checkout, Burn Sequence, and AXAF/IUS separation. Realism is increased through patching the system into the operations facility to simulate IUS telemetry, Shuttle telemetry, and the Tracking Station link (commands and status message).
Intracavity adaptive optics. 1: Astigmatism correction performance.
Spinhirne, J M; Anafi, D; Freeman, R H; Garcia, H R
1981-03-15
A detailed experimental study has been conducted on adaptive optical control methodologies inside a laser resonator. A comparison is presented of several optimization techniques using a multidither zonal coherent optical adaptive technique system within a laser resonator for the correction of astigmatism. A dramatic performance difference is observed when optimizing on beam quality compared with optimizing on power-in-the-bucket. Experimental data are also presented on proper selection criteria for dither frequencies when controlling phase front errors. The effects of hardware limitations and design considerations on the performance of the system are presented, and general conclusions and physical interpretations on the results are made when possible.
Photonic Doppler Velocimetry Multiplexing Techniques: Evaluation of Photonic Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edward Daykin
This poster reports progress related to photonic technologies. Specifically, the authors developed a diagnostic system architecture for Multiplexed Photonic Doppler Velocimetry (MPDV) that incorporates frequency- and time-division multiplexing into the existing PDV methodology to provide increased channel count. The current MPDV design increases the number of data records per digitizer channel by 8x and also operates as a laser-safe (Class 3a) system. Further, they applied heterodyne interferometry to allow for direction-of-travel determination and to enable high-velocity measurements (>10 km/s) via optical downshifting. They also leveraged commercially available, inexpensive and robust components originally developed for telecom applications. The proposed MPDV architectures employ only commercially available, fiber-coupled hardware.
NASA Ames Research Center R and D Services Directorate Biomedical Systems Development
NASA Technical Reports Server (NTRS)
Pollitt, J.; Flynn, K.
1999-01-01
The Ames Research Center R&D Services Directorate teams with NASA, other government agencies and/or industry investigators for the development, design, fabrication, manufacturing and qualification testing of space-flight and ground-based experiment hardware for biomedical and general aerospace applications. In recent years, biomedical research hardware and software has been developed to support space-flight and ground-based experiment needs, including the E 132 Biotelemetry system for the Research Animal Holding Facility (RAHF), E 100 Neurolab neuro-vestibular investigation systems, the Autogenic Feedback Systems, and the Standard Interface Glove Box (SIGB) experiment workstation module. Centrifuges, motion simulators, habitat design, environmental control systems, and other unique experiment modules and fixtures have also been developed. A discussion of engineered systems and capabilities will be provided to promote understanding of possibilities for future system designs in biomedical applications. In addition, an overview of existing engineered products will be shown. Examples of hardware and literature that demonstrate the organization's capabilities will be displayed. The Ames Research Center R&D Services Directorate is available to support the development of new hardware and software systems or adaptation of existing systems to meet the needs of academic, commercial/industrial, and government research requirements. The Ames R&D Services Directorate can provide specialized support for: system concept definition and feasibility; mathematical modeling and simulation of system performance; prototype hardware development; hardware and software design; data acquisition systems; graphical user interface development; motion control design; hardware fabrication and high-fidelity machining; composite materials development and application design; electronic/electrical system design and fabrication; and system performance verification testing and qualification.
Distributed digital signal processors for multi-body structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K.
1990-01-01
Several digital filter designs were investigated which may be used to process sensor data from large space structures and to design digital hardware to implement the distributed signal processing architecture. Several experimental test articles are available at NASA Langley Research Center to evaluate these designs. A summary of some of the digital filter designs is presented, an evaluation of their characteristics relative to control design is discussed, and candidate hardware microcontroller/microcomputer components are given. Future activities include software evaluation of the digital filter designs and actual hardware implementation of some of the signal processor algorithms on an experimental testbed at NASA Langley.
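The summary does not say which filter structures were studied; as a simple stand-in, the sketch below designs a windowed-sinc low-pass FIR filter and applies it to a synthetic structural-sensor signal. The cutoff frequency, sample rate, and tap count are illustrative assumptions.

```python
import numpy as np

def lowpass_fir(cutoff_hz, fs_hz, n_taps=31):
    """Windowed-sinc low-pass FIR coefficients (Hamming window), normalised to unity DC gain."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = np.sinc(2.0 * cutoff_hz / fs_hz * n) * np.hamming(n_taps)
    return h / h.sum()

# Illustrative use: retain structural modes below 5 Hz from a 100 Hz sensor stream
fs = 100.0
taps = lowpass_fir(5.0, fs)
t = np.arange(0.0, 10.0, 1.0 / fs)
mode = np.sin(2 * np.pi * 1.0 * t)                         # low-frequency structural mode
signal = mode + 0.5 * np.sin(2 * np.pi * 30.0 * t)         # plus high-frequency disturbance
filtered = np.convolve(signal, taps, mode="same")
print("residual std before/after filtering:",
      round(float(np.std(signal - mode)), 3), "->",
      round(float(np.std(filtered - mode)), 3))
```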
NASA Technical Reports Server (NTRS)
Shiva, S. G.; Shah, A. M.
1980-01-01
The details of digital systems can be conveniently input into the design automation system by means of hardware description language (HDL). The computer aided design and test (CADAT) system at NASA MSFC is used for the LSI design. The digital design language (DDL) was selected as HDL for the CADAT System. DDL translator output can be used for the hardware implementation of the digital design. Problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system are addressed.
Design and development of data acquisition system based on WeChat hardware
NASA Astrophysics Data System (ADS)
Wang, Zhitao; Ding, Lei
2018-06-01
A data acquisition system based on WeChat hardware offers a way to make data acquisition more widely usable and practical. The whole system is based on the WeChat hardware platform: the hardware part is developed on a DA14580 development board and the software part is based on Alibaba Cloud. We designed the service module, logic processing module, data processing module and database module. The communication between hardware and software uses the AirSync protocol. We tested this system by collecting temperature and humidity data, and the result shows that the system can acquire temperature and humidity in real time according to the settings.
Space vehicle onboard command encoder
NASA Technical Reports Server (NTRS)
1975-01-01
A flexible onboard encoder system was designed for the space shuttle. The following areas were covered: (1) implementation of the encoder design into hardware to demonstrate the various encoding algorithms/code formats, (2) modulation techniques in a single hardware package to maintain comparable reliability and link integrity of the existing link systems and to integrate the various techniques into a single design using current technology. The primary function of the command encoder is to accept input commands, generated either locally onboard the space shuttle or remotely from the ground, format and encode the commands in accordance with the payload input requirements and appropriately modulate a subcarrier for transmission by the baseband RF modulator. The following information was provided: command encoder system design, brassboard hardware design, test set hardware and system packaging, and software.
NASA Astrophysics Data System (ADS)
Javier Romualdez, Luis
Scientific balloon-borne instrumentation offers an attractive, competitive, and effective alternative to space-borne missions when considering the overall scope, cost, and development timescale required to design and launch scientific instruments. In particular, the balloon-borne environment provides a near-space regime that is suitable for a number of modern astronomical and cosmological experiments, where the atmospheric interference suffered by ground-based instrumentation is negligible at stratospheric altitudes. This work is centered around the analytical strategies and implementation considerations for the attitude determination and control of SuperBIT, a scientific balloon-borne payload capable of meeting the strict sub-arcsecond pointing and image stability requirements demanded by modern cosmological experiments. Broadly speaking, the designed stability specifications of SuperBIT coupled with its observational efficiency, image quality, and accessibility rivals state-of-the-art astronomical observatories such as the Hubble Space Telescope. To this end, this work presents an end-to-end design methodology for precision pointing balloon-borne payloads such as SuperBIT within an analytical yet implementationally grounded context. Simulation models of SuperBIT are analytically derived to aid in pre-assembly trade-off and case studies that are pertinent to the dynamic balloon-borne environment. From these results, state estimation techniques and control methodologies are extensively developed, leveraging the analytical framework of simulation models and design studies. This pre-assembly design phase is physically validated during assembly, integration, and testing through implementation in real-time hardware and software, which bridges the gap between analytical results and practical application. SuperBIT attitude determination and control is demonstrated throughout two engineering test flights that verify pointing and image stability requirements in flight, where the post-flight results close the overall design loop by suggesting practical improvements to pre-design methodologies. Overall, the analytical and practical results presented in this work, though centered around the SuperBIT project, provide generically useful and implementationally viable methodologies for high precision balloon-borne instrumentation, all of which are validated, justified, and improved both theoretically and practically. As such, the continuing development of SuperBIT, built from the work presented in this thesis, strives to further the potential for scientific balloon-borne astronomy in the near future.
System design and integration of the large-scale advanced prop-fan
NASA Technical Reports Server (NTRS)
Huth, B. P.
1986-01-01
In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low speed and high speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to the evaluation of aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results are given.
Introduction on performance analysis and profiling methodologies for KVM on ARM virtualization
NASA Astrophysics Data System (ADS)
Motakis, Antonios; Spyridakis, Alexander; Raho, Daniel
2013-05-01
The introduction of hardware virtualization extensions on ARM Cortex-A15 processors has enabled the implementation of full virtualization solutions for this architecture, such as KVM on ARM. This trend motivates the need to quantify and understand the performance impact arising from the application of this technology. In this work we start looking into some interesting performance metrics on KVM for ARM processors, which can provide us with useful insight that may lead to potential improvements in the future. This includes measurements such as interrupt latency and guest exit cost, performed on ARM Versatile Express and Samsung Exynos 5250 hardware platforms. Furthermore, we discuss additional methodologies that can provide us with a deeper understanding of the performance footprint of KVM in the future. We identify some of the most interesting approaches in this field, and perform a tentative analysis on how these may be implemented in the KVM on ARM port. These take into consideration hardware- and software-based counters for profiling, and issues related to the limitations of the simulators which are often used, such as the ARM Fast Models platform.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)
2001-01-01
Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications for bottleneck problems solution using RC technologies. The issue of a science algorithm level of abstraction necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.
Complex multidisciplinary system composition for aerospace vehicle conceptual design
NASA Astrophysics Data System (ADS)
Gonzalez, Lex
Although there exists a vast amount of work concerning the analysis, design, and integration of aerospace vehicle systems, there is no standard for how this data and knowledge should be combined in order to create a synthesis system. Each institution creating a synthesis system has in-house vehicle and hardware components it is attempting to model and proprietary methods with which to model them. As a result, synthesis systems begin as one-off creations meant to answer a specific problem. As the scope of the synthesis system grows to encompass more and more problems, so does its size and complexity; in order for a single synthesis system to answer multiple questions, the number of methods and method interfaces must increase. As a means to curtail the tendency for an increase in an aircraft synthesis system's capability to lead to an increase in its size and complexity, this research effort focuses on the idea that each problem in aerospace requires its own analysis framework. By centering the methodology on matching an analysis framework to the problem being solved, the complexity of the analysis framework is decoupled from the complexity of the system that creates it. The derived methodology allows for the composition of complex multi-disciplinary systems (CMDS) through the automatic creation and implementation of system and disciplinary method interfaces. The CMDS Composition process follows a four-step methodology meant to take a problem definition and progress toward the creation of an analysis framework meant to answer said problem. The implementation of the CMDS Composition process takes user-selected disciplinary analysis methods and automatically integrates them to create a syntactically composable analysis framework. As a means of assessing the validity of the CMDS Composition process, a prototype system (AVDDBMS) has been developed. AVDDBMS has been used to model the Generic Hypersonic Vehicle (GHV), an open source family of hypersonic vehicles originating from the Air Force Research Laboratory. AVDDBMS has been applied in three different ways in order to assess its validity: verification using GHV disciplinary data, validation using selected disciplinary analysis methods, and application of the CMDS Composition process to assess the design solution space for the GHV hardware. The research demonstrates the holistic effect that the selection of individual disciplinary analysis methods has on the structure and integration of the analysis framework.
Hardware cleanliness methodology and certification
NASA Technical Reports Server (NTRS)
Harvey, Gale A.; Lash, Thomas J.; Rawls, J. Richard
1995-01-01
The inadequacy of mass-loss cleanliness criteria for the selection of materials for contamination-sensitive uses, and for the processing of flight hardware for contamination-sensitive instruments, is discussed. Materials selection for flight hardware is usually based on mass loss (ASTM E-595). However, flight hardware cleanliness (MIL 1246A) is a surface cleanliness assessment. It is possible for materials (e.g., Sil-Pad 2000) to pass ASTM E-595 and fail MIL 1246A class A by orders of magnitude. Conversely, it is possible for small amounts of nonconforming material (Huma-Seal conformal coating) to present no significant cleanliness problems to an optical flight instrument. Effective cleaning (precleaning, precision cleaning, and ultra cleaning) and cleanliness verification are essential for contamination-sensitive flight instruments. Polish cleaning of hardware (e.g., vacuum baking for vacuum applications) and storage of clean hardware (e.g., laser optics) are discussed. Silicone materials present special concerns for use in space because of the rapid conversion of their outgassed residues to glass by solar ultraviolet radiation and/or atomic oxygen. Non-ozone-depleting solvent cleaning and institutional support for cleaning and certification are also discussed.
Application and design of solar photovoltaic system
NASA Astrophysics Data System (ADS)
Tianze, Li; Hengwei, Lu; Chuan, Jiang; Luan, Hou; Xia, Zhang
2011-02-01
Solar modules; power electronic equipment, including the charge-discharge controller, the inverter, test instrumentation, and computer monitoring; and the storage battery or other energy storage and auxiliary generating plant make up the photovoltaic system described in the paper. PV system design should meet the load supply requirements, keep system cost low, carefully consider both software and hardware design, and carry out general software design before hardware design. Taking the design of a PV system as an example, the paper analyzes the design of the system software and hardware, the economic benefit, and the basic ideas and steps of system installation and connection. It elaborates on information acquisition, the software and hardware design of the system, and system evaluation and optimization. Finally, it analyzes and projects the application of photovoltaic technology in outer space, solar lamps, freeways and communications.
Media processors using a new microsystem architecture designed for the Internet era
NASA Astrophysics Data System (ADS)
Wyland, David C.
1999-12-01
The demands of digital image processing, communications and multimedia applications are growing more rapidly than traditional design methods can fulfill them. Previously, only custom hardware designs could provide the performance required to meet the demands of these applications. However, hardware design has reached a crisis point. Hardware design can no longer deliver a product with the required performance and cost in a reasonable time for a reasonable risk. Software based designs running on conventional processors can deliver working designs in a reasonable time and with low risk but cannot meet the performance requirements. What is needed is a media processing approach that combines very high performance, a simple programming model, complete programmability, short time to market and scalability. The Universal Micro System (UMS) is a solution to these problems. The UMS is a completely programmable (including I/O) system on a chip that combines hardware performance with the fast time to market, low cost and low risk of software designs.
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation of movement decoding for a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform the DCT without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals using a Xilinx Zynq-7000 FPGA board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
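As a rough illustration of the reduced-resolution DCT feature extraction described above, the sketch below computes a few low-order DCT coefficients from a decimated signal window in Python/NumPy. The decimation factor, coefficient count, and synthetic input are assumptions for illustration; a hardware version would replace the cosine multiplications with the paper's look-up-table scheme.

```python
import numpy as np

def dct_features(window, n_coeffs=4, decimate=8):
    """Reduced-resolution DCT feature extraction (illustrative sketch).

    window    : 1-D array of ECoG samples for one channel
    n_coeffs  : number of low-order DCT coefficients kept as features
    decimate  : down-sampling factor giving the 'reduced resolution'
    """
    x = window[::decimate]                      # coarse time resolution
    N = len(x)
    k = np.arange(n_coeffs)[:, None]
    n = np.arange(N)[None, :]
    # DCT-II basis; a hardware version would store these coefficients in a
    # small quantized look-up table instead of multiplying at run time.
    basis = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    return basis @ x

# toy usage on a synthetic one-second, 1 kHz channel
rng = np.random.default_rng(0)
features = dct_features(rng.standard_normal(1000))
print(features)
```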
Energy efficient engine low-pressure compressor component test hardware detailed design report
NASA Technical Reports Server (NTRS)
Michael, C. J.; Halle, J. E.
1981-01-01
The aerodynamic and mechanical design of the low pressure compressor component of the Energy Efficient Engine is described. The component was designed to meet the requirements of the Flight Propulsion System while maintaining a low cost approach in providing a low pressure compressor design for the Integrated Core/Low Spool test required in the Energy Efficient Engine Program. The resulting low pressure compressor component design meets or exceeds all design goals with the exception of surge margin. In addition, the expense of hardware fabrication for the Integrated Core/Low Spool test has been minimized through the use of existing minor part hardware.
Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)
NASA Technical Reports Server (NTRS)
Niewoehner, Kevin R.; Carter, John (Technical Monitor)
2001-01-01
The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.
Wide field-of-view dual-band multispectral muzzle flash detection
NASA Astrophysics Data System (ADS)
Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.
2013-06-01
Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including wide field of view and spectral/temporal shift between dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field-of-view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed to assess the system's limitations in addressing multispectral detection requirements; it characterized the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitations of key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.
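Because the two band images are optically co-registered side by side on one sensor, the intra-band processing mentioned above reduces to element-wise arithmetic on the image halves. The sketch below illustrates that idea with a toy normalized band-difference detector; the frame layout, threshold, and flash-like signature are assumptions, not the paper's algorithm.

```python
import numpy as np

def detect_flash(frame, threshold=0.3):
    """Toy intra-band detection on a side-by-side dual-band frame.

    frame     : 2-D array whose left half is band A and right half band B
    threshold : minimum normalized band contrast to flag a pixel
    """
    h, w = frame.shape
    band_a = frame[:, : w // 2].astype(float)
    band_b = frame[:, w // 2 :].astype(float)
    # With the halves optically co-registered, per-pixel band contrast is
    # plain element-wise arithmetic -- no warping or resampling needed.
    contrast = (band_a - band_b) / (band_a + band_b + 1e-6)
    return contrast > threshold

# toy usage: a 480 x 1280 frame with a bright spot only in band A
frame = np.full((480, 1280), 100.0)
frame[200:210, 300:310] = 400.0         # flash-like signature in band A
print(detect_flash(frame).sum(), "pixels flagged")
```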
The Evolution of Exercise Hardware on ISS: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Buxton, R. E.; Kalogera, K. L.; Hanson, A. M.
2017-01-01
During 16 years in low-Earth orbit, the suite of exercise hardware aboard the International Space Station (ISS) has matured significantly. Today, the countermeasure system supports an array of physical-training protocols and serves as an extensive research platform. Future hardware designs are required to have smaller operational envelopes and must also mitigate known physiologic issues observed in long-duration spaceflight. Taking lessons learned from the long history of space exercise will be important to successful development and implementation of future, compact exercise hardware. The evolution of exercise hardware as deployed on the ISS has implications for future exercise hardware and operations. Key lessons learned from the early days of ISS have helped to: 1. Enhance hardware performance (increased speed and loads). 2. Mature software interfaces. 3. Compare inflight exercise workloads to pre-, in-, and post-flight musculoskeletal and aerobic conditions. 4. Improve exercise comfort. 5. Develop complementary hardware for research and operations. Current ISS exercise hardware includes both custom and commercial-off-the-shelf (COTS) hardware. Benefits and challenges to this approach have prepared engineering teams to take a hybrid approach when designing and implementing future exercise hardware. Significant effort has gone into consideration of hardware instrumentation and wearable devices that provide important data to monitor crew health and performance.
Design and Analysis of Morpheus Lander Flight Control System
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Yang, Lee; Fritz, Mathew; Nguyen, Louis H.; Johnson, Wyatt R.; Hart, Jeremy J.
2014-01-01
The Morpheus Lander is a vertical takeoff and landing test bed vehicle developed to demonstrate the system performance of the Guidance, Navigation and Control (GN&C) system capability for the integrated autonomous landing and hazard avoidance system hardware and software. The Morpheus flight control system design must be robust to various mission profiles. This paper presents a design methodology for employing numerical optimization to develop the Morpheus flight control system. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics and propellant slosh. Under the assumption that the Morpheus time-varying dynamics and control system can be frozen over a short period of time, the flight controllers are designed to stabilize all selected frozen-time control systems in the presence of parametric uncertainty. Both control gains in the inner attitude control loop and guidance gains in the outer position control loop are designed to maximize the vehicle performance while ensuring robustness. The flight control system designs provided herein have been demonstrated to provide stable control systems in both Draper Ares Stability Analysis Tool (ASAT) and the NASA/JSC Trick-based Morpheus time domain simulation.
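As a minimal sketch of the frozen-time design idea described above (and not the Morpheus controllers or gains), the snippet below grid-searches a single PD gain pair that stabilizes a family of frozen rigid-body attitude plants with uncertain inertia, scoring each candidate by its worst-case closed-loop eigenvalue. The plant model, inertia range, and gain grid are illustrative assumptions.

```python
import numpy as np

def worst_decay(kp, kd, inertias):
    """Worst-case closed-loop decay rate over a family of frozen plants."""
    worst = -np.inf
    for J in inertias:
        # frozen-time rigid-body attitude plant under PD feedback
        A = np.array([[0.0, 1.0],
                      [-kp / J, -kd / J]])
        worst = max(worst, np.linalg.eigvals(A).real.max())
    return worst   # must be negative for every frozen plant to be stable

# uncertain inertia as propellant is consumed (illustrative numbers)
inertias = np.linspace(800.0, 1500.0, 8)

best = None
for kp in np.linspace(500.0, 5000.0, 20):
    for kd in np.linspace(500.0, 5000.0, 20):
        d = worst_decay(kp, kd, inertias)
        if best is None or d < best[0]:
            best = (d, kp, kd)

print("worst-case eigenvalue real part %.3f at kp=%.0f, kd=%.0f" % best)
```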
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.
2014-01-01
The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
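The load-equation derivation described above amounts to regressing applied loads on strain gage outputs and then checking residuals against the full-scale calibration load. The sketch below shows that flow on synthetic data; the gage count, coefficients, and noise level are assumptions, not the ACTE calibration values.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic calibration set: 21 load cases, 4 strain gages per fitting
true_coeffs = np.array([0.8, -0.3, 1.2, 0.5])
strains = rng.uniform(-1000, 1000, size=(21, 4))            # microstrain
shear = strains @ true_coeffs + rng.normal(0, 20, size=21)  # applied load

# load equation: shear ~= strains @ c  (linear combination of gage outputs)
c, *_ = np.linalg.lstsq(strains, shear, rcond=None)

residuals = shear - strains @ c
full_scale = np.abs(shear).max()
print("2-sigma residual = %.1f%% of full-scale load"
      % (2 * residuals.std() / full_scale * 100))
```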
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Lee H.; Laros, James H., III
This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.
Performance/price estimates for cortex-scale hardware: a design space exploration.
Zaveri, Mazad S; Hammerstrom, Dan
2011-04-01
In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering.
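In the spirit of the virtualization trade-off discussed above, the toy sweep below varies how many model neurons time-share one physical processing element and reports the resulting silicon price and per-neuron update rate. Every constant (neuron count, area, cost, and throughput per element) is a placeholder assumption, not a figure from the paper.

```python
# Toy performance/price sweep over the virtualization factor: how many
# model neurons time-share one physical processing element (PE).
NEURONS = 2e10            # cortex-scale neuron count (illustrative)
PE_AREA_MM2 = 0.05        # silicon area per PE (illustrative)
COST_PER_MM2 = 0.10       # dollars per mm^2 of silicon (illustrative)
PE_UPDATES_PER_S = 1e8    # neuron updates one PE can evaluate per second

for virt in (1, 10, 100, 1000, 10000):
    pes = NEURONS / virt
    price = pes * PE_AREA_MM2 * COST_PER_MM2
    rate = PE_UPDATES_PER_S / virt          # updates per neuron per second
    print(f"virtualization {virt:>6}: ~${price:,.0f}  "
          f"{rate:,.0f} updates/neuron/s")
```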
Advanced telemetry systems for payloads. Technology needs, objectives and issues
NASA Technical Reports Server (NTRS)
1990-01-01
The current trends in advanced payload telemetry are the new developments in advanced modulation/coding, the applications of intelligent techniques, data distribution processing, and advanced signal processing methodologies. Concerted efforts will be required to design ultra-reliable man-rated software to cope with these applications. The intelligence embedded and distributed throughout various segments of the telemetry system will need to be overridden by an operator in case of life-threatening situations, making it a real-time integration issue. Suitable MIL standards on physical interfaces and protocols will be adopted to suit the payload telemetry system. New technologies and techniques will be developed for fast retrieval of mass data. Currently, these technology issues are being addressed to provide more efficient, reliable, and reconfigurable systems. There is a need, however, to change the operation culture. The current role of NASA as a leader in developing all the new innovative hardware should be altered to save both time and money. We should use all the available hardware/software developed by the industry and use the existing standards rather than inventing our own.
Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements
NASA Technical Reports Server (NTRS)
Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.
2013-01-01
This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then wrapping with conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify its chassis design so that it would have a better chance of meeting the new project's radiated emissions requirements.
Propulsion/flight control integration technology (PROFIT) design analysis status
NASA Technical Reports Server (NTRS)
Carlin, C. M.; Hastings, W. J.
1978-01-01
The propulsion flight control integration technology (PROFIT) program was designed to develop a flying testbed dedicated to controls research. The preliminary design, analysis, and feasibility studies conducted in support of the PROFIT program are reported. The PROFIT system was built around existing IPCS hardware. In order to achieve the desired system flexibility and capability, additional interfaces between the IPCS hardware and F-15 systems were required. The requirements for additions and modifications to the existing hardware were defined. Those interfaces involving the more significant changes were studied. The DCU memory expansion to 32K with flight qualified hardware was completed on a brassboard basis. The uplink interface breadboard and a brassboard of the central computer interface were also tested. Two preliminary designs and corresponding program plans are presented.
RF control at SSCL — an object oriented design approach
NASA Astrophysics Data System (ADS)
Dohan, D. A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.
1994-12-01
The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1994, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment for the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components such as amplifiers, tubes, power supplies, PID loops, etc. were analyzed to produce OOA information, behavior and process models. Using these models, OOD was iteratively applied to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects, and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed.
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Teubert, Chris; Gorospe, George; Burgett, Drew; Quach, Cuong C.; Hogge, Edward
2016-01-01
The airspace is becoming more and more complicated, and will continue to do so in the future with the integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other forms of aviation technology into the airspace. The new technology and complexity increase the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems and systems of systems can be very difficult, expensive, and sometimes unsafe in real life scenarios. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. The framework injects flight-related anomalies related to ground systems, routing, airport congestion, etc. to test and verify algorithms for NAS safety. In our research work, we develop a live, distributed, hardware-in-the-loop testbed for aviation and airspace prognostics along with exploring further research possibilities to verify and validate future algorithms for NAS safety. The testbed integrates virtual aircraft using the X-Plane simulator and X-PlaneConnect toolbox, UAVs using onboard sensors and cellular communications, and hardware-in-the-loop components. The testbed also includes a research framework to support and simplify future research activities. It enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. This paper describes the design, development, and testing of this system. Software reliability, safety and latency are some of the critical design considerations in development of the testbed. Integration of HITL elements in the development phases and verification/validation are key elements of this report.
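As a minimal, hedged sketch of the kind of prognostic prediction such a testbed would exercise for a UAV (not an algorithm from the paper), the snippet below propagates a coulomb-counting battery model forward under sampled future loads to estimate time until a state-of-charge safety threshold, giving a Monte Carlo prediction distribution. The capacity, load statistics, and threshold are illustrative assumptions.

```python
import numpy as np

def predict_time_to_threshold(soc, load_amps, capacity_ah=6.0,
                              threshold=0.3, dt_s=1.0, horizon_s=3600):
    """Propagate a simple battery model forward to predict when the state
    of charge drops below a safety threshold (a remaining-time prognostic)."""
    t = 0.0
    while soc > threshold and t < horizon_s:
        soc -= load_amps * dt_s / 3600.0 / capacity_ah   # coulomb counting
        t += dt_s
    return t if soc <= threshold else np.inf

# Monte Carlo over uncertain future load gives a prediction distribution
rng = np.random.default_rng(2)
samples = [predict_time_to_threshold(soc=0.9, load_amps=rng.normal(12, 2))
           for _ in range(500)]
print("median time to threshold: %.0f s" % np.median(samples))
```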
FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine
NASA Astrophysics Data System (ADS)
Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo
Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limits its ability to deploy large amounts of capacity in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
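For readers unfamiliar with the lookup operation such a forwarding engine accelerates, the snippet below is a small software reference model of longest-prefix matching using a binary trie; it describes the function being implemented, not the FPS-RAM microarchitecture, and the routes and port names are made up.

```python
class PrefixTrie:
    """Software reference model of longest-prefix-match forwarding."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix, length, next_hop):
        node = self.root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            node = node.setdefault(bit, {})
        node["hop"] = next_hop

    def lookup(self, addr):
        node, best = self.root, None
        for i in range(32):
            if "hop" in node:          # remember the longest match so far
                best = node["hop"]
            node = node.get((addr >> (31 - i)) & 1)
            if node is None:
                break
        else:
            if "hop" in node:
                best = node["hop"]
        return best

def ip(s):
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

table = PrefixTrie()
table.insert(ip("10.0.0.0"), 8, "port1")
table.insert(ip("10.1.0.0"), 16, "port2")
print(table.lookup(ip("10.1.2.3")))   # port2 (longest match wins)
print(table.lookup(ip("10.9.9.9")))   # port1
```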
NASA Technical Reports Server (NTRS)
Keltner, D. J.
1975-01-01
This functional design specification defines the total systems approach to meeting the requirements stated in the Detailed Requirements Document for Stowage List and Hardware Tracking System for the space shuttle program. The stowage list and hardware tracking system is identified at the system and subsystem level with each subsystem defined as a function of the total system.
Research of Steel-dielectric Transition Using Subminiature Eddy-current Transducer
NASA Astrophysics Data System (ADS)
Dmitriev, S. F.; Malikov, V. N.; Sagalakov, A. M.; Ishkov, A. V.
2018-05-01
The research aims to develop a subminiature transducer for electrical steel investigation. The authors determined the capability to study steel characteristics at different depths based on variations of eddy-current transducer amplitude at the steel-dielectric boundary. A subminiature transformer-type transducer was designed, which enables local investigations of ferromagnetic materials using an eddy-current method based on local studies of the steel's electrical conductivity. Having the designed transducer as a basis, a hardware-software complex was built to perform experimental studies of steel at the interface boundary. Test results are reported for a specimen with continuous and discrete measurements taken at different frequencies. The article provides the key technical information about the eddy-current transformer used and describes the measurement methodology that makes it possible to monitor the steel-to-dielectric transition.
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
Virtual Reality as Innovative Approach to the Interior Designing
NASA Astrophysics Data System (ADS)
Kaleja, Pavol; Kozlovská, Mária
2017-06-01
We can observe significant potential of information and communication technologies (ICT) in the interior designing field through the development of software and hardware virtual reality tools. Using ICT tools offers a realistic perception of the proposal in its initial idea (the study). A group of real-time visualization tools, supported by hardware such as the Oculus Rift and HTC Vive, provides free walkthrough and movement in the virtual interior with the possibility of virtual designing. By improving ICT software tools for designing in virtual reality, we can achieve a still more realistic virtual environment. The contribution presents a proposal for an innovative approach to interior designing in virtual reality, using the latest ICT software and hardware virtual reality technologies.
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists.
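A minimal sketch of this style of sensitivity analysis, under assumptions of our own (synthetic endmember spectra, Gaussian noise, and a fixed abundance threshold, none of which come from the paper): scale a target signature into background pixels, unmix by least squares, and report detection rate as a function of target abundance.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands = 16

# stand-ins for measured endmember spectra (real data would be used in practice)
background = np.abs(rng.normal(1.0, 0.2, size=n_bands))
target = np.abs(rng.normal(1.0, 0.5, size=n_bands))
endmembers = np.column_stack([background, target])

def detection_rate(abundance, trials=500, noise=0.05, threshold=0.02):
    """Fraction of noisy mixed pixels in which the target is recovered."""
    hits = 0
    for _ in range(trials):
        pixel = (1 - abundance) * background + abundance * target
        pixel += rng.normal(0, noise, size=n_bands)    # preserves variability
        # linear unmixing: estimate endmember abundances by least squares
        est, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
        hits += est[1] > threshold
    return hits / trials

for a in (0.0, 0.02, 0.05, 0.1, 0.2):
    print(f"target abundance {a:4.2f}: detection rate {detection_rate(a):.2f}")
```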
Door Hardware and Installations; Carpentry: 901894.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
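As a toy illustration of the requirements-in, mass-and-economics-out flow described above (not the dissertation's actual regressions), the sketch below rolls payload volume, power, and crew needs into a parametric facility mass and then applies a mass-based cost model to estimate years to return on investment. All coefficients are placeholder assumptions.

```python
def size_facility(payloads):
    """Roll payload requirements up into rough facility-level parameters.

    payloads: list of dicts with volume (m^3), power (kW), and crew needs.
    The overhead fractions below are placeholders, not the study's values.
    """
    volume = sum(p["volume_m3"] for p in payloads) * 1.6   # + ops overhead
    power = sum(p["power_kw"] for p in payloads) * 1.3     # + housekeeping
    crew = sum(p["crew"] for p in payloads)
    mass_kg = 180.0 * volume + 45.0 * power + 4000.0 * crew  # parametric fit
    return {"volume_m3": volume, "power_kw": power, "crew": crew,
            "mass_kg": mass_kg}

def years_to_return(mass_kg, annual_revenue, cost_per_kg=20000.0,
                    ops_fraction=0.08):
    """Mass-based life-cycle cost model: initial cost plus yearly operations."""
    initial = mass_kg * cost_per_kg
    net_yearly = annual_revenue - ops_fraction * initial
    return float("inf") if net_yearly <= 0 else initial / net_yearly

facility = size_facility([
    {"volume_m3": 40, "power_kw": 12, "crew": 2},   # microgravity lab
    {"volume_m3": 120, "power_kw": 8, "crew": 4},   # tourism habitat
])
print(facility)
print("years to return: %.1f" % years_to_return(facility["mass_kg"], 3.0e8))
```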
Testing Strategies and Methodologies for the Max Launch Abort System
NASA Technical Reports Server (NTRS)
Schaible, Dawn M.; Yuchnovicz, Daniel E.
2011-01-01
The National Aeronautics and Space Administration (NASA) Engineering and Safety Center (NESC) was tasked to develop an alternate, tower-less launch abort system (LAS) as risk mitigation for the Orion Project. The successful pad abort flight demonstration test in July 2009 of the "Max" launch abort system (MLAS) provided data critical to the design of future LASs, while demonstrating the Agency's ability to rapidly design, build and fly full-scale hardware at minimal cost in a "virtual" work environment. Limited funding and an aggressive schedule presented a challenge for testing of the complex MLAS system. The successful pad abort flight demonstration test was attributed to the project's systems engineering and integration process, which included: a concise definition of, and an adherence to, flight test objectives; a solid operational concept; well defined performance requirements, and a test program tailored to reducing the highest flight test risks. The testing ranged from wind tunnel validation of computational fluid dynamic simulations to component ground tests of the highest risk subsystems. This paper provides an overview of the testing/risk management approach and methodologies used to understand and reduce the areas of highest risk - resulting in a successful flight demonstration test.
The Systems Engineering Process for Human Support Technology Development
NASA Technical Reports Server (NTRS)
Jones, Harry
2005-01-01
Systems engineering is designing and optimizing systems. This paper reviews the systems engineering process and indicates how it can be applied in the development of advanced human support systems. Systems engineering develops the performance requirements, subsystem specifications, and detailed designs needed to construct a desired system. Systems design is difficult, requiring both art and science and balancing human and technical considerations. The essential systems engineering activity is trading off and compromising between competing objectives such as performance and cost, schedule and risk. Systems engineering is not a complete independent process. It usually supports a system development project. This review emphasizes the NASA project management process as described in NASA Procedural Requirement (NPR) 7120.5B. The process is a top down phased approach that includes the most fundamental activities of systems engineering - requirements definition, systems analysis, and design. NPR 7120.5B also requires projects to perform the engineering analyses needed to ensure that the system will operate correctly with regard to reliability, safety, risk, cost, and human factors. We review the system development project process, the standard systems engineering design methodology, and some of the specialized systems analysis techniques. We will discuss how they could apply to advanced human support systems development. The purpose of advanced systems development is not directly to supply human space flight hardware, but rather to provide superior candidate systems that will be selected for implementation by future missions. The most direct application of systems engineering is in guiding the development of prototype and flight experiment hardware. However, anticipatory systems engineering of possible future flight systems would be useful in identifying the most promising development projects.
Satellite Communication Hardware Emulation System (SCHES)
NASA Technical Reports Server (NTRS)
Kaplan, Ted
1993-01-01
Satellite Communication Hardware Emulator System (SCHES) is a powerful simulator that emulates the hardware used in TDRSS links. SCHES is a true bit-by-bit simulator that models communications hardware accurately enough to be used as a verification mechanism for actual hardware tests on user spacecraft. As a credit to its modular design, SCHES is easily configurable to model any user satellite communication link, though some development may be required to tailor existing software to user specific hardware.
Real-Time Data Processing Onboard Remote Sensor Platforms: Annual Review #3 Data Package
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joe
2003-01-01
The current program status reviewed by this viewgraph presentation includes: 1) New Evaluation Results; 2) Algorithm Improvement Investigations; 3) Electronic Hardware Design; 4) Software Hardware Interface Design.
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Bickford, Mark
1991-01-01
The design and formal verification of a hardware system for a task that is an important component of a fault tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover Clio. A major contribution of the work is the demonstration of a significant fault tolerant hardware design that is mechanically verified by a theorem prover.
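For context on the interactive consistency problem itself, the sketch below is a small software model of the classic oral-messages algorithm OM(1) for four nodes, which tolerates one faulty participant. It is a hedged illustration of the agreement property, not the verified circuit or its exact algorithm; the node names and fault injection are contrived.

```python
from collections import Counter

def om1(values, faulty=None):
    """Oral-messages algorithm OM(1) for 4 nodes, tolerating 1 fault.

    values : dict node -> private value each node would report
    faulty : optional node id that lies (sends arbitrary values)
    Returns each non-faulty node's agreed vector of values.
    """
    nodes = list(values)
    agreed = {n: {} for n in nodes}
    for source in nodes:                       # each node acts as "commander"
        # round 1: the source sends its value to every other node
        received = {n: (hash((source, n)) % 97 if source == faulty
                        else values[source])
                    for n in nodes if n != source}
        # round 2: receivers relay what they got; each node majority-votes
        for n in nodes:
            if n == source:
                agreed[n][source] = values[source]
                continue
            relayed = [received[m] if m != faulty else hash((m, n)) % 89
                       for m in nodes if m != source]
            # own copy plus the two relayed copies; majority wins
            agreed[n][source] = Counter(relayed).most_common(1)[0][0]
    return {n: agreed[n] for n in nodes if n != faulty}

print(om1({"p0": 1, "p1": 1, "p2": 0, "p3": 1}, faulty="p3"))
```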
Novel graphical environment for virtual and real-world operations of tracked mobile manipulators
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.
1993-08-01
A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Digital Hardware Design Teaching: An Alternative Approach
ERIC Educational Resources Information Center
Benkrid, Khaled; Clayton, Thomas
2012-01-01
This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…
Fifty Years of Observing Hardware and Human Behavior
NASA Technical Reports Server (NTRS)
McMann, Joe
2011-01-01
During this half-day workshop, Joe McMann presented the lessons learned during his 50 years of experience in both industry and government, which included all U.S. manned space programs, from Mercury to the ISS. He shared his thoughts about hardware and people and what he has learned from first-hand experience. Included were such topics as design, testing, design changes, development, failures, crew expectations, hardware, requirements, and meetings.
A Framework for an Automated Compilation System for Reconfigurable Architectures
1997-03-01
As with other HDLs, Hardware C requires the designer to be thoroughly familiar with digital hardware design. Vahid, Gong, and Gajski focus on the partitioning of hardware used. They suggest that the greedy approach used by Gupta and De Micheli is easily trapped in local minima by the iterative algorithm [46:216]. To overcome this limitation, Vahid, Gong, and Gajski suggest a binary constraint partitioning approach.
Environmental qualification testing of payload G-534, the Pool Boiling Experiment
NASA Technical Reports Server (NTRS)
Sexton, J. Andrew
1992-01-01
Payload G-534, the prototype Pool Boiling Experiment (PBE), is scheduled to fly on the STS-47 mission in September 1992. This paper describes the purpose of the experiment and the environmental qualification testing program that was used to prove the integrity of the hardware. Component and box level vibration and thermal cycling tests were performed to give an early level of confidence in the hardware designs. At the system level, vibration, thermal extreme soaks, and thermal vacuum cycling tests were performed to qualify the complete design for the expected shuttle environment. The system level vibration testing included three axis sine sweeps and random inputs. The system level hot and cold soak tests demonstrated the hardware's capability to operate over a wide range of temperatures and gave wider latitude in determining which shuttle thermal attitudes were compatible with the experiment. The system level thermal vacuum cycling tests demonstrated the hardware's capability to operate in a convection free environment. A unique environmental chamber was designed and fabricated by the PBE team and allowed most of the environmental testing to be performed within the hardware build laboratory. The completion of the test program gave the project team high confidence in the hardware's ability to function as designed during flight.
Design Report for Low Power Acoustic Detector
2013-08-01
This report describes a very high speed integrated circuit (VHSIC) hardware description language (VHDL) implementation of both the HED and DCD detectors. It covers the hardware design, the target detection algorithm design in both MATLAB and VHDL, and typical performance results.
Design considerations for space flight hardware
NASA Technical Reports Server (NTRS)
Glover, Daniel
1990-01-01
The environmental and design constraints are reviewed along with some insight into the established design and quality assurance practices that apply to low earth orbit (LEO) space flight hardware. It is intended as an introduction for people unfamiliar with space flight considerations. Some basic data and a bibliography are included.
Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544
Without Gravity: Designing Science Equipment for the International Space Station and Beyond
NASA Technical Reports Server (NTRS)
Sato, Kevin Y.
2016-01-01
This presentation discusses space biology research, the space flight factors needed to design hardware to conduct biological science in microgravity, and examples of NASA and commercial hardware that enable space biology study.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
Extravehicular Activity training and hardware design considerations
NASA Technical Reports Server (NTRS)
Thuot, Pierre J.; Harbaugh, Gregory J.
1993-01-01
Designing hardware that can be successfully operated by EVA astronauts for EVA tasks required to assemble and maintain Space Station Freedom requires a thorough understanding of human factors and of the capabilities and limitations of the space-suited astronaut, as well as of the effect of microgravity environment on the crew member's capabilities and on the overhead associated with EVA. This paper describes various training methods and facilities that are being designed for training EVA astronauts for Space Station assembly and maintenance, taking into account the above discussed factors. Particular attention is given to the user-friendly hardware design for EVA and to recent EVA flight experience.
Comparative Modal Analysis of Sieve Hardware Designs
NASA Technical Reports Server (NTRS)
Thompson, Nathaniel
2012-01-01
The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh-Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
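To give a feel for the classical-plate-theory side of such an analysis, the snippet below evaluates the textbook natural-frequency formula for a simply supported rectangular plate. The geometry, material, and boundary conditions are placeholder assumptions and are not the CHIMRA sieve's, which the paper treats with Rayleigh-Ritz and finite element models.

```python
import numpy as np

# Simply supported rectangular plate, classical (Kirchhoff) plate theory.
# Geometry and material are placeholders, not the CHIMRA sieve values.
a, b = 0.10, 0.07              # plate side lengths, m
h = 0.5e-3                     # thickness, m
E, nu, rho = 70e9, 0.33, 2700  # aluminum-like elastic properties

D = E * h**3 / (12 * (1 - nu**2))          # flexural rigidity

def natural_frequency_hz(m, n):
    """f_mn for mode (m, n) of a simply supported plate."""
    omega = np.pi**2 * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (rho * h))
    return omega / (2 * np.pi)

for mode in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    print("mode %s: %.0f Hz" % (mode, natural_frequency_hz(*mode)))
```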
A Streamlined Approach for the Payload Customer in Identifying Payload Design Requirements
NASA Technical Reports Server (NTRS)
Miller, Ladonna J.; Schneider, Walter F.; Johnson, Dexer E.; Roe, Lesa B.
2001-01-01
NASA payload developers from across various disciplines were asked to identify areas where process changes would simplify their task of developing and flying flight hardware. Responses to this query included a central location for consistent hardware design requirements for middeck payloads. The multidisciplinary team assigned to review the numerous payload interface design documents is assessing the Space Shuttle middeck, the SPACEHAB Inc. locker, as well as the MultiPurpose Logistics Module (MPLM) and EXpedite the PRocessing of Experiments to Space Station (EXPRESS) rack design requirements for the payloads. They are comparing the multiple carriers and platform requirements and developing a matrix which illustrates the individual requirements, and where possible, the envelope that encompasses all of the possibilities. The matrix will be expanded to form an overall envelope that the payload developers will have the option to utilize when designing their payload's hardware. This will optimize the flexibility for payload hardware and ancillary items to be manifested on multiple carriers and platforms with minimal impact to the payload developer.
Representation and matching of knowledge to design digital systems
NASA Technical Reports Server (NTRS)
Jones, J. U.; Shiva, S. G.
1988-01-01
A knowledge-based expert system is described that provides an approach to solve a problem requiring an expert with considerable domain expertise and facts about available digital hardware building blocks. To design digital hardware systems from their high level VHDL (Very High Speed Integrated Circuit Hardware Description Language) representation to their finished form, a special data representation is required. This data representation as well as the functioning of the overall system is described.
Water Processor and Oxygen Generation Assembly
NASA Technical Reports Server (NTRS)
Bedard, John
1997-01-01
This report documents the results of the tasks which initiated efforts on design issues relating to the Water Processor (WP) and the Oxygen Generation Assembly (OGA) Flight Hardware for the International Space Station. This report fulfills the Statement of Work deliverables requirement for contract H-29387D. The following lists the tasks required by contract H-29387D: (1) HSSSI shall coordinate a detailed review of WP/OGA Flight Hardware program requirements with personnel from MSFC to identify requirements that can be eliminated without affecting the technical integrity of the WP/OGA Hardware; (2) HSSSI shall conduct the technical interchanges with personnel from MSFC to resolve design issues related to WP/OGA Flight Hardware; (3) HSSSI will initiate discussions with Zellwegger Analytics, Inc. to address design issues related to WP and PCWQM interfaces.
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.
2002-01-01
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or number of pixels, or to change the number of clusters. These changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation. It has the added advantage that the parameterized design compiles approximately three times faster than the original.
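For reference, the snippet below is a plain software version of the k-means clustering being accelerated, parameterized by the number of pixels, channels, and clusters much as the hardware is. It is a hedged sketch using Euclidean distance and floating point, whereas hardware implementations often substitute Manhattan distance and fixed-point arithmetic.

```python
import numpy as np

def kmeans(pixels, n_clusters, n_iters=20, seed=0):
    """Reference k-means for multispectral pixels (n_pixels x n_channels)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_clusters, replace=False)]
    for _ in range(n_iters):
        # assign each pixel to the nearest cluster center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers (a hardware version would typically use
        # Manhattan distance and fixed-point arithmetic instead)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels, centers

# toy 10-channel image flattened to pixels, like the multispectral case
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0, 1, (500, 10)), rng.normal(5, 1, (500, 10))])
labels, centers = kmeans(pixels, n_clusters=2)
print(np.bincount(labels))
```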
Model-Based Development of Automotive Electronic Climate Control Software
NASA Astrophysics Data System (ADS)
Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan
With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on the domain of their expertise rather than on writing large amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The use of Simulink Report Generator to create design documents from the models is presented, along with its usage to run the simulation model and capture the results in the test report. Test automation using a model-based development tool that supports a single set of test cases across several testing levels, with a test procedure independent of the software and hardware platform, is also presented.
Reusable Rocket Engine Operability Modeling and Analysis
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Komar, D. R.
1998-01-01
This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed given an engine concept and the predicted operations and maintenance processes incorporated into simulation models. Historical operations data at a level of detail suitable to model objectives were collected, analyzed, and formatted for use with the models, the simulations were run, and results collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect upon the importance not only of reliable hardware but upon operations and corrective maintenance process improvements.
Generation of 2N + 1-scroll existence in new three-dimensional chaos systems.
Liu, Yue; Guan, Jian; Ma, Chunyang; Guo, Shuxu
2016-08-01
We propose a systematic methodology for creating 2N + 1-scroll chaotic attractors from a simple three-dimensional system, which is named the translation chaotic system. It satisfies the condition $a_{12}a_{21} = 0$, while the Chua system satisfies $a_{12}a_{21} > 0$. In this paper, we also propose an effective design and an analytical approach for constructing 2N + 1 scrolls, the translation transformation principle. The dynamical properties of the system are studied in detail. MATLAB simulation results show the very sophisticated dynamical behaviors and unique chaotic behaviors of the system, providing a new approach to 2N + 1-scroll attractors. Finally, to explore the potential use in technological applications, a novel block circuit diagram is also designed for the hardware implementation of 1-, 3-, 5-, and 7-scroll attractors via switching the switches. The translation chaotic system has the merits of convenience and high sensitivity to initial values, showing potential for future engineering chaos design.
Evaluation of cellular glasses for solar mirror panel applications
NASA Technical Reports Server (NTRS)
Giovan, M.; Adams, M.
1979-01-01
An analytic technique was developed to compare the structural and environmental performance of various materials considered for backing of second surface glass solar mirrors. Cellular glass was determined to be a prime candidate due to its low cost, high stiffness-to-weight ratio, thermal expansion match to mirror glass, evident minimal environmental impact and chemical and dimensional stability under conditions of use. The current state of the art and anticipated developments in cellular glass technology are discussed; material properties are correlated to design requirements. A mathematical model is presented which suggests a design approach which allows minimization of life cost; and, a mechanical and environmental testing program is outlined, designed to provide a material property basis for development of cellular glass hardware, together with methodology for collecting lifetime predictive data. Preliminary material property data from measurements are given. Microstructure of several cellular materials is shown, and sensitivity of cellular glass to freeze-thaw degradation and to slow crack growth is discussed. The effect of surface coating is addressed.
Driver face tracking using semantics-based feature of eyes on single FPGA
NASA Astrophysics Data System (ADS)
Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming
2017-06-01
Tracking the driver's face is essential for driving safety control. Such systems are usually designed with complicated algorithms that recognize the driver's face by means of powerful computers. The design problem concerns not only the detection rate but also damage to components under rigorous environments involving vibration, heat, and humidity. A feasible strategy to counteract this damage is to integrate the entire system into a single chip in order to achieve minimum installation dimension, weight, power consumption, and exposure to air. Meanwhile, an extraordinary methodology is also indispensable to overcome the dilemma of low computing capability versus real-time performance on a low-end chip. In this paper, a novel driver face tracking system is proposed by employing semantics-based vague image representation (SVIR) for minimum hardware resource usage on an FPGA, while real-time performance is also guaranteed. Our experimental results indicate that the proposed face tracking system is viable and promising for smart car design in the future.
Decentralized and self-centered estimation architecture for formation flying of spacecraft
NASA Technical Reports Server (NTRS)
Kang, B. H.; Hadaegh, F. Y.; Scharf, D. P.; Ke, N. -P.
2001-01-01
Formation estimation methodologies for distributed spacecraft systems are formulated and analyzed. A generic form of the formation estimation problem is described by defining a common hardware configuration, observation graph, and feasible estimation topologies.
Computer-aided design and computer science technology
NASA Technical Reports Server (NTRS)
Fulton, R. E.; Voigt, S. J.
1976-01-01
A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.
Design guidelines for robotically serviceable hardware
NASA Technical Reports Server (NTRS)
Gordon, Scott A.
1988-01-01
Research being conducted at the Goddard Space Flight Center into the development of guidelines for the design of robotically serviceable spaceflight hardware is described. A mock-up was built based on an existing spaceflight system demonstrating how these guidelines can be applied to actual hardware. The report examines the basic servicing philosophy being studied and how this philosophy is reflected in the formulation of design guidelines for robotic servicing. A description of the mock-up is presented with emphasis on the design features that make it robot friendly. Three robotic servicing schemes fulfilling the design guidelines were developed for the mock-up. These servicing schemes are examined as to how their implementation was affected by the constraints of the spacecraft system on which the mock-up is based.
Vestibular Function Research (VFR) experiment. Phase B: Design definition study
NASA Technical Reports Server (NTRS)
1978-01-01
The Vestibular Functions Research (VFR) Experiment was established to investigate the neurosensory and related physiological processes believed to be associated with the space flight nausea syndrome and to develop logical means for its prediction, prevention, and treatment. The VFR Project consists of ground and spaceflight experimentation using frogs as specimens. The Phase B Preliminary Design Study provided for the preliminary design of the experiment hardware, preparation of performance and hardware specifications and a Phase C/D development plan, establishment of STS (Space Transportation System) interfaces and mission operations, and the study of a variety of hardware, experiment, and mission options. The study consisted of three major tasks: (1) mission mode trade-off; (2) conceptual design; and (3) preliminary design.
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
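To make the pattern concept concrete, the sketch below illustrates one classic resilience pattern, checkpoint/restart, in plain Python. It is not taken from the report's pattern catalog; the function names, checkpoint interval, and pickle-based persistence are illustrative assumptions.
```python
import os
import pickle

def run_with_checkpoints(step, state, n_steps, path="state.ckpt", every=100):
    """Illustrative checkpoint/restart pattern: periodically persist state so a
    failed run can resume from the last checkpoint instead of from scratch."""
    start = 0
    if os.path.exists(path):                       # recover from a prior failure
        with open(path, "rb") as f:
            start, state = pickle.load(f)
    for i in range(start, n_steps):
        state = step(state)                        # one unit of application work
        if (i + 1) % every == 0:                   # checkpoint interval trades
            with open(path, "wb") as f:            # overhead against lost work
                pickle.dump((i + 1, state), f)
    return state

# Example: a toy iterative computation that can survive interruption
result = run_with_checkpoints(lambda s: s + 1, 0, 1000)
```
The checkpoint interval is the kind of cost-benefit knob the framework is meant to expose: more frequent checkpoints reduce lost work after a failure but increase steady-state overhead.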
Applying a Genetic Algorithm to Reconfigurable Hardware
NASA Technical Reports Server (NTRS)
Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim
2004-01-01
This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.
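For readers unfamiliar with the algorithm being mapped into hardware, the following is a minimal software sketch of a standard generational genetic algorithm (tournament selection, single-point crossover, per-bit mutation). The bit-string encoding, parameter values, and the toy one-max fitness function are assumptions for illustration, not the paper's FPGA implementation.
```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=32, generations=100,
                      p_mut=0.02, rng=random.Random(1)):
    """Minimal generational GA over fixed-length bit strings."""
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)                      # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # per-bit mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy test case: maximize the number of ones ("one-max")
best = genetic_algorithm(fitness=sum)
```
Each of these stages (selection, crossover, mutation, fitness evaluation) is a natural candidate for a pipelined or parallel hardware block, which is where the performance/design space trade-offs the paper discusses arise.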
Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming
2011-11-01
Laser speckle imaging (LSI) is a noninvasive and full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in the field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in FPGA. The pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with the resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system on chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution also can be used to produce an application specific integrated circuit for LSI processing.
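As background on the computation such processors implement, the sketch below computes the standard spatial speckle contrast K = sigma/mean over a sliding window, from which a relative flow index (1/K^2) is commonly derived. The 7x7 window, SciPy-based filtering, and synthetic test image are assumptions; the paper's FPGA pipeline is not reproduced here.
```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial laser speckle contrast K = sigma/mean over a sliding win x win
    window; lower K corresponds to faster flow (more speckle blurring)."""
    img = raw.astype(np.float64)
    mean = uniform_filter(img, win)              # local mean
    mean_sq = uniform_filter(img * img, win)     # local mean of squares
    var = np.maximum(mean_sq - mean * mean, 0.0)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# A commonly used relative flow index is 1/K^2
raw = np.random.poisson(100, size=(480, 640)).astype(np.float64)
K = speckle_contrast(raw)
flow_index = 1.0 / np.maximum(K, 1e-6) ** 2
```
Because the per-pixel arithmetic uses only local sums and sums of squares, it maps well onto the pipelined, parallel fixed-point datapaths that an FPGA implementation favors.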
Using Modern Design Tools for Digital Avionics Development
NASA Technical Reports Server (NTRS)
Hyde, David W.; Lakin, David R., II; Asquith, Thomas E.
2000-01-01
Shrinking development time and increased complexity of new avionics force the designer to use modern tools and methods during hardware development. Engineers at the Marshall Space Flight Center have successfully upgraded their design flow and used it to develop a Mongoose V based radiation tolerant processor board for the International Space Station's Water Recovery System. The design flow, based on hardware description languages, simulation, synthesis, hardware models, and full functional software model libraries, allowed designers to fully simulate the processor board from reset through initialization before any boards were built. The fidelity of a digital simulation is limited by the accuracy of the models used and how realistically the designer drives the circuit's inputs during simulation. By using the actual silicon during simulation, device modeling errors are reduced. Numerous design flaws were discovered early in the design phase when they could be easily fixed. The use of hardware models and actual MIPS software loaded into full functional memory models also provided checkout of the software development environment. This paper will describe the design flow used to develop the processor board and give examples of errors that were found using the tools. An overview of the processor board firmware will also be covered.
CD-ROM Hardware Configurations: Selection and Design.
ERIC Educational Resources Information Center
Jaffe, Lee David; Watkins, Steven G.
1992-01-01
Presents selection and design considerations to help libraries make informed decisions about hardware configurations of CD-ROM systems. Highlights include CD-ROM configurations, including single drive workstations, daisychains, and jukeboxes; network configurations, including remote access; microcomputer features; CD-ROM drive selection; and…
Stand-alone development system using a KIM-1 microcomputer module
NASA Technical Reports Server (NTRS)
Nickum, J. D.
1978-01-01
A small microprocessor-based system is described that was designed to contain all or most of the interface hardware, to make the hardware easy to access and modify, to be capable of being strapped to the seat of a small general aviation aircraft, and to be independent of the aircraft power system. The system is used to develop a low cost Loran C sensor processor, but is designed such that the Loran interface boards may be removed and other hardware interfaces inserted into the same connectors. This flexibility is achieved by memory-mapping the interfaces into the microprocessor.
Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware
Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose
2015-01-01
Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839
The capture and recreation of 3D auditory scenes
NASA Astrophysics Data System (ADS)
Li, Zhiyun
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D direction digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating it by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for a headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Alewine, Neal Jon
1993-01-01
Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures and has been implemented in hardware by researchers and in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.
Simulation of Attacks for Security in Wireless Sensor Network.
Diaz, Alvaro; Sanchez, Pablo
2016-11-18
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon
2015-01-01
This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS) which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as being an extension of control theory; and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
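As a hedged illustration of how FP and FN probabilities can be bounded for a redundant-sensor abort trigger, the sketch below evaluates a hypothetical 2-of-3 voting trigger with fictitious per-sensor probabilities, in the spirit of the paper's use of fictitious reliability data. It is not the SLS methodology and omits the SDQ contribution.
```python
from math import comb

def vote_prob(p, n, k):
    """Probability that at least k of n independent sensors indicate the event,
    given each indicates it with probability p (binomial tail)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

# Fictitious per-sensor probabilities for a hypothetical 2-of-3 voting trigger
p_spurious = 1e-4        # a sensor falsely indicates the abort condition
p_missed   = 1e-3        # a sensor fails to indicate a real abort condition

fp = vote_prob(p_spurious, n=3, k=2)            # false positive: >= 2 spurious
fn = 1 - vote_prob(1 - p_missed, n=3, k=2)      # false negative: < 2 correct
print(f"FP ~ {fp:.2e}   FN ~ {fn:.2e}")
```
In a fuller treatment, qualifying out suspect sensors before the vote changes the effective n and k, which is how an SDQ stage shifts the FP/FN balance.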
NASA Astrophysics Data System (ADS)
Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.
2017-02-01
The increasing system complexity of FPGA-based hardware designs and shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture with a small impact on the cross section (increasing up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.
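A rough sketch of how the MWBF figure of merit trades throughput against cross section is given below; the flux, cross sections, and throughput values are illustrative assumptions, chosen only to reproduce the qualitative point that a large speedup can outweigh a modest cross-section increase.
```python
def mwbf(throughput, cross_section, flux):
    """Mean Workload Between Failures: work completed per failure, assuming a
    constant failure rate lambda = cross_section * flux (failures per second)."""
    mtbf = 1.0 / (cross_section * flux)     # seconds per failure
    return throughput * mtbf                # work units per failure

flux = 1.0e4                                # particles / cm^2 / s (illustrative)
baseline   = mwbf(throughput=1.0,    cross_section=1.0e-9, flux=flux)
hls_design = mwbf(throughput=5000.0, cross_section=8.0e-9, flux=flux)

print(f"MWBF gain ~ {hls_design / baseline:.0f}x")   # ~625x for these numbers
```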
Environmental qualification testing of the prototype pool boiling experiment
NASA Technical Reports Server (NTRS)
Sexton, J. Andrew
1992-01-01
The prototype Pool Boiling Experiment (PBE) flew on the STS-47 mission in September 1992. This report describes the purpose of the experiment and the environmental qualification testing program that was used to prove the integrity of the prototype hardware. Component and box level vibration and thermal cycling tests were performed to give an early level of confidence in the hardware designs. At the system level, vibration, thermal extreme soaks, and thermal vacuum cycling tests were performed to qualify the complete design for the expected shuttle environment. The system level vibration testing included three axis sine sweeps and random inputs. The system level hot and cold soak tests demonstrated the hardware's capability to operate over a wide range of temperatures and gave the project team a wider latitude in determining which shuttle thermal attitudes were compatible with the experiment. The system level thermal vacuum cycling tests demonstrated the hardware's capability to operate in a convection free environment. A unique environmental chamber was designed and fabricated by the PBE team and allowed most of the environmental testing to be performed within the project's laboratory. The completion of the test program gave the project team high confidence in the hardware's ability to function as designed during flight.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2010-05-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning, and programming, among others) in the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We show design principles of programmable relational optoelectronic time-pulse encoded processors based on continuous logic, order logic, and temporal wave processes. We consider a structure that executes analog signal extraction and sorting of analog and time-pulse coded variables. We offer an optoelectronic realization of such a base relational order-logic element, consisting of time-pulse coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logic elements, and programmable commutation blocks. We estimate technical parameters of devices and processors built on such base elements by simulation and experimental research: optical input signal power 0.2 - 20 uW, processing time 1 - 10 us, supply voltage 1 - 3 V, power consumption 10 - 100 uW, extended functional possibilities, and learning possibilities. We discuss some aspects of possible rules and principles of learning and programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that it is possible to create sorting machines, neural networks, and hybrid data-processing systems with nontraditional number systems and picture operands on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
Demonstration of a Safety Analysis on a Complex System
NASA Technical Reports Server (NTRS)
Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey;
1997-01-01
For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards is done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: There exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.
NASA Technical Reports Server (NTRS)
Bragg-Sitton, S. M.; Webster, K. L.
2007-01-01
Nonnuclear testing can be a valuable tool in the development of an in-space nuclear power or propulsion system. In a nonnuclear test facility, electric heaters are used to simulate heat from nuclear fuel. Standard testing allows one to fully assess thermal, heat transfer, and stress related attributes of a given system but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. The integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and full nuclear testing. By implementing a neutronic response model to simulate the dynamic response that would be expected in a fueled reactor system, one can better understand system integration issues, characterize integrated system response times and response characteristics, and assess potential design improvements with a relatively small fiscal investment. Initial system dynamic response testing was demonstrated on the integrated SAFE 100a heat pipe cooled, electrically heated reactor and heat exchanger hardware. This Technical Memorandum discusses the status of the planned dynamic test methodology for implementation in the direct-drive gas-cooled reactor testing and assesses the additional instrumentation needed to implement high-fidelity dynamic testing.
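One common way to emulate dynamic neutronic response in an electrically heated test is a point reactor kinetics model driving the heater power command. The sketch below integrates a one-delayed-group point kinetics model; the kinetics parameters, reactivity profile, and forward-Euler integration are illustrative assumptions, not the SAFE 100a or direct-drive gas-cooled reactor values.
```python
import numpy as np

def point_kinetics(rho_of_t, t_end=30.0, dt=1e-3,
                   beta=0.0065, Lambda=1e-4, lam=0.08):
    """One-delayed-group point reactor kinetics, integrated with forward Euler.
    n is relative power; C is the delayed-neutron precursor concentration."""
    n, C = 1.0, beta / (Lambda * lam)       # equilibrium initial condition
    t, powers = 0.0, []
    while t < t_end:
        rho = rho_of_t(t)                   # reactivity commanded by the "control"
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n, C, t = n + dn * dt, C + dC * dt, t + dt
        powers.append(n)
    return np.array(powers)

# Small step reactivity insertion at t = 10 s (illustrative)
power = point_kinetics(lambda t: 0.001 if t > 10.0 else 0.0)
```
In a test implementation, the computed relative power would set the electric heater setpoint, so the hardware sees the transient it would experience with real fuel.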
A Stochastic Spiking Neural Network for Virtual Screening.
Morro, A; Canals, V; Oliver, A; Alomar, M L; Galan-Prado, F; Ballester, P J; Rossello, J L
2018-04-01
Virtual screening (VS) has become a key computational tool in early drug design and screening performance is of high relevance due to the large volume of data that must be processed to identify molecules with the sought activity-related pattern. At the same time, the hardware implementations of spiking neural networks (SNNs) arise as an emerging computing technique that can be applied to parallelize processes that normally present a high cost in terms of computing time and power. Consequently, SNN represents an attractive alternative to perform time-consuming processing tasks, such as VS. In this brief, we present a smart stochastic spiking neural architecture that implements the ultrafast shape recognition (USR) algorithm, achieving two orders of magnitude of speed improvement with respect to USR software implementations. The neural system is implemented in hardware using field-programmable gate arrays, allowing a highly parallelized USR implementation. The results show that, due to the high parallelization of the system, millions of compounds can be checked in reasonable times. From these results, we can state that the proposed architecture arises as a feasible methodology to efficiently enhance time-consuming data-mining processes such as 3-D molecular similarity search.
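For reference, the USR computation that the spiking hardware accelerates can be sketched in software as below: twelve shape descriptors (three distance moments for each of four reference points) and an inverse-Manhattan similarity score. The random "library" is purely illustrative, and the stochastic spiking implementation itself is not shown.
```python
import numpy as np

def usr_descriptors(coords):
    """12-element USR shape descriptor: mean, std, and cube-rooted third central
    moment of atomic distances to four reference points."""
    ctd = coords.mean(axis=0)                                    # centroid
    d_ctd = np.linalg.norm(coords - ctd, axis=1)
    cst = coords[d_ctd.argmin()]                                 # closest to ctd
    fct = coords[d_ctd.argmax()]                                 # farthest from ctd
    ftf = coords[np.linalg.norm(coords - fct, axis=1).argmax()]  # farthest from fct
    desc = []
    for ref in (ctd, cst, fct, ftf):
        d = np.linalg.norm(coords - ref, axis=1)
        mu = d.mean()
        third = ((d - mu) ** 3).mean()
        desc += [mu, d.std(), np.cbrt(third)]
    return np.array(desc)

def usr_similarity(a, b):
    """USR similarity in (0, 1]: 1 for identical descriptors."""
    return 1.0 / (1.0 + np.abs(a - b).mean())

# Toy screen: rank a random 'library' of conformers against a query shape
query = usr_descriptors(np.random.rand(30, 3))
library = [usr_descriptors(np.random.rand(30, 3)) for _ in range(1000)]
scores = sorted((usr_similarity(query, m) for m in library), reverse=True)
```
Because scoring each library compound is independent of the others, the similarity evaluation is embarrassingly parallel, which is what the FPGA architecture exploits.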
NASA Astrophysics Data System (ADS)
Udell, C.; Selker, J. S.
2017-12-01
The increasing availability and functionality of Open-Source software and hardware along with 3D printing, low-cost electronics, and proliferation of open-access resources for learning rapid prototyping are contributing to fundamental transformations and new technologies in environmental sensing. These tools invite reevaluation of time-tested methodologies and devices toward more efficient, reusable, and inexpensive alternatives. Building upon Open-Source design facilitates community engagement and invites a Do-It-Together (DIT) collaborative framework for research where solutions to complex problems may be crowd-sourced. However, barriers persist that prevent researchers from taking advantage of the capabilities afforded by open-source software, hardware, and rapid prototyping. Some of these include: requisite technical skillsets, knowledge of equipment capabilities, identifying inexpensive sources for materials, money, space, and time. A university MAKER space staffed by engineering students to assist researchers is one proposed solution to overcome many of these obstacles. This presentation investigates the unique capabilities the USDA-funded Openly Published Environmental Sensing (OPEnS) Lab affords researchers, within Oregon State and internationally, and the unique functions these types of initiatives support at the intersection of MAKER spaces, Open-Source academic research, and open-access dissemination.
Advanced Propulsion and TPS for a Rapidly-Prototyped CEV
NASA Astrophysics Data System (ADS)
Hudson, Gary C.
2005-02-01
Transformational Space Corporation (t/Space) is developing for NASA the initial designs for the Crew Exploration Vehicle family, focusing on a Launch CEV for transporting NASA and civilian passengers from Earth to orbit. The t/Space methodology is rapid prototyping of major vehicle systems, and deriving detailed specifications from the resulting hardware, avoiding "written-in-advance" specs that can force the costly invention of new capabilities simply to meet such specs. A key technology shared by the CEV family is Vapor Pressurized propulsion (Vapak) for simplicity and reliability, which provides electrical power, life support gas and a heat sink in addition to propulsion. The CEV family also features active transpiration cooling of re-entry surfaces (for reusability) backed up by passive thermal protection.
Hyperscanning: simultaneous fMRI during linked social interactions.
Montague, P Read; Berns, Gregory S; Cohen, Jonathan D; McClure, Samuel M; Pagnoni, Giuseppe; Dhamala, Mukesh; Wiest, Michael C; Karpov, Igor; King, Richard D; Apple, Nathan; Fisher, Ronald E
2002-08-01
"Plain question and plain answer make the shortest road out of most perplexities." Mark Twain-Life on the Mississippi. A new methodology for the measurement of the neural substrates of human social interaction is described. This technology, termed "Hyperscan," embodies both the hardware and the software necessary to link magnetic resonance scanners through the internet. Hyperscanning allows for the performance of human behavioral experiments in which participants can interact with each other while functional MRI is acquired in synchrony with the behavioral interactions. Data are presented from a simple game of deception between pairs of subjects. Because people may interact both asymmetrically and asynchronously, both the design and the analysis must accommodate this added complexity. Several potential approaches are described.
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Human factors technology for America's space program
NASA Technical Reports Server (NTRS)
Montemerlo, M. D.
1982-01-01
NASA is initiating a space human factors research and technology development program in October 1982. The impetus for this program stems from: the frequent and economical access to space provided by the Shuttle, the advances in control and display hardware/software made possible through the recent explosion in microelectronics technology, heightened interest in a space station, heightened interest by the military in space operations, and the fact that the technology for long duration stay times for man in space has received relatively little attention since the Apollo and Skylab missions. The rationale for and issues in the five thrusts of the new program are described. The main thrusts are: basic methodology, crew station design, ground control/operations, teleoperations and extra vehicular activity.
Automated power distribution system hardware [for space station power supplies]
NASA Technical Reports Server (NTRS)
Anderson, Paul M.; Martin, James A.; Thomason, Cindy
1989-01-01
An automated power distribution system testbed for the space station common modules has been developed. It incorporates automated control and monitoring of a utility-type power system. Automated power system switchgear, control and sensor hardware requirements, hardware design, test results, and potential applications are discussed. The system is designed so that the automated control and monitoring of the power system is compatible with both a 208-V, 20-kHz single-phase AC system and a high-voltage (120 to 150 V) DC system.
MSFC Skylab structures and mechanical systems mission evaluation
NASA Technical Reports Server (NTRS)
1974-01-01
A performance analysis for structural and mechanical major hardware systems and components is presented. Development background testing, modifications, and requirement adjustments are included. Functional narratives are provided for comparison purposes, as are predicted design performance criteria. Each item is evaluated on an individual basis: that is, (1) history (requirements, design, manufacture, and test); (2) in-orbit performance (description and analysis); and (3) conclusions and recommendations regarding future space hardware application. Overall, the structural and mechanical performance of the Skylab hardware was outstanding.
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
Preliminary design polymeric materials experiment [for space shuttles and Spacelab missions]
NASA Technical Reports Server (NTRS)
Mattingly, S. G.; Rude, E. T.; Marshner, R. L.
1975-01-01
A typical Advanced Technology Laboratory mission flight plan was developed and used as a guideline for the identification of a number of experiment considerations. The experiment logistics, beginning with sample preparation and ending with sample analysis, are then overlaid on the mission in order to have a complete picture of the design requirements. The results of this preliminary design study fall into two categories: first, specific preliminary designs of experiment hardware that are adaptable to a variety of mission requirements; and second, identification of those mission considerations which affect hardware design and will require further definition prior to final design. Finally, a program plan is presented which will provide the necessary experiment hardware in a realistic time period to match the planned shuttle flights. A bibliography of all material reviewed and consulted but not specifically referenced is provided.
The New Meteor Radar at Penn State: Design and First Observations
NASA Technical Reports Server (NTRS)
Urbina, J.; Seal, R.; Dyrud, L.
2011-01-01
In an effort to provide new and improved meteor radar sensing capabilities, Penn State has been developing advanced instruments and technologies for future meteor radars, with primary objectives of making such instruments more capable and more cost effective in order to study the basic properties of the global meteor flux, such as average mass, velocity, and chemical composition. Using low-cost field programmable gate arrays (FPGAs), combined with open source software tools, we describe a design methodology enabling one to develop state-of-the-art radar instrumentation, by developing a generalized instrumentation core that can be customized using specialized output stage hardware. Furthermore, using object-oriented programming (OOP) techniques and open-source tools, we illustrate a technique to provide a cost-effective, generalized software framework to uniquely define an instrument's functionality through a customizable interface, implemented by the designer. The new instrument is intended to provide instantaneous profiles of atmospheric parameters and climatology on a daily basis throughout the year. An overview of the instrument design concepts and some of the emerging technologies developed for this meteor radar are presented.
NASA Technical Reports Server (NTRS)
Martini, W. R.
1978-01-01
This manual is intended to serve both as an introduction to Stirling engine analysis methods and as a key to the open literature on Stirling engines. Over 800 references are listed and these are cross referenced by date of publication, author and subject. Engine analysis is treated starting from elementary principles and working through cycles analysis. Analysis methodologies are classified as first, second or third order depending upon degree of complexity and probable application; first order for preliminary engine studies, second order for performance prediction and engine optimization, and third order for detailed hardware evaluation and engine research. A few comparisons between theory and experiment are made. A second order design procedure is documented step by step with calculation sheets and a worked out example to follow. Current high power engines are briefly described and a directory of companies and individuals who are active in Stirling engine development is included. Much remains to be done. Some of the more complicated and potentially very useful design procedures are now only referred to. Future support will enable a more thorough job of comparing all available design procedures against experimental data which should soon be available.
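As an example of the first-order class of methods described, the sketch below applies the widely used Beale number correlation to estimate Stirling engine power from mean pressure, swept volume, and frequency. The Beale coefficient of 0.15 and the operating point are generic assumptions for illustration, not values from the manual.
```python
def beale_power(p_mean_pa, swept_volume_m3, freq_hz, beale_number=0.15):
    """First-order Stirling engine power estimate from the Beale correlation:
    P ~ Bn * p_mean * V_swept * f (consistent SI units give watts)."""
    return beale_number * p_mean_pa * swept_volume_m3 * freq_hz

# Illustrative operating point: 5 MPa mean pressure, 100 cm^3 swept volume, 25 Hz
P = beale_power(p_mean_pa=5.0e6, swept_volume_m3=100e-6, freq_hz=25.0)
print(f"First-order power estimate ~ {P / 1000:.1f} kW")   # ~1.9 kW here
```
Second- and third-order methods replace this single correlation with cycle analysis and detailed component models, at the cost of the added complexity the manual describes.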
Hardware development process for Human Research facility applications
NASA Astrophysics Data System (ADS)
Bauer, Liz
2000-01-01
The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.
Design Tools for Reconfigurable Hardware in Orbit (RHinO)
NASA Technical Reports Server (NTRS)
French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian
2004-01-01
The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions into the tool suite will also allow evolvable algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of Reconfigurable Hardware in Orbit, via an integrated design tool-suite aiming to reduce risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.
FPGA Based Reconfigurable ATM Switch Test Bed
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Jones, Robert E.
1998-01-01
Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) Network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract sheared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: Advances in FPGA make hardware emulation feasible for performance evaluation, hardware emulation can provide several orders of magnitude speed-up over software simulation; due to the complexity of hardware synthesis process, development in emulation is much more difficult than simulation and requires knowledge in both networks and digital design.
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Schulze, Arthur E.; Wood, H. J., Jr.
1989-01-01
The objective was to define the factors which space flight hardware developers and planners should consider when determining: (1) the number of hardware units required to support the program; (2) design level of the units; and (3) most efficient means of utilization of the units. The analysis considered technology risk, maintainability, reliability, and safety design requirements for achieving the delivery of highest quality flight hardware. Relative cost impacts of the utilization of prototyping were identified. The development of Space Biology Initiative research hardware will involve intertwined hardware/software activities. Experience has shown that software development can be an expensive portion of a system design program. While software prototyping could imply the development of a significantly different end item, an operational system prototype must be considered to be a combination of software and hardware. Hundreds of factors were identified that could be considered in determining the quantity and types of prototypes that should be constructed. In developing the decision models, these factors were combined and reduced by approximately ten-to-one in order to develop a manageable structure based on the major determining factors. The Baseline SBI hardware list of Appendix D was examined and reviewed in detail; however, from the facts available it was impossible to identify the exact types and quantities of prototypes required for each of these items. Although the factors that must be considered could be enumerated for each of these pieces of equipment, the exact status and state of development of the equipment is variable and uncertain at this time.
Study of the adaptability of existing hardware designs to a Pioneer Saturn/Uranus probe
NASA Technical Reports Server (NTRS)
1973-01-01
The basic concept of designing a scientific entry probe for the expected range of environments at Saturn or Uranus and making the probe compatible with the interface constraints of the Pioneer spacecraft was investigated for launches in the early 1980's. It was found that the amount of hardware commonality between that used in the Pioneer Venus program and that for the Saturn/Uranus probe was approximately 85%. It is recommended that additional development studies be conducted to improve the hardware definitions of the probe design for the following: heat shield, battery, nose cap jettisoning, and thermal control insulation.
Shuttle mission simulator hardware conceptual design report
NASA Technical Reports Server (NTRS)
Burke, J. F.
1973-01-01
The detailed shuttle mission simulator hardware requirements are discussed. The conceptual design methods, or existing technology, whereby those requirements will be fulfilled are described. Information of a general nature on the total design problem plus specific details on how these requirements are to be satisfied are reported. The configuration of the simulator is described and the capabilities for various types of training are identified.
Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone to stimulate all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
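The buffer-shortening idea behind the variable frequency clock can be illustrated by run-length encoding an output sequence so that a clock edge is needed only when an output word changes. The sketch below is a conceptual model with assumed data structures, not the system's actual implementation.
```python
def compress_sequence(samples):
    """Run-length encode an output sequence into (value, duration) pairs: the
    clock ticks only when an output word changes, so long idle stretches cost
    a single buffered word instead of millions of redundant samples."""
    if not samples:
        return []
    compressed = [[samples[0], 1]]
    for s in samples[1:]:
        if s == compressed[-1][0]:
            compressed[-1][1] += 1          # extend the current hold time
        else:
            compressed.append([s, 1])       # new output word, new clock edge
    return [tuple(pair) for pair in compressed]

# A sequence that idles for long stretches compresses to a handful of words
seq = [0] * 100000 + [1] * 3 + [0] * 500000
print(compress_sequence(seq))   # [(0, 100000), (1, 3), (0, 500000)]
```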
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
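As a software reference for the computation the architecture implements, the sketch below estimates a global projective model (homography) with RANSAC: repeatedly fit a minimal four-point sample by the direct linear transform and keep the model with the largest inlier set. The threshold, iteration count, and NumPy-based solver are textbook assumptions, not the paper's fixed-point submodel decomposition.
```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: least-squares projective model from >= 4 points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    """Euclidean distance between H-mapped source points and destination points."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    proj = (H @ src_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
    """Refine false matches: keep the model with the largest inlier consensus."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = fit_homography(src[idx], dst[idx])
        inliers = reprojection_error(H, src, dst) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```
Splitting the full projective model into simpler submodels, as the paper proves is possible, narrows the dynamic range of each coefficient and is what makes a fixed-point hardware version of this computation tractable.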
Computer hardware description languages - A tutorial
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1979-01-01
The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.
Marshall Space Flight Center CFD overview
NASA Technical Reports Server (NTRS)
Schutzenhofer, Luke A.
1989-01-01
Computational Fluid Dynamics (CFD) activities at Marshall Space Flight Center (MSFC) have been focused on hardware specific and research applications with strong emphasis upon benchmark validation. The purpose here is to provide insight into the MSFC CFD related goals, objectives, current hardware related CFD activities, propulsion CFD research efforts and validation program, future near-term CFD hardware related programs, and CFD expectations. The current hardware programs where CFD has been successfully applied are the Space Shuttle Main Engines (SSME), Alternate Turbopump Development (ATD), and Aeroassist Flight Experiment (AFE). For the future near-term CFD hardware related activities, plans are being developed that address the implementation of CFD into the early design stages of the Space Transportation Main Engine (STME), Space Transportation Booster Engine (STBE), and the Environmental Control and Life Support System (ECLSS) for the Space Station. Finally, CFD expectations in the design environment will be delineated.
ERIC Educational Resources Information Center
Chandramouli, Magesh; Chittamuru, Siva-Teja
2016-01-01
This paper explains the design of a graphics-based virtual environment for instructing computer hardware concepts to students, especially those at the beginner level. Photorealistic visualizations and simulations are designed and programmed with interactive features allowing students to practice, explore, and test themselves on computer hardware…
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Mangialardi, J. K.; Young, R.
1974-01-01
The capability of the basic automated Biowaste Sampling System (ABSS) hardware was extended and improved through the design, fabrication and test of breadboard hardware. A preliminary system design effort established the feasibility of integrating the breadboard concepts into the ABSS.
Bistatic radar sea state monitoring system design
NASA Technical Reports Server (NTRS)
Ruck, G. T.; Krichbaum, C. K.; Everly, J. O.
1975-01-01
Remote measurement of the two-dimensional surface wave height spectrum of the ocean by the use of bistatic radar techniques was examined. Potential feasibility and experimental verification by field experiment are suggested. The required experimental hardware is defined, along with the design, assembly, and testing of several of its components.
When "Less is More": The Optimal Design of Language Laboratory Hardware.
ERIC Educational Resources Information Center
Kershaw, Gary; Boyd, Gary
1980-01-01
The results of a process of designing, building, and "de-bugging" two replacement language laboratory hardware systems at Concordia University (Montreal) are described. Because commercially available systems did not meet specifications within budgetary constraints, the systems were built by the university technical department. The systems replaced…
An Alternative Approach to Human Servicing of Crewed Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John R.; Alpert, Brian K.
2017-01-01
As crewed spacecraft have grown larger and more complex, they have come to rely on spacewalks, or Extravehicular Activities (EVA), for mission success and crew safety. Typically, these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires volume and up-mass for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology, utilizing launch on-need hardware and crew to provide EVA capability for space stations in Earth orbit after assembly complete, in the same way that one would call a repairman to fix something at their home. This approach would reduce ground training requirements, save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, and lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to the failure as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current unplanned EVAs are conducted, including the time required for preparation, and offer alternatives for future spacecraft. As this methodology relies on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the space station would be an important consideration in how much time was available for EVA preparation after the failure. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft.
An Alternative Approach to Human Servicing of Manned Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John; Alpert, Brian
2011-01-01
As manned spacecraft have grown larger and more complex, they have come to rely on spacewalks or Extravehicular Activities (EVA) for both mission success and crew safety. Typically, these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires volume and up-mass for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology to utilize launch-on-need hardware and crew to provide EVA capability for space stations in Earth orbit after assembly complete, in the same way that most people would call a repairman to fix something at their home. This approach would not only reduce ground training requirements and save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, but would also lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to the failure as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current ISS unplanned EVAs are conducted, including the time required for preparation, and offer alternatives for future spacecraft utilizing lessons learned from ISS. As this methodology relies entirely on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the space station would be an important consideration in how much time was available for EVA preparation after the failure. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft.
Programmable data collection platform study
NASA Technical Reports Server (NTRS)
1976-01-01
The results of a feasibility study incorporating microprocessors in data collection platforms are described. An introduction to microcomputer hardware and software concepts is provided. The influence of microprocessor technology on the design of programmable data collection platform hardware is discussed. A standard modular PDCP design capable of meeting the design goals is proposed, and the process of developing PDCP programs is examined. A description of the design and construction of the UT PDCP development system is given.
Developments at the Advanced Design Technologies Testbed
NASA Technical Reports Server (NTRS)
VanDalsem, William R.; Livingston, Mary E.; Melton, John E.; Torres, Francisco J.; Stremel, Paul M.
2003-01-01
A report presents background and historical information, as of August 1998, on the Advanced Design Technologies Testbed (ADTT) at Ames Research Center. The ADTT is characterized as an activity initiated to facilitate improvements in aerospace design processes; provide a proving ground for product-development methods and computational software and hardware; develop bridging methods, software, and hardware that can facilitate integrated solutions to design problems; and disseminate lessons learned to the aerospace and information technology communities.
NASA Technical Reports Server (NTRS)
Stambaugh, Imelda; Baccus, Shelley; Buffington, Jessie; Hood, Andrew; Naids, Adam; Borrego, Melissa; Hanford, Anthony J.; Eckhardt, Brad; Allada, Rama Kumar; Yagoda, Evan
2013-01-01
Engineers at Johnson Space Center (JSC) are developing an Environmental Control and Life Support System (ECLSS) design for the Multi-Mission Space Exploration Vehicle (MMSEV). The purpose of the MMSEV is to extend the human exploration envelope for Lunar, Near Earth Object (NEO), or Deep Space missions by using pressurized exploration vehicles. The MMSEV, formerly known as the Space Exploration Vehicle (SEV), employs ground prototype hardware for various systems and tests it in manned and unmanned configurations. Eventually, the system hardware will evolve and become part of a flight vehicle capable of supporting different design reference missions. This paper will discuss the latest MMSEV ECLSS architectures developed for a variety of design reference missions, any work contributed toward the development of the ECLSS design, lessons learned from testing prototype hardware, and the plan to advance the ECLSS toward a flight design.
NASA Technical Reports Server (NTRS)
Stambaugh, Imelda; Baccus, Shelley; Naids, Adam; Hanford, Anthony
2012-01-01
Engineers at Johnson Space Center (JSC) are developing an Environmental Control and Life Support System (ECLSS) design for the Multi-Mission Space Exploration Vehicle (MMSEV). The purpose of the MMSEV is to extend the human exploration envelope for Lunar, Near Earth Object (NEO), or Deep Space missions by using pressurized exploration vehicles. The MMSEV, formerly known as the Space Exploration Vehicle (SEV), employs ground prototype hardware for various systems and tests it in manned and unmanned configurations. Eventually, the system hardware will evolve and become part of a flight vehicle capable of supporting different design reference missions. This paper will discuss the latest MMSEV ECLSS architectures developed for a variety of design reference missions, any work contributed toward the development of the ECLSS design, lessons learned from testing prototype hardware, and the plan to advance the ECLSS toward a flight design.
Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution and presents the alternate consolidation designs that result. They are compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
HOW GOOD ARE MY DATA? INFORMATION QUALITY ASSESSMENT METHODOLOGY
Quality assurance techniques used in software development and hardware maintenance/reliability help to ensure that data in a computerized information management system are maintained well. However, information workers may not know the quality of the data resident in their inf...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
The PIPER project set out to develop methodologies and software for measurement, analysis, attribution, and presentation of performance data for extreme-scale systems. Goals of the project were to support analysis of massive multi-scale parallelism, heterogeneous architectures, multi-faceted performance concerns, and to support both post-mortem performance analysis to identify program features that contribute to problematic performance and on-line performance analysis to drive adaptation. This final report summarizes the research and development activity at Rice University as part of the PIPER project. Producing a complete suite of performance tools for exascale platforms during the course of this project was impossible since both hardware and software for exascale systems is still a moving target. For that reason, the project focused broadly on the development of new techniques for measurement and analysis of performance on modern parallel architectures, enhancements to HPCToolkit’s software infrastructure to support our research goals or use on sophisticated applications, engaging developers of multithreaded runtimes to explore how support for tools should be integrated into their designs, engaging operating system developers with feature requests for enhanced monitoring support, engaging vendors with requests that they add hardware measurement capabilities and software interfaces needed by tools as they design new components of HPC platforms including processors, accelerators and networks, and finally collaborations with partners interested in using HPCToolkit to analyze and tune scalable parallel applications.
High-Speed Isolation Board for Flight Hardware Testing
NASA Technical Reports Server (NTRS)
Yamamoto, Clifford K.; Goodpasture, Richard L.
2011-01-01
There is a need to provide portable and cost-effective galvanic isolation between ground support equipment and flight hardware such that any unforeseen voltage differential between ground and power supplies is eliminated. An interface board was designed for use between the ground support equipment and the flight hardware that electrically isolates all input and output signals and faithfully reproduces them on each side of the interface. It utilizes highly integrated multi-channel isolating devices to minimize size and reduce assembly time. This single-board solution provides appropriate connector hardware and breakout of required flight signals to individual connectors as needed for various ground support equipment. The board utilizes multi-channel integrated circuits that contain transformer coupling, thereby allowing input and output signals to be isolated from one another while still providing high-fidelity reproduction of the signal up to 90 MHz. The board also takes in a single-voltage power supply input from the ground support equipment and in turn provides a transformer-derived isolated voltage supply to power the portion of the circuitry that is electrically connected to the flight hardware. Prior designs required an expensive opto-isolated coupler for each signal to be isolated and were time-consuming to assemble. In addition, these earlier designs were bulky and required a 2U rack-mount enclosure. The new design is smaller than a piece of 8.5 x 11-in. (approximately 22 x 28-cm) paper and can be easily hand-carried where needed. The flight hardware in question is based on a lineage of existing software-defined radios (SDRs) that utilize a common interface connector with many similar input-output signals present. There are currently four to five variations of this SDR, and more upcoming versions are planned based on the more recent design.
Solid State Audio/Speech Processor Analysis.
1980-03-01
techniques. The techniques were demonstrated to be worthwhile in an efficient real-time AWR system. Finally, microprocessor architectures were designed to...do not include custom chip development, detailed hardware design, construction or testing. ITTDCD is very encouraged by the results obtained in this...California, Berkeley, was responsible for furnishing the simulation data of OD speech analysis techniques and for the design and development of the hardware OD
The Triangle: a Multiprocessor Architecture for Fast Curve and Surface Generation.
1987-08-01
Keywords: computer-aided geometric design; curves and surfaces; B-splines; graphics hardware.
Human Systems Engineering for Launch processing at Kennedy Space Center (KSC)
NASA Technical Reports Server (NTRS)
Henderson, Gena; Stambolian, Damon B.; Stelges, Katrine
2012-01-01
Launch processing at Kennedy Space Center (KSC) is primarily accomplished by human users of expensive and specialized equipment. In order to reduce the likelihood of human error, personal injury, damage to hardware, and loss of mission, the design process for the hardware needs to include the human's relationship with the hardware. Just as the electrical, mechanical, and fluid aspects of a design are essential, so is the human aspect. The focus of this presentation is to illustrate how KSC accomplishes the inclusion of the human aspect in the design using human-centered hardware modeling and engineering. The presentation also explains the current and future plans for research and development to improve our human factors analysis tools and processes.
Stretched Lens Array (SLA) Photovoltaic Concentrator Hardware Development and Testing
NASA Technical Reports Server (NTRS)
Piszczor, Michael; O'Neill, Mark J.; Eskenazi, Michael
2003-01-01
Over the past two years, the Stretched Lens Array (SLA) photovoltaic concentrator has evolved, under a NASA contract, from a concept with small component demonstrators to operational array hardware that is ready for space validation testing. A fully functional four-panel SLA solar array has been designed, built, and tested. This paper will summarize the focus of the hardware development effort, discuss the results of recent testing conducted under this program, and present the expected performance of a full-size 7 kW array designed to meet the requirements of future space missions.
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion from different sources is ubiquitous in many applications, from epidemiology, medical, biological, political, and intelligence to military applications. Data fusion involves integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical and biological sensors, acoustic sensors, optical warning, and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian to evidence-based expert systems. The implementation of the data integration ranges from pure software design to a mixture of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
Vapor Compression Distillation Urine Processor Lessons Learned from Development and Life Testing
NASA Technical Reports Server (NTRS)
Hutchens, Cindy F.; Long, David A.
1999-01-01
Vapor Compression Distillation (VCD) is the chosen technology for urine processing aboard the International Space Station (ISS). Development and life testing over the past several years have brought to the forefront problems and solutions for the VCD technology. Testing between 1992 and 1998 has been instrumental in developing estimates of hardware life and reliability. It has also helped improve the hardware design in ways that either correct existing problems or enhance the existing design of the hardware. The testing has increased the confidence in the VCD technology and reduced technical and programmatic risks. This paper summarizes the test results and changes that have been made to the VCD design.
Elad, Tal; Lee, Jin Hyung; Belkin, Shimshon; Gu, Man Bock
2008-01-01
Summary The coming of age of whole‐cell biosensors, combined with the continuing advances in array technologies, has prepared the ground for the next step in the evolution of both disciplines – the whole‐cell array. In the present review, we highlight the state‐of‐the‐art in the different disciplines essential for a functional bacterial array. These include the genetic engineering of the biological components, their immobilization in different polymers, technologies for live cell deposition and patterning on different types of solid surfaces, and cellular viability maintenance. Also reviewed are the types of signals emitted by the reporter cell arrays, some of the transduction methodologies for reading these signals and the mathematical approaches proposed for their analysis. Finally, we review some of the potential applications for bacterial cell arrays, and list the future needs for their maturation: a richer arsenal of high‐performance reporter strains, better methodologies for their incorporation into hardware platforms, design of appropriate detection circuits, the continuing development of dedicated algorithms for multiplex signal analysis and – most importantly – enhanced long‐term maintenance of viability and activity on the fabricated biochips. PMID:21261831
NASA Technical Reports Server (NTRS)
Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph
2013-01-01
This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.
Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay; McFarland, Shane
2012-01-01
The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing the durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test subjects representing the design-to hand anthropometry completed range of motion, grip/pinch strength, dexterity, and fit evaluations for each glove design in both the unpressurized and pressurized conditions. This paper provides a comparison of the test results along with a detailed description of hardware and test methodologies used.
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation
Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.
2016-01-01
Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.
Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B
2016-01-01
Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
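As a hedged illustration of one of the encoding techniques named above, rate-based Poisson spike generation can be sketched in a few lines of Python; this is not the dataset's actual generator, and the rate, duration, and timestep values are assumptions chosen only for illustration:

```python
# Illustrative sketch: rate-based Poisson spike trains, where each pixel's
# intensity sets the mean firing rate of an independent Poisson process.
import numpy as np

def poisson_spike_trains(pixels, max_rate_hz=100.0, duration_s=1.0, dt_s=0.001):
    """pixels: 1-D array of intensities in [0, 1].
    Returns a (timesteps, pixels) boolean array of spikes."""
    rng = np.random.default_rng(0)
    rates = np.clip(pixels, 0.0, 1.0) * max_rate_hz   # firing rate per pixel, Hz
    spike_prob = rates * dt_s                          # per-timestep spike probability
    steps = int(duration_s / dt_s)
    return rng.random((steps, len(pixels))) < spike_prob

# Example: a flattened 28x28 MNIST-style image with intensities in [0, 1].
image = np.random.default_rng(1).random(784)
spikes = poisson_spike_trains(image)
print(spikes.shape, int(spikes.sum()), "spikes in 1 s")
```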
ASIC implementation of recursive scaled discrete cosine transform algorithm
NASA Astrophysics Data System (ADS)
On, Bill N.; Narasimhan, Sam; Huang, Victor K.
1994-05-01
A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation has been satisfactorily completed, the design is synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices. The design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.
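A software golden model is commonly used when verifying such a chip. The sketch below is a direct (non-recursive, unscaled) 8-point DCT-II in Python, offered only as a plausible reference against which hardware output could be compared; it is not Hou's recursive scaled formulation, and the function name is an assumption:

```python
# Hypothetical software golden model: a direct 8-point DCT-II against which a
# hardware DCT implementation could be checked sample-for-sample.
import math

def dct_8(x):
    """Direct, orthonormal 8-point DCT-II of a sequence of 8 samples."""
    N = 8
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

print(dct_8([1, 2, 3, 4, 5, 6, 7, 8]))
```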
Round Girls in Square Computers: Feminist Perspectives on the Aesthetics of Computer Hardware.
ERIC Educational Resources Information Center
Carr-Chellman, Alison A.; Marra, Rose M.; Roberts, Shari L.
2002-01-01
Considers issues related to computer hardware, aesthetics, and gender. Explores how gender has influenced the design of computer hardware and how these gender-driven aesthetics may have worked to maintain, extend, or alter gender distinctions, roles, and stereotypes; discusses masculine media representations; and presents an alternative model.…
Teaching Robotics Software with the Open Hardware Mobile Manipulator
ERIC Educational Resources Information Center
Vona, M.; Shekar, N. H.
2013-01-01
The "open hardware mobile manipulator" (OHMM) is a new open platform with a unique combination of features for teaching robotics software and algorithms. On-board low- and high-level processors support real-time embedded programming and motor control, as well as higher-level coding with contemporary libraries. Full hardware designs and…
Mechanically verified hardware implementing an 8-bit parallel IO Byzantine agreement processor
NASA Technical Reports Server (NTRS)
Moore, J. Strother
1992-01-01
Consider a network of four processors that use the Oral Messages (Byzantine Generals) Algorithm of Pease, Shostak, and Lamport to achieve agreement in the presence of faults. Bevier and Young have published a functional description of a single processor that, when interconnected appropriately with three identical others, implements this network under the assumption that the four processors step in synchrony. By formalizing the original Pease et al. work, Bevier and Young mechanically proved that such a network achieves fault tolerance. We develop, formalize, and discuss a hardware design that has been mechanically proven to implement their processor. In particular, we formally define mapping functions from the abstract state space of the Bevier-Young processor to a concrete state space of a hardware module and state a theorem that expresses the claim that the hardware correctly implements the processor. We briefly discuss the Brock-Hunt Formal Hardware Description Language, which permits designs both to be proved correct with the Boyer-Moore theorem prover and to be expressed in a commercially supported hardware description language for additional electrical analysis and layout. We briefly describe our implementation.
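For context, the Oral Messages algorithm for four processors tolerating at most one fault, OM(1), can be sketched in software. This is an illustrative model only, not the mechanically verified hardware or the Bevier-Young formalization, and the function and argument names are assumptions:

```python
# Illustrative OM(1) sketch: one commander, three lieutenants, at most one
# faulty node. Each lieutenant takes a majority over the value it heard
# directly and the values the other two lieutenants claim to have heard.
from collections import Counter

def om1(commander_sends, relay_fault=None):
    """commander_sends: dict lieutenant -> value sent by the commander.
    relay_fault: optional function (sender, receiver, value) -> forged value."""
    lieutenants = list(commander_sends)
    decisions = {}
    for me in lieutenants:
        heard = [commander_sends[me]]                 # value heard directly
        for other in lieutenants:
            if other == me:
                continue
            relayed = commander_sends[other]          # value the other relays
            if relay_fault:
                relayed = relay_fault(other, me, relayed)
            heard.append(relayed)
        decisions[me] = Counter(heard).most_common(1)[0][0]   # majority vote
    return decisions

# A faulty commander sends conflicting values; the loyal lieutenants still agree.
print(om1({"L1": 1, "L2": 1, "L3": 0}))
# {'L1': 1, 'L2': 1, 'L3': 1}
```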
Study and design of cryogenic propellant acquisition systems. Volume 1: Design studies
NASA Technical Reports Server (NTRS)
Burge, G. W.; Blackmon, J. B.
1973-01-01
An in-depth study and selection of practical propellant surface tension acquisition system designs for two specific future cryogenic space vehicles, an advanced cryogenic space shuttle auxiliary propulsion system and an advanced space propulsion module is reported. A supporting laboratory scale experimental program was also conducted to provide design information critical to concept finalization and selection. Designs using localized pressure isolated surface tension screen devices were selected for each application and preliminary designs were generated. Based on these designs, large scale acquisition prototype hardware was designed and fabricated to be compatible with available NASA-MSFC feed system hardware.
Microprocessor Design Using Hardware Description Language
ERIC Educational Resources Information Center
Mita, Rosario; Palumbo, Gaetano
2008-01-01
The following paper has been conceived to deal with the contents of some lectures aimed at enhancing courses on digital electronic, microelectronic or VLSI systems. Those lectures show how to use a hardware description language (HDL), such as the VHDL, to specify, design and verify a custom microprocessor. The general goal of this work is to teach…
49 CFR Appendix C to Part 236 - Safety Assurance Criteria and Processes
Code of Federal Regulations, 2010 CFR
2010-10-01
... system (all its elements including hardware and software) must be designed to assure safe operation with... unsafe errors in the software due to human error in the software specification, design, or coding phases... (hardware or software, or both) are used in combination to ensure safety. If a common mode failure exists...
Sodium MRI: Methods and applications
Madelin, Guillaume; Lee, Jae-Seung; Regatte, Ravinder R.; Jerschow, Alexej
2014-01-01
Sodium NMR spectroscopy and MRI have become popular in recent years through the increased availability of high-field MRI scanners, advanced scanner hardware and improved methodology. Sodium MRI is being evaluated for stroke and tumor detection, for breast cancer studies, and for the assessment of osteoarthritis and muscle and kidney functions, to name just a few. In this article, we aim to present an up-to-date review of the theoretical background, the methodology, the challenges and limitations, and current and potential new applications of sodium MRI. PMID:24815363
Simulation of Attacks for Security in Wireless Sensor Network
Diaz, Alvaro; Sanchez, Pablo
2016-01-01
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710
Energy Efficient Engine combustor test hardware detailed design report
NASA Technical Reports Server (NTRS)
Burrus, D. L.; Chahrour, C. A.; Foltz, H. L.; Sabla, P. E.; Seto, S. P.; Taylor, J. R.
1984-01-01
The Energy Efficient Engine (E3) Combustor Development effort was conducted as part of the overall NASA/GE E3 Program. This effort included the selection of an advanced double-annular combustion system design. The primary intent was to evolve a design which meets the stringent emissions and life goals of the E3 as well as all of the usual performance requirements of combustion systems for modern turbofan engines. Numerous detailed design studies were conducted to define the features of the combustion system design. Development test hardware was fabricated, and an extensive testing effort was undertaken to evaluate the combustion system subcomponents in order to verify and refine the design. Technology derived from this development effort will be incorporated into the engine combustion system hardware design. This advanced engine combustion system will then be evaluated in component testing to verify the design intent. What is evolving from this development effort is an advanced combustion system capable of satisfying all of the combustion system design objectives and requirements of the E3. Fuel nozzle, diffuser, starting, and emissions design studies are discussed.
NASA Technical Reports Server (NTRS)
Trabanino, Rudy; Murphy, George L.; Yakut, M. M.
1986-01-01
An Advanced Food Hardware System galley for the initial operating capability (IOC) Space Station is discussed. Space Station will employ food hardware items that have never been flown in space, such as a dishwasher, microwave oven, blender/mixer, bulk food and beverage dispensers, automated food inventory management, a trash compactor, and an advanced technology refrigerator/freezer. These new technologies and designs are described and the trades, design, development, and testing associated with each are summarized.
Using SysML for MBSE analysis of the LSST system
NASA Astrophysics Data System (ADS)
Claver, Charles F.; Dubois-Felsmann, Gregory; Delgado, Francisco; Hascall, Pat; Marshall, Stuart; Nordby, Martin; Schalk, Terry; Schumacher, German; Sebag, Jacques
2010-07-01
The Large Synoptic Survey Telescope is a complex hardware-software system of systems, making up a highly automated observatory in the form of an 8.4 m wide-field telescope, a 3.2 billion pixel camera, and a peta-scale data processing and archiving system. As a project, the LSST is using a model-based systems engineering (MBSE) methodology for developing the overall system architecture coded with the Systems Modeling Language (SysML). With SysML we use a recursive process to establish three-fold relationships between requirements, logical & physical structural component definitions, and overall behavior (activities and sequences) at successively deeper levels of abstraction and detail. Using this process we have analyzed and refined the LSST system design, ensuring the consistency and completeness of the full set of requirements and their match to associated system structure and behavior. As the recursion process proceeds to deeper levels we derive more detailed requirements and specifications, and ensure their traceability. We also expose, define, and specify critical system interfaces, physical and information flows, and clarify the logic and control flows governing system behavior. The resulting integrated model database is used to generate documentation and specifications and will evolve to support activities from construction through final integration, test, and commissioning, serving as a living representation of the LSST as designed and built. We discuss the methodology and present several examples of its application to specific systems engineering challenges in the LSST design.
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The optimum hardware miniaturization level with the lowest cost impact for space biology hardware was determined. The space biology hardware, components, subassemblies, and assemblies that are the most likely candidates for miniaturization are defined, and the relative cost impacts of such miniaturization are analyzed. A mathematical or statistical analysis method capable of supporting the development of parametric cost analysis impacts for levels of production design miniaturization is provided.
No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent
NASA Astrophysics Data System (ADS)
Zaghloul, A. R. M.; Zaghloul, Y. A.
2014-06-01
We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream reconfigurable, where a single hardware architecture functions as any selected device of all possible ones of the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows the attack to take place, rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point link, a point-to-multipoint link, a whole network, and/or a single entity in cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other National Patents worldwide.)
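The bitstream-selection idea can be illustrated with a conventional look-up-table model. The sketch below is only a conceptual software analogue, not the patented optical module: a 4-bit configuration word indexes the truth table of any of the 16 possible two-input Boolean functions.

```python
# Conceptual analogue of bitstream reconfiguration: a 4-bit configuration word
# selects which of the 16 two-input Boolean functions a single look-up-table
# structure performs, without changing the structure itself.
def make_lut2(bitstream):
    """bitstream: 4-character string of 0s/1s, indexed by the inputs (a, b)."""
    assert len(bitstream) == 4 and set(bitstream) <= {"0", "1"}
    table = [int(bit) for bit in bitstream]
    return lambda a, b: table[(a << 1) | b]

xor_gate = make_lut2("0110")   # outputs for (a, b) = 00, 01, 10, 11
and_gate = make_lut2("0001")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_gate(a, b), and_gate(a, b))
```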
Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device
NASA Astrophysics Data System (ADS)
Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin
A measurement and control system for a precision pesticide-deployment dynamic simulating device is designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and perform precise, simultaneous measurement of every factor affecting pesticide deployment effects. The hardware and software follow a modular structural design: the system is divided into separate hardware and software function modules, and corresponding modules are developed for each. The module interfaces are uniformly defined, which simplifies module connection, improves the system's universality, development efficiency, and reliability, and makes the program easy to extend and maintain. Some of the hardware and software modules can easily be adapted to other measurement and control systems. The paper introduces the design of the special numeric control system, the main module of the information acquisition system, and the speed acquisition module in order to explain the design process of the modules.
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies are also addressed, including modeling control surface deflections and material mapping.
From Goal-Oriented Requirements to Event-B Specifications
NASA Technical Reports Server (NTRS)
Aziz, Benjamin; Arenas, Alvaro E.; Bicarregui, Juan; Ponsard, Christophe; Massonet, Philippe
2009-01-01
In goal-oriented requirements engineering methodologies, goals are structured into refinement trees from high-level system-wide goals down to fine-grained requirements assigned to specific software/ hardware/human agents that can realise them. Functional goals assigned to software agents need to be operationalised into specification of services that the agent should provide to realise those requirements. In this paper, we propose an approach for operationalising requirements into specifications expressed in the Event-B formalism. Our approach has the benefit of aiding software designers by bridging the gap between declarative requirements and operational system specifications in a rigorous manner, enabling powerful correctness proofs and allowing further refinements down to the implementation level. Our solution is based on verifying that a consistent Event-B machine exhibits properties corresponding to requirements.
Software Development Technologies for Reactive, Real-Time, and Hybrid Systems: Summary of Research
NASA Technical Reports Server (NTRS)
Manna, Zohar
1998-01-01
This research is directed towards the implementation of a comprehensive deductive-algorithmic environment (toolkit) for the development and verification of high assurance reactive systems, especially concurrent, real-time, and hybrid systems. For this, we have designed and implemented the STCP (Stanford Temporal Prover) verification system. Reactive systems have an ongoing interaction with their environment, and their computations are infinite sequences of states. A large number of systems can be seen as reactive systems, including hardware, concurrent programs, network protocols, and embedded systems. Temporal logic provides a convenient language for expressing properties of reactive systems. A temporal verification methodology provides procedures for proving that a given system satisfies a given temporal property. The research covered necessary theoretical foundations as well as implementation and application issues.
Radiation Susceptibility Assessment of Off the Shelf (OTS) Hardware
NASA Technical Reports Server (NTRS)
Culpepper, William X.; Nicholson, Leonard L. (Technical Monitor)
2000-01-01
The reduction in budgets, shortening of schedules, and necessity of flying near-state-of-the-art technology have forced projects and designers to utilize not only modern, non-space-rated EEE parts but also OTS boards, subassemblies, and systems. New instrumentation, communications, portable computers, and navigation systems for the International Space Station, Space Shuttle, and Crew Return Vehicle are examples of the realization of this paradigm change at the Johnson Space Center. Because of this change, there has been a shift in the radiation assessment methodology from individual part testing using low-energy heavy ions to board- and box-level testing using high-energy particle beams. Highlights of several years of board- and system-level testing are presented along with lessons learned, present areas of concern, insights into test costs, and future challenges.
Evaluation of the HARDMAN comparability methodology for manpower, personnel and training
NASA Technical Reports Server (NTRS)
Zimmerman, W.; Butler, R.; Gray, V.; Rosenberg, L.
1984-01-01
The methodology evaluation and recommendation are part of an effort to improve the Hardware versus Manpower (HARDMAN) methodology for projecting manpower, personnel, and training (MPT) to support new acquisitions. Several different validity tests are employed to evaluate the methodology. The methodology conforms fairly well with both the MPT user needs and other accepted manpower modeling techniques. Audits of three completed HARDMAN applications reveal only a small number of potential problem areas compared to the total number of issues investigated. The reliability study results conform well with the problem areas uncovered through the audits. The results of the accuracy studies suggest that the manpower life-cycle cost component is only marginally sensitive to changes in other related cost variables. Even with some minor problems, the methodology seems sound and has good near-term utility to the Army. Recommendations are provided to resolve the problem areas revealed through the evaluation.
Autonomous power system brassboard
NASA Technical Reports Server (NTRS)
Merolla, Anthony
1992-01-01
The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a TI Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the brassboard hardware and intelligent controlling software, fault circuits have been developed and integrated as part of the brassboard. A description of these fault circuits and their function is included. The brassboard has become an extremely useful test facility, promoting AI applications for power distribution systems. However, there are elements of the brassboard which could be enhanced, thus improving system performance. Modifications and enhancements to improve the brassboard's operation are discussed.
A Virtual Laboratory for Aviation and Airspace Prognostics Research
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Gorospe, George; Teubert, Christ; Quach, Cuong C.; Hogge, Edward; Darafsheh, Kaveh
2017-01-01
Integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other aviation technologies in the airspace is becoming more and more complicated, and will continue to do so in the future. Inclusion of new technology and complexity into the airspace increases the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems and systems of systems can be challenging, expensive, and at times unsafe when implementing real-life scenarios. The application of prognostics to aviation and airspace management may produce new tools and insight into these problems. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. In our research, we develop a live, distributed, hardware-in-the-loop Prognostics Virtual Laboratory testbed for aviation and airspace prognostics. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. In our earlier work [1] we discussed the initial Prognostics Virtual Laboratory testbed development work and related results for milestones 1 & 2. This paper describes the design, development, and testing of the integrated testbed, which is part of milestone 3, along with our next steps for validation of this work. Through a framework consisting of software/hardware modules and associated interface clients, the distributed testbed enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. The testbed modules can be used cohesively to construct complex and relevant airspace scenarios for research. Four modules are key to this research: the virtual aircraft module, which uses the X-Plane simulator and X-PlaneConnect toolbox; the live aircraft module, which connects fielded aircraft using onboard cellular communications devices; the hardware-in-the-loop (HITL) module, which connects laboratory-based bench-top hardware testbeds; and the research module, which contains diagnostics and prognostics tools for analysis of live air traffic situations and vehicle health conditions. The testbed also features other modules for data recording and playback, information visualization, and air traffic generation. Software reliability, safety, and latency are some of the critical design considerations in development of the testbed.
Compiler-assisted multiple instruction rollback recovery using a read buffer
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.
1993-01-01
Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.
Neuroimaging of Human Balance Control: A Systematic Review
Wittenberg, Ellen; Thompson, Jessica; Nam, Chang S.; Franz, Jason R.
2017-01-01
This review examined 83 articles using neuroimaging modalities to investigate the neural correlates underlying static and dynamic human balance control, with the aim of supporting future mobile neuroimaging research in the balance control domain. Furthermore, this review analyzed the mobility of the neuroimaging hardware and research paradigms as well as the analytical methodology used to identify and remove movement artifacts in the acquired brain signal. We found that the majority of static balance control tasks utilized mechanical perturbations to invoke feet-in-place responses (27 out of 38 studies), while cognitive dual-task conditions were commonly used to challenge balance in dynamic balance control tasks (20 out of 32 studies). Frequency analysis and event-related potential characteristics supported enhanced brain activation during static balance control, whereas in dynamic balance control studies this was supported by spatial and frequency analyses. Twenty-three of the 50 EEG studies used independent component analysis to remove movement artifacts from the acquired brain signals. Lastly, only eight studies used truly mobile neuroimaging hardware systems. This review provides evidence to support an increase in brain activation in balance control tasks, regardless of mechanical, cognitive, or sensory challenges. Furthermore, the current body of literature demonstrates the use of advanced signal processing methodologies to analyze brain activity during movement. However, the static nature of neuroimaging hardware and conventional balance control paradigms prevent full mobility and limit our knowledge of neural mechanisms underlying balance control. PMID:28443007
An Approach for Performance Assessments of Extravehicular Activity Gloves
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay; Benosn, Elizabeth
2014-01-01
The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing the durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit and to establish standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of two sets of prototype EVA gloves, developed by ILC Dover and the David Clark Company, as compared to the Phase VI. Both companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test subjects representing the design-to hand anthropometry completed range of motion, grip/pinch strength, dexterity, and fit evaluations for each glove design in pressurized conditions, with and without thermal micrometeoroid garments (TMG) installed. This paper provides a detailed description of the hardware and test methodologies used and lessons learned.
An Alternative Approach to Human Servicing of Crewed Earth Orbiting Spacecraft
NASA Technical Reports Server (NTRS)
Mularski, John R.; Alpert, Brian K.
2017-01-01
As crewed spacecraft have grown larger and more complex, they have come to rely on spacewalks, or Extravehicular Activities (EVA), for assembly and to assure mission success. Typically, these spacecraft maintain all of the hardware and trained personnel needed to perform an EVA on-board at all times. Maintaining this capability requires up-mass, volume for storage of EVA hardware, crew time for ground and on-orbit training, and on-orbit maintenance of EVA hardware. This paper proposes an alternative methodology, utilizing either launch-on-need hardware and crew or regularly scheduled missions to provide EVA capability for space stations in low Earth orbit after assembly is complete. Much the same way that one would call a repairman to fix something at home, these EVAs are dedicated to maintenance and upgrades of the orbiting station. For crew safety contingencies, it is assumed the station would be designed such that the crew could either solve those issues from inside the spacecraft or use the docked Earth-to-orbit vehicles as return lifeboats, in the same manner as the International Space Station (ISS), which does not rely on EVA for crew-safety-related contingencies. This approach would reduce ground training requirements for long duration crews, save Intravehicular Activity (IVA) crew time in the form of EVA hardware maintenance and on-orbit training, and lead to more efficient EVAs because they would be performed by specialists with detailed knowledge and training stemming from their direct involvement in the development of the EVA. The on-orbit crew would then be available to focus on the immediate response to any failures, such as IVA systems reconfiguration or jumper installation, as well as the day-to-day operations of the spacecraft and payloads. This paper will look at how current unplanned EVAs are conducted on ISS, including the time required for preparation, and offer an alternative for future spacecraft. As this methodology relies on the on-time and on-need launch of spacecraft, any space station that utilized this approach would need a robust transportation system, possibly including more than one launch vehicle capable of carrying crew. In addition, the fault tolerance of the future space station would be an important consideration in how much time was available for EVA preparation after a failure. Ideally the fault tolerance of the station would allow for the maintenance tasks to be grouped such that they could be handled by regularly scheduled maintenance visits and not contingency launches. Each future program would have to weigh the risk of on-time launch against the increase in available crew time for the main objective of the spacecraft. This is only one of several ideas that could be used to reduce or eliminate a station's reliance on rapid turnaround EVAs using on-board crew. Others could include having shirt-sleeve access to critical systems or utilizing low pressure temporarily pressurized equipment bays.
ERIC Educational Resources Information Center
Sirakaya, Mustafa; Cakmak, Ebru Kilic
2018-01-01
This study aimed to test the impact of augmented reality (AR) use on student achievement and self-efficacy in vocational education and training. For this purpose, a marker-based AR application, called HardwareAR, was developed. HardwareAR provides information about characteristics of hardware components, ports and assembly. The research design was…
A Flexible Hardware Test and Demonstration Platform for the Fractionated System Architecture YETE
NASA Astrophysics Data System (ADS)
Kempf, Florian; Haber, Roland; Tzschichholz, Tristan; Mikschl, Tobias; Hilgarth, Alexander; Montenegro, Sergio; Schilling, Klaus
2016-08-01
This paper introduces a hardware-in-the-loop test and demonstration platform for the YETE system architecture for fractionated spacecraft. It is designed for rapid prototyping and testing of distributed control approaches for the YETE architecture subject to varying network topologies and transmission channel properties between the individual YETE hardware nodes.
Automated culture system experiments hardware: developing test results and design solutions.
Freddi, M; Covini, M; Tenconi, C; Ricci, C; Caprioli, M; Cotronei, V
2002-07-01
The experiment proposed by Prof. Ricci of the University of Milan is funded by ASI, with Laben as the industrial prime contractor. ACS-EH (Automated Culture System-Experiment Hardware) will support the multigenerational experiment on weightlessness with rotifers and nematodes within four Experiment Containers (ECs) located inside the European Modular Cultivation System (EMCS) facility. Phase B is currently in progress and a concept design solution has been defined. The most challenging aspects of the design of such hardware are, from the biological point of view, providing an environment that permits the animals' survival and keeps desiccated generations separated, and, from the technical point of view, the miniaturisation of the hardware itself due to the limited volume provided by the EC (160 mm x 60 mm x 60 mm). The miniaturisation will allow better use of the available EMCS facility resources (e.g., volume, power, etc.) and fulfilment of the experiment requirements. ACS-EH will be ready to fly in 2005 on board the ISS.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware-only or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.
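A minimal sketch of the general idea behind local patch-based restoration follows: each blurred patch is replaced by the sharp counterpart of its nearest neighbor in a pre-built dictionary of blurred/sharp patch pairs. The dictionary contents, patch size, and nearest-neighbor search are illustrative assumptions; the paper's actual super-resolution method and wavefront coding parameters are not reproduced here.

```python
# Hypothetical nearest-neighbor patch lookup for image restoration.
# The dictionary and images below are random stand-ins for real training data.
import numpy as np

def restore(blurred, dictionary, patch=8):
    """dictionary: list of (blurred_patch, sharp_patch) pairs as flat arrays of length patch*patch."""
    keys = np.stack([b for b, _ in dictionary])      # blurred patches, shape (N, patch*patch)
    values = np.stack([s for _, s in dictionary])    # matching sharp patches
    out = np.zeros_like(blurred)
    h, w = blurred.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            q = blurred[y:y + patch, x:x + patch].reshape(-1)
            idx = np.argmin(((keys - q) ** 2).sum(axis=1))   # nearest blurred patch
            out[y:y + patch, x:x + patch] = values[idx].reshape(patch, patch)
    return out

rng = np.random.default_rng(0)
dictionary = [(rng.random(64), rng.random(64)) for _ in range(500)]
blurred = rng.random((32, 32))
print(restore(blurred, dictionary).shape)
```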
Anthropometric Accommodation in Space Suit Design
NASA Technical Reports Server (NTRS)
Rajulu, Sudhakar; Thaxton, Sherry
2007-01-01
Design requirements for next generation hardware are in process at NASA. Anthropometry requirements are given in terms of minimum and maximum sizes for critical dimensions that hardware must accommodate. These dimensions drive vehicle design and suit design, and implicitly have an effect on crew selection and participation. At this stage in the process, stakeholders such as cockpit and suit designers were asked to provide lists of dimensions that will be critical for their design. In addition, they were asked to provide technically feasible minimum and maximum ranges for these dimensions. Using an adjusted 1988 Anthropometric Survey of U.S. Army (ANSUR) database to represent a future astronaut population, the accommodation ranges provided by the suit critical dimensions were calculated. This project involved participation from the Anthropometry and Biomechanics facility (ABF) as well as suit designers, with suit designers providing expertise about feasible hardware dimensions and the ABF providing accommodation analysis. The initial analysis provided the suit design team with the accommodation levels associated with the critical dimensions provided early in the study. Additional outcomes will include a comparison of principal components analysis as an alternate method for anthropometric analysis.
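A minimal sketch of how an accommodation level can be computed once critical dimensions and feasible hardware ranges are defined: a subject is accommodated only if every critical dimension falls inside the hardware's minimum-to-maximum range. The dimension names, limits, and synthetic population below are illustrative assumptions, not the ANSUR-derived values or the actual suit requirements.

```python
# Hypothetical accommodation analysis over a synthetic anthropometric sample.
import numpy as np

limits = {"hand_length_mm": (165.0, 215.0), "hand_breadth_mm": (75.0, 105.0)}

rng = np.random.default_rng(0)
population = {   # stand-in for an adjusted survey database
    "hand_length_mm": rng.normal(190.0, 10.0, 5000),
    "hand_breadth_mm": rng.normal(88.0, 6.0, 5000),
}

accommodated = np.ones(5000, dtype=bool)
for dim, (lo, hi) in limits.items():
    accommodated &= (population[dim] >= lo) & (population[dim] <= hi)

print(f"Accommodation level: {100.0 * accommodated.mean():.1f}% of the sample")
```

Note that requiring all dimensions to fall in range simultaneously usually yields a lower accommodation level than any single dimension suggests, which is why multivariate methods such as principal components analysis are considered as an alternative.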
Generation of 2N + 1-scroll existence in new three-dimensional chaos systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yue; Guan, Jian; Ma, Chunyang
2016-08-15
We propose a systematic methodology for creating 2N + 1-scroll chaotic attractors from a simple three-dimensional system, which is named the translation chaotic system. It satisfies the condition a_12 a_21 = 0, while the Chua system satisfies a_12 a_21 > 0. In this paper, we also propose an effective design and an analytical approach for constructing 2N + 1 scrolls, the translation transformation principle. The dynamical properties of the system are also studied in detail. MATLAB simulation results show very sophisticated dynamical behaviors and unique chaotic behaviors of the system. It provides a new approach for 2N + 1-scroll attractors. Finally, to explore the potential use in technological applications, a novel block circuit diagram is also designed for the hardware implementation of 1-, 3-, 5-, and 7-scroll attractors via switching. The translation chaotic system has the merit of convenience and high sensitivity to initial values, showing potential for future engineering chaos design.
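The translation chaotic system's own equations are not given in the abstract, so as a stand-in the sketch below integrates the classic double-scroll Chua system, which the paper uses as its point of comparison, to show how such multi-scroll systems are typically simulated numerically.

```python
# Stand-in example: integrating the classic double-scroll Chua system with SciPy.
# Parameters are standard textbook values, not the translation chaotic system.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 15.6, 28.0
m0, m1 = -1.143, -0.714

def chua(t, s):
    x, y, z = s
    # Piecewise-linear nonlinearity of the Chua diode
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return [alpha * (y - x - fx), x - y + z, -beta * y]

sol = solve_ivp(chua, (0.0, 200.0), [0.1, 0.0, 0.0], max_step=0.01)
# The trajectory visits two lobes (a double scroll); its x extent hints at both.
print("x range:", sol.y[0].min(), sol.y[0].max())
```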
An integrated ball projection technology for the study of dynamic interceptive actions.
Stone, J A; Panchuk, D; Davids, K; North, J S; Fairweather, I; Maynard, I W
2014-12-01
Dynamic interceptive actions, such as catching or hitting a ball, are important task vehicles for investigating the complex relationship between cognition, perception, and action in performance environments. Representative experimental designs have become more important recently, highlighting the need for research methods to ensure that the coupling of information and movement is faithfully maintained. However, retaining representative design while ensuring systematic control of experimental variables is challenging, due to the traditional tendency to employ methods that typically involve use of reductionist motor responses such as button-pressing or micromovements. Here, we outline the methodology behind a custom-built, integrated ball projection technology that allows images of advanced visual information to be synchronized with ball projection. This integrated technology supports the controlled presentation of visual information to participants while they perform dynamic interceptive actions. We discuss theoretical ideas behind the integration of hardware and software, along with practical issues resolved in technological design, and emphasize how the system can be integrated with emerging developments such as mixed reality environments. We conclude by considering future developments and applications of the integrated projection technology for research in human movement behaviors.
Radiation Specifications for Fission Power Conversion Component Materials
NASA Technical Reports Server (NTRS)
Bowman, Cheryl L.; Shin, E. Eugene; Mireles, Omar R.; Radel, Ross F.; Qualls, A. Louis
2011-01-01
NASA has been supporting design studies and technology development that could provide power to an outpost on the moon, Mars, or an asteroid. One power-generation system that is independent of sunlight or power-storage limitations is a fission-based power plant. There is a wealth of terrestrial system heritage that can be transferred to the design and fabrication of a fission power system for space missions, but there are certain design aspects that require qualification. The radiation tolerance of the power conversion system requires scrutiny because the compact nature of a space power plant restricts the dose reduction methodologies compared to those used in terrestrial systems. An integrated research program has been conducted to establish the radiation tolerance of power conversion system-component materials. The radiation limit specifications proposed for a Fission Power System power convertor are a 10 Mrad ionizing dose and a 5 x 10^14 neutrons per square centimeter fluence for a convertor operating at 150 C. Specific component materials and their radiation tolerances are discussed. This assessment is for the power convertor hardware; electronic components are not covered here.
Flight software development for the isothermal dendritic growth experiment
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.; Winsa, Edward A.; Glicksman, Martin E.
1989-01-01
The Isothermal Dendritic Growth Experiment (IDGE) is a microgravity materials science experiment scheduled to fly in the cargo bay of the shuttle on the United States Microgravity Payload (USMP) carrier. The experiment will be operated by real-time control software which will not only monitor and control onboard experiment hardware, but will also communicate, via downlink data and uplink commands, with the Payload Operations Control Center (POCC) at NASA George C. Marshall Space Flight Center (MSFC). The software development approach being used to implement this system began with software functional requirements specification. This was accomplished using the Yourdon/DeMarco methodology as supplemented by the Ward/Mellor real-time extensions. The requirements specification in combination with software prototyping was then used to generate a detailed design consisting of structure charts, module prologues, and Program Design Language (PDL) specifications. This detailed design will next be used to code the software, followed finally by testing against the functional requirements. The result will be a modular real-time control software system with traceability through every phase of the development process.
Flight software development for the isothermal dendritic growth experiment
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.; Winsa, Edward A.; Glicksman, M. E.
1990-01-01
The Isothermal Dendritic Growth Experiment (IDGE) is a microgravity materials science experiment scheduled to fly in the cargo bay of the shuttle on the United States Microgravity Payload (USMP) carrier. The experiment will be operated by real-time control software which will not only monitor and control onboard experiment hardware, but will also communicate, via downlink data and uplink commands, with the Payload Operations Control Center (POCC) at NASA George C. Marshall Space Flight Center (MSFC). The software development approach being used to implement this system began with software functional requirements specification. This was accomplished using the Yourdon/DeMarco methodology as supplemented by the Ward/Mellor real-time extensions. The requirements specification in combination with software prototyping was then used to generate a detailed design consisting of structure charts, module prologues, and Program Design Language (PDL) specifications. This detailed design will next be used to code the software, followed finally by testing against the functional requirements. The result will be a modular real-time control software system with traceability through every phase of the development process.
NASA Technical Reports Server (NTRS)
Anderson, Leif; Box, Neil; Carter-Journet, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael
2012-01-01
Purpose of presentation: (1) provide a status update on the developing methodology to revise sub-system sparing targets; (2) describe how to incorporate uncertainty into spares assessments and why it is important to do so; and (3) demonstrate hardware risk postures through PACT evaluation.
NASA Technical Reports Server (NTRS)
1972-01-01
A long life assurance program for the development of design, process, test, and application guidelines for achieving reliable spacecraft hardware was conducted. The study approach consisted of a review of technical data performed concurrently with a survey of the aerospace industry. The data reviewed included design and operating characteristics, failure histories and solutions, and similar documents. The topics covered by the guidelines are reported. It is concluded that long life hardware is achieved through meticulous attention to many details and no simple set of rules can suffice.
Three-Dimensional Nanobiocomputing Architectures With Neuronal Hypercells
2007-06-01
Neumann architectures, and CMOS fabrication. Novel solutions of massively parallel distributed computing and processing (pipelined due to systolic... and processing platforms utilizing molecular hardware within an enabling organization and architecture. The design technology is based on utilizing a... Microsystems and Nanotechnologies investigated a novel 3D3 (Hardware Software Nanotechnology) technology to design super-high performance computing
Marketing and Distributive Education Curriculum Guide: Hardware-Building Materials, Farm and Garden.
ERIC Educational Resources Information Center
Cluck, Janice Bora
Designed to be used with the General Marketing Curriculum Planning Guide (ED 156 860), this guide is intended to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing for farm and garden hardware building materials; it is designed also to allow marketing and distributive education…
Organizational Analysis of the United States Army Evaluation Center
2014-12-01
analysis of qualitative or quantitative data obtained from design reviews, hardware inspections, M&S, hardware and software testing, metrics review... Research Development Test & Evaluation (RDT&E) appropriation account. The Defense Acquisition Portal ACQuipedia website describes RDT&E as “one of the... research, design, development, test and evaluation, production, installation, operation, and maintenance; data collection; processing and analysis
Reliability achievement in high technology space systems
NASA Technical Reports Server (NTRS)
Lindstrom, D. L.
1981-01-01
The production of failure-free hardware is discussed. The elements required to achieve such hardware are: technical expertise to design, analyze, and fully understand the design; use of high reliability parts and materials control in the manufacturing process; and testing to understand the system and weed out defects. The durability of the Hughes family of satellites is highlighted.
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
Error protection as a means of assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
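As a small worked example of the kind of figure computed in such an analysis: for an n-bit word protected by a single parity bit, an error pattern goes undetected only when an even, nonzero number of bits flip, so the undetected-word-error probability can be summed directly from the bit error rate. The word length and error rates below are illustrative, not the MEI/MIA values or their actual coding schemes.

```python
# Undetected-word-error probability for a single-parity-check word of n bits,
# assuming independent bit errors at rate `ber`. Values are illustrative only.
from math import comb

def p_undetected(n_bits, ber):
    return sum(comb(n_bits, k) * ber**k * (1 - ber)**(n_bits - k)
               for k in range(2, n_bits + 1, 2))

for ber in (1e-6, 1e-4, 1e-2):   # nominal through burst-like noise levels
    print(f"BER={ber:.0e}  P(undetected word error)={p_undetected(28, ber):.3e}")
```

Stronger codes (e.g., CRCs) are analyzed the same way but over their own sets of undetectable error patterns.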
The use of COSMIC NASTRAN in an integrated conceptual design environment
NASA Technical Reports Server (NTRS)
White, Gil
1989-01-01
Changes in both software and hardware are rapidly bringing conceptual engineering tools like finite element analysis into mainstream mechanical design. Systems that integrate all phases of the manufacturing process provide the most cost benefits. The application of programming concepts like object-oriented programming allows for the encapsulation of intelligent data within the design geometry. This, combined with the declining cost of per-seat hardware, brings new alternatives to the user.
Closed-Loop Neuromorphic Benchmarks
Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris
2015-01-01
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
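The following is a minimal sketch of an error-driven learning rule operating inside a closed loop, of the general kind such benchmarks exercise: a crude one-joint plant with an unknown constant disturbance is driven toward a target by a PD term plus a learned bias that is updated from the tracking error. The plant model, gains, and learning rate are illustrative assumptions, not the published benchmark code.

```python
# Hypothetical closed-loop benchmark: error-driven adaptation of a control bias.
# All constants are illustrative; this is not the published benchmark suite.
import numpy as np

dt, steps = 0.01, 2000
target, theta, omega = 1.0, 0.0, 0.0
disturbance = -0.8            # unknown external torque the learner must cancel
w, lr = 0.0, 0.5              # learned compensation term and learning rate

for _ in range(steps):
    error = target - theta
    u = 4.0 * error - 1.0 * omega + w          # PD control plus learned bias
    w += lr * error * dt                        # error-driven update of the bias
    omega += (u + disturbance) * dt             # simple double-integrator plant
    theta += omega * dt

print(f"final angle {theta:.3f}, learned bias {w:.3f} (ideal {-disturbance:.3f})")
```

The point of the benchmark methodology is that the same learning rule can be scored across a randomly generated family of such plants rather than a single hand-tuned one.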
Development of a Sensor Node for Precision Horticulture
López, Juan A.; Soto, Fulgencio; Sánchez, Pedro; Iborra, Andrés; Suardiaz, Juan; Vera, Juan A.
2009-01-01
This paper presents the design of a new wireless sensor node (GAIA Soil-Mote) for precision horticulture applications which permits the use of precision agricultural instruments based on the SDI-12 standard. Wireless communication is achieved with a transceiver compliant with the IEEE 802.15.4 standard. The GAIA Soil-Mote software implementation is based on TinyOS. A two-phase methodology was devised to validate the design of this sensor node. The first phase consisted of laboratory validation of the proposed hardware and software solution, including a study on power consumption and autonomy. The second phase consisted of implementing a monitoring application in a real broccoli (Brassica oleracea L. var Marathon) crop in Campo de Cartagena in south-east Spain. In this way the sensor node was validated in real operating conditions. This type of application was chosen because there is a large potential market for it in the farming sector, especially for the development of precision agriculture applications. PMID:22412309
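A minimal sketch of the kind of power-budget and autonomy estimate used during laboratory validation of a duty-cycled sensor node is shown here. The currents, phase durations, and battery capacity are illustrative assumptions, not measured GAIA Soil-Mote figures.

```python
# Hypothetical duty-cycled power budget for a wireless sensor node.
SAMPLE_PERIOD_S = 15 * 60          # one measurement cycle every 15 minutes

phases = [                          # (name, current in mA, seconds per cycle)
    ("sleep",        0.02, SAMPLE_PERIOD_S - 12.0),
    ("sdi12_sample", 8.0,  10.0),   # powering and reading the SDI-12 instrument
    ("radio_tx",     25.0, 2.0),    # IEEE 802.15.4 transmission burst
]

avg_ma = sum(i * t for _, i, t in phases) / SAMPLE_PERIOD_S
battery_mah = 2500.0
print(f"average current: {avg_ma:.3f} mA")
print(f"estimated autonomy: {battery_mah / avg_ma / 24.0:.0f} days")
```

Estimates of this kind are then checked against the measured consumption in the laboratory phase before field deployment.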
Anzalone, Gerald C; Glover, Alexandra G; Pearce, Joshua M
2013-04-19
The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories.
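A minimal sketch of the data-reduction side of such an instrument follows: detector counts are converted to absorbance, a linear calibration is fit against prepared standards, and an unknown sample is read back from the fit. The Beer-Lambert relationship is standard practice for colorimetry, but the counts, standards, and linear calibration below are illustrative assumptions rather than the paper's firmware or data.

```python
# Hypothetical colorimeter calibration and readback for closed reflux COD vials.
import math
import numpy as np

def absorbance(sample_counts, blank_counts):
    # Beer-Lambert: A = -log10(I / I0), using detector counts as intensity proxies
    return -math.log10(sample_counts / blank_counts)

standards_mg_l = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])
standards_abs  = np.array([0.002, 0.031, 0.078, 0.155, 0.312])

slope, intercept = np.polyfit(standards_abs, standards_mg_l, 1)  # linear calibration

unknown_abs = absorbance(sample_counts=61000, blank_counts=82000)
print(f"estimated COD: {slope * unknown_abs + intercept:.0f} mg/L")
```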
Development of a sensor node for precision horticulture.
López, Juan A; Soto, Fulgencio; Sánchez, Pedro; Iborra, Andrés; Suardiaz, Juan; Vera, Juan A
2009-01-01
This paper presents the design of a new wireless sensor node (GAIA Soil-Mote) for precision horticulture applications which permits the use of precision agricultural instruments based on the SDI-12 standard. Wireless communication is achieved with a transceiver compliant with the IEEE 802.15.4 standard. The GAIA Soil-Mote software implementation is based on TinyOS. A two-phase methodology was devised to validate the design of this sensor node. The first phase consisted of laboratory validation of the proposed hardware and software solution, including a study on power consumption and autonomy. The second phase consisted of implementing a monitoring application in a real broccoli (Brassica oleracea L. var Marathon) crop in Campo de Cartagena in south-east Spain. In this way the sensor node was validated in real operating conditions. This type of application was chosen because there is a large potential market for it in the farming sector, especially for the development of precision agriculture applications.
Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.
2013-01-01
The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032
Empirical cost models for estimating power and energy consumption in database servers
NASA Astrophysics Data System (ADS)
Valdivia Garcia, Harold Dwight
The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns, and relational cardinality. Moreover, our method does not require measurements of individual hardware components, but rather the total power and energy consumption measured at the server. We have implemented our methodology and run experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
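A minimal sketch of fitting such a model with ordinary least squares over readily available query statistics is shown below. The feature set matches the statistics named in the abstract, but the data are synthetic and the coefficients are assumptions made purely for illustration.

```python
# Hypothetical multiple linear regression power-cost model on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
selectivity = rng.uniform(0.0, 1.0, n)
tuple_size  = rng.uniform(64, 1024, n)        # bytes
num_columns = rng.integers(1, 32, n)
cardinality = rng.uniform(1e3, 1e7, n)        # rows

# Synthetic "measured" server power (watts) with noise, standing in for real traces.
power = (120 + 35 * selectivity + 0.01 * tuple_size + 0.4 * num_columns
         + 4e-6 * cardinality + rng.normal(0, 2.0, n))

X = np.column_stack([np.ones(n), selectivity, tuple_size, num_columns, cardinality])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print("fitted coefficients (intercept, selectivity, tuple_size, columns, cardinality):")
print(np.round(coef, 4))
```

With real traces, the same fit would be made against total server power measured at the wall rather than per-component instrumentation, which is the key practical point of the methodology.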
Multimission helicopter information display technology
NASA Astrophysics Data System (ADS)
Terry, William S.
1995-06-01
A new Operator display subsystem is being incorporated as part of the next generation United States Navy (USN) helicopter avionics system to be integrated into the Multi-Mission Helicopter (MMH), which will replace both the SH-60B and the SH-60F in 2001. This subsystem exploits state-of-the-art technology for the display hardware, the display driver hardware, information presentation methodologies, and software architecture. The baseline technologies have evolved during the development period, and the solution has been modified to include current elements, including high-resolution AMLCD color displays that are sunlight readable, highly reliable, and significantly lighter than CRT technology, as well as Reduced Instruction Set Computer (RISC) based high-performance display generators that have only recently become feasible to implement in a military aircraft. This paper describes the overall subsystem architecture, some detail on the individual elements along with supporting rationale, and the manner in which the display subsystem provides the necessary tools to significantly enhance the performance of the weapon system through the vital Operator-System Interface. Also addressed is a summary of the evolution of design leading to the current approach to MMH Operator displays and display processing, as well as the growth path that the MMH display subsystem will most likely follow as additional technology evolution occurs.
Hardware problems encountered in solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Cash, M.
1978-01-01
Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. Described are hardware problems, which range from simple to obscure and complex, and their resolution.
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications
Stoppe, Jannis; Drechsler, Rolf
2015-01-01
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC. PMID:25946632
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications.
Stoppe, Jannis; Drechsler, Rolf
2015-05-04
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC.
Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations
NASA Technical Reports Server (NTRS)
Slaughter, D. W.
1977-01-01
A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered, and future plans are discussed. Interconnection techniques are described, as well as connectors and related hardware available at both the microcircuit packaging and main-frame levels. General applications information is also provided.
Apollo experience report: Battery subsystem
NASA Technical Reports Server (NTRS)
Trout, J. B.
1972-01-01
Experience with the Apollo command service module and lunar module batteries is discussed. Significant hardware development concepts and hardware test results are summarized, and the operational performance of batteries on the Apollo 7 to 13 missions is discussed in terms of performance data, mission constraints, and basic hardware design and capability. Also, the flight performance of the Apollo battery charger is discussed. Inflight data are presented.
Evaluating Usability in a Distance Digital Systems Laboratory Class
ERIC Educational Resources Information Center
Kostaras, N.; Xenos, M.; Skodras, A. N.
2011-01-01
This paper presents the usability evaluation of a digital systems laboratory class offered to distance-learning students. It details the way in which students can participate remotely in such a laboratory, the methodology employed in the usability assessment of the laboratory infrastructure (hardware and software), and also outlines the main…
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Dynamic Response Testing in an Electrically Heated Reactor Test Facility
NASA Astrophysics Data System (ADS)
Bragg-Sitton, Shannon M.; Morton, T. J.
2006-01-01
Non-nuclear testing can be a valuable tool in the development of a space nuclear power or propulsion system. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Standard testing allows one to fully assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. The integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing. By implementing a neutronic response model to simulate the dynamic response that would be expected in a fueled reactor system, one can better understand system integration issues, characterize integrated system response times and response characteristics, and assess potential design improvements at a relatively small fiscal investment. Initial system dynamic response testing was demonstrated on the integrated SAFE-100a heat pipe (HP) cooled, electrically heated reactor and heat exchanger hardware, utilizing a one-group solution to the point kinetics equations to simulate the expected neutronic response of the system. Reactivity feedback calculations were then based on a bulk reactivity feedback coefficient and measured average core temperature. This paper presents preliminary results from similar dynamic testing of a direct drive gas cooled reactor system (DDG), demonstrating the applicability of the testing methodology to any reactor type and demonstrating the variation in system response characteristics in different reactor concepts. Although the HP and DDG designs both utilize a fast spectrum reactor, the method of cooling the reactor differs significantly, leading to a variable system response that can be demonstrated and assessed in a non-nuclear test facility. Planned system upgrades to allow implementation of higher fidelity dynamic testing are also discussed. Proposed DDG testing will utilize a higher fidelity point kinetics model to control core power transients, and reactivity feedback will be based on localized feedback coefficients and several independent temperature measurements taken within the core block. This paper presents preliminary test results and discusses the methodology that will be implemented in follow-on DDG testing and the additional instrumentation required to implement high fidelity dynamic testing.
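The following is a minimal sketch of the kind of one-group point-kinetics model with a bulk temperature-reactivity feedback that can drive electrical heaters in a non-nuclear test bed. All constants (delayed-neutron parameters, feedback coefficient, thermal capacitance, heat-rejection term) are illustrative assumptions, not the SAFE-100a or DDG values.

```python
# One-group point kinetics with bulk temperature feedback (illustrative constants).
import numpy as np
from scipy.integrate import solve_ivp

beta, Lambda, lam = 0.0065, 1e-4, 0.08        # delayed fraction, generation time (s), decay const (1/s)
alpha_T = -2e-5                               # reactivity feedback per kelvin
C_th, UA, T_cool = 2e3, 50.0, 400.0           # thermal capacitance (J/K), loss term (W/K), sink temp (K)
P0, T0 = 1e4, 600.0                           # nominal power (W) and initial core temperature (K)
rho_ext = 0.001                               # small external reactivity step at t = 0

def rhs(t, y):
    n, c, T = y                               # relative power, precursor concentration, temperature
    rho = rho_ext + alpha_T * (T - T0)
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    dT = (P0 * n - UA * (T - T_cool)) / C_th
    return [dn, dc, dT]

y0 = [1.0, beta / (Lambda * lam), T0]         # start at equilibrium before the step
sol = solve_ivp(rhs, (0.0, 300.0), y0, method="LSODA", max_step=0.5)
print(f"relative power settles near {sol.y[0, -1]:.2f}, core temperature near {sol.y[2, -1]:.0f} K")
```

In the test bed, the computed relative power would be scaled and commanded to the electric heaters at each control interval, while the measured core temperatures close the feedback loop in place of the bulk temperature state used here.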
Thermal Hardware for the Thermal Analyst
NASA Technical Reports Server (NTRS)
Steinfeld, David
2015-01-01
The presentation will be given at the 26th Annual Thermal Fluids Analysis Workshop (TFAWS 2015) hosted by the Goddard Space Flight Center (GSFC) Thermal Engineering Branch (Code 545). NCTS 21070-1. Most thermal analysts do not have a good background in the hardware which thermally controls the spacecraft they design. SINDA and Thermal Desktop models are useful, but knowing how they apply to the actual thermal hardware (heaters, thermostats, thermistors, MLI blanketing, optical coatings, etc.) is just as important. The course will delve into thermal hardware and its application techniques on actual spacecraft. Knowledge of how thermal hardware is used and applied will make a thermal analyst a better engineer.
NASA Astrophysics Data System (ADS)
MacDonald, B.; Finot, M.; Heiken, B.; Trowbridge, T.; Ackler, H.; Leonard, L.; Johnson, E.; Chang, B.; Keating, T.
2009-08-01
Skyline Solar Inc. has developed a novel silicon-based PV system to simultaneously reduce energy cost and improve scalability of solar energy. The system achieves high gain through a combination of high capacity factor and optical concentration. The design approach drives innovation not only into the details of the system hardware, but also into manufacturing and deployment-related costs and bottlenecks. The result of this philosophy is a modular PV system whose manufacturing strategy relies only on currently existing silicon solar cell, module, reflector and aluminum parts supply chains, as well as turnkey PV module production lines and metal fabrication industries that already exist at enormous scale. Furthermore, with a high gain system design, the generating capacity of all components is multiplied, leading to a rapidly scalable system. The product design and commercialization strategy cooperate synergistically to promise dramatically lower LCOE with substantially lower risk relative to materials-intensive innovations. In this paper, we will present the key design aspects of Skyline's system, including aspects of the optical, mechanical and thermal components, revealing the ease of scalability, low cost and high performance. Additionally, we will present performance and reliability results on modules and the system, using ASTM and UL/IEC methodologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric A. Wernert; William R. Sherman; Patrick O'Leary
Immersive visualization makes use of the medium of virtual reality (VR) - it is a subset of virtual reality focused on the application of VR technologies to scientific and information visualization. As the name implies, there is a particular focus on the physically immersive aspect of VR that more fully engages the perceptual and kinesthetic capabilities of the scientist with the goal of producing greater insight. The immersive visualization community is uniquely positioned to address the analysis needs of the wide spectrum of domain scientists who are becoming increasingly overwhelmed by data. The outputs of computational science simulations and high-resolution sensors are creating a data deluge. Data is coming in faster than it can be analyzed, and there are countless opportunities for discovery that are missed as the data speeds by. By more fully utilizing the scientist's visual and other sensory systems, and by offering a more natural user interface with which to interact with computer-generated representations, immersive visualization offers great promise in taming this data torrent. However, increasing the adoption of immersive visualization in scientific research communities can only happen by simultaneously lowering the engagement threshold while raising the measurable benefits of adoption. Scientists' time spent immersed with their data will thus be rewarded with higher productivity, deeper insight, and improved creativity. Immersive visualization ties together technologies and methodologies from a variety of related but frequently disjoint areas, including hardware, software and human-computer interaction (HCI) disciplines. In many ways, hardware is a solved problem. There are well established technologies including large walk-in systems such as the CAVE(TM) and head-based systems such as the Wide-5(TM). The advent of new consumer-level technologies now enables an entirely new generation of immersive displays, with smaller footprints and costs, widening the potential consumer base. While one would be hard-pressed to call software a solved problem, we now understand considerably more about best practices for designing and developing sustainable, scalable software systems, and we have useful software examples that illuminate the way to even better implementations. As with any research endeavour, HCI will always be exploring new topics in interface design, but we now have a sizable knowledge base of the strengths and weaknesses of the human perceptual systems and we know how to design effective interfaces for immersive systems. So, in a research landscape with a clear need for better visualization and analysis tools, a methodology in immersive visualization that has been shown to effectively address some of those needs, and vastly improved supporting technologies and knowledge of hardware, software, and HCI, why hasn't immersive visualization 'caught on' more with scientists? What can we do as a community of immersive visualization researchers and practitioners to facilitate greater adoption by scientific communities so as to make the transition from 'the promise of virtual reality' to 'the reality of virtual reality'?
Embedded algorithms within an FPGA-based system to process nonlinear time series data
NASA Astrophysics Data System (ADS)
Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.
2008-03-01
This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better computational and power efficiency.
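As a software reference for the Hilbert-transform step that the FPGA implements in hardware, the sketch below uses SciPy to extract instantaneous amplitude and frequency from a nonstationary test signal. The chirp signal and its parameters are illustrative; they are not the project's structural response data or the fixed-point FPGA implementation.

```python
# Software reference for Hilbert-transform feature extraction (floating point).
import numpy as np
from scipy.signal import hilbert, chirp

fs = 1000.0
t = np.arange(0.0, 5.0, 1.0 / fs)
# Nonstationary test signal: linear chirp with slow amplitude modulation
x = chirp(t, f0=5.0, f1=25.0, t1=5.0, method="linear") * (1.0 + 0.3 * np.sin(2 * np.pi * 0.5 * t))

analytic = hilbert(x)
amplitude = np.abs(analytic)                        # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)     # instantaneous frequency (Hz)

print(f"frequency sweeps from about {inst_freq[50]:.1f} Hz to {inst_freq[-50]:.1f} Hz")
```

A hardware version of this computation must approximate the transform with a finite filter and fixed-point arithmetic, which is exactly where the accuracy and resource tradeoffs discussed above arise.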
NASA Technical Reports Server (NTRS)
1973-01-01
Power subsystem cost/weight tradeoffs are discussed for the Venus probe spacecraft. The cost estimations of power subsystem units were based upon DSCS-2, DSP, and Pioneer 10 and 11 hardware design and development and manufacturing experience. Parts count and degree of modification of existing hardware were factored into the estimate of manufacturing and design and development costs. Cost data includes sufficient quantities of units to equip probe bus and orbiter versions. It was based on the orbiter complement of equipment, but the savings in fewer slices for the probe bus balance the cost of the different probe bus battery. The preferred systems for the Thor/Delta and for the Atlas/Centaur are discussed. The weights of the candidate designs were based upon slice or tray weights for functionally equivalent circuitry measured on existing hardware such as Pioneers 10 and 11, Intelsat 3, DSCS-2, or DSP programs. Battery weights were based on measured cell weight data adjusted for case weight or off-the-shelf battery weights. The solar array weight estimate was based upon recent hardware experience on DSCS-2 and DSP arrays.
Computerized atmospheric trace contaminant control simulation for manned spacecraft
NASA Technical Reports Server (NTRS)
Perry, J. L.
1993-01-01
Buildup of atmospheric trace contaminants in enclosed volumes such as a spacecraft may lead to potentially serious health problems for the crew members. For this reason, active control methods must be implemented to minimize the concentration of atmospheric contaminants to levels that are considered safe for prolonged, continuous exposure. Designing hardware to accomplish this has traditionally required extensive testing to characterize and select appropriate control technologies. Data collected since the Apollo project can now be used in a computerized performance simulation to predict the performance and life of contamination control hardware to allow for initial technology screening, performance prediction, and operations and contingency studies to determine the most suitable hardware approach before specific design and testing activities begin. The program, written in FORTRAN 77, provides contaminant removal rate, total mass removed, and per pass efficiency for each control device for discrete time intervals. In addition, projected cabin concentration is provided. Input and output data are manipulated using commercial spreadsheet and data graphing software. These results can then be used in analyzing hardware design parameters such as sizing and flow rate, overall process performance and program economics. Test performance may also be predicted to aid test design.
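A minimal sketch of the bookkeeping such a simulation performs over discrete time intervals is shown below for a single contaminant and a single control device with a fixed per-pass removal efficiency. The cabin volume, generation rate, flow, and efficiency are illustrative values, not the program's database or any spacecraft requirement.

```python
# Single-contaminant cabin mass balance with a fixed per-pass removal efficiency.
V = 100.0          # cabin free volume, m^3
G = 0.5            # contaminant generation rate, mg/hr
Q = 15.0           # flow through the control device, m^3/hr
eta = 0.60         # per-pass removal efficiency
dt = 0.1           # time step, hr

conc = 0.0                          # cabin concentration, mg/m^3
removed_total = 0.0
for step in range(int(720 / dt)):   # simulate 30 days
    removal = eta * Q * conc        # mg/hr removed during this interval
    conc += (G - removal) * dt / V
    removed_total += removal * dt

print(f"equilibrium concentration ~ {conc:.3f} mg/m^3 "
      f"(analytic {G / (eta * Q):.3f}), total mass removed {removed_total:.1f} mg")
```

Outputs of this form (removal rate, total mass removed, per-pass efficiency, and projected cabin concentration per interval) are what allow sizing and flow-rate tradeoffs to be screened before hardware testing.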
NASA's Space Launch System Program Update
NASA Technical Reports Server (NTRS)
May, Todd; Lyles, Garry
2015-01-01
Hardware and software for the world's most powerful launch vehicle for exploration is being welded, assembled, and tested today in high bays, clean rooms and test stands across the United States. NASA's Space Launch System (SLS) continued to make significant progress in the past year, including firing tests of both main propulsion elements, manufacturing of flight hardware, and the program Critical Design Review (CDR). Developed with the goals of safety, affordability, and sustainability, SLS will deliver unmatched capability for human and robotic exploration. The initial Block 1 configuration will deliver more than 70 metric tons (t) (154,000 pounds) of payload to low Earth orbit (LEO). The evolved Block 2 design will deliver some 130 t (286,000 pounds) to LEO. Both designs offer enormous opportunity and flexibility for larger payloads, simplifying payload design as well as ground and on-orbit operations, shortening interplanetary transit times, and decreasing overall mission risk. Over the past year, every vehicle element has manufactured or tested hardware, including flight hardware for Exploration Mission 1 (EM-1). This paper will provide an overview of the progress made over the past year and provide a glimpse of upcoming milestones on the way to a 2018 launch readiness date.
System and Mass Storage Study for Defense Mapping Agency Topographic Center (DMATC/HC)
1977-04-01
The assessment should be based on carefully designed control conditions: data volume, resolution, function, etc. ... categories: hardware control and library management support. This software is designed to interface with IBM 360/370 OS and OS/VS. No interface with a... laser recording unit includes a programmable recorder control subsystem which can be designed to provide a hardware and software interface compatible
2017-02-19
software systems: the students design and build robotics software towards real-world applications, without being distracted by hardware issues; (ii) it... high school students require the students to focus on building and integrating the hardware that makes up the robot, at the expense of designing and... robotics programs focus on the mechanics; as a result, they do not have room for students to design and implement relatively complex software systems, as
Multi-Mission System Architecture Platform: Design and Verification of the Remote Engineering Unit
NASA Technical Reports Server (NTRS)
Sartori, John
2005-01-01
The Multi-Mission System Architecture Platform (MSAP) represents an effort to bolster efficiency in the spacecraft design process. By incorporating essential spacecraft functionality into a modular, expandable system, the MSAP provides a foundation on which future spacecraft missions can be developed. Once completed, the MSAP will provide support for missions with varying objectives, while maintaining a level of standardization that will minimize redesign of general system components. One subsystem of the MSAP, the Remote Engineering Unit (REU), functions by gathering engineering telemetry from strategic points on the spacecraft and providing these measurements to the spacecraft's Command and Data Handling (C&DH) subsystem. Before the MSAP Project reaches completion, all hardware, including the REU, must be verified. However, the speed and complexity of the REU circuitry rule out the possibility of physical prototyping. Instead, the MSAP hardware is designed and verified using the Verilog Hardware Description Language (HDL). An increasingly popular means of digital design, HDL programming provides a level of abstraction, which allows the designer to focus on functionality while logic synthesis tools take care of gate-level design and optimization. As verification of the REU proceeds, errors are quickly remedied, preventing costly changes during hardware validation. After undergoing the careful, iterative processes of verification and validation, the REU and MSAP will prove their readiness for use in a multitude of spacecraft missions.
Thermal management of advanced fuel cell power systems
NASA Technical Reports Server (NTRS)
Vanderborgh, N. E.; Hedstrom, J.; Huff, J.
1990-01-01
It is shown that fuel cell devices are particularly attractive for the high-efficiency, high-reliability space hardware necessary to support upcoming space missions. These low-temperature hydrogen-oxygen systems necessarily operate with two-phase water. In either PEMFCs (proton exchange membrane fuel cells) or AFCs (alkaline fuel cells), engineering design must be critically focused both on stack temperature control and on the relative humidity control necessary to sustain appropriate conductivity within the ionic conductor. Water must also be removed promptly from the hardware. Present designs for AFC space hardware accomplish thermal management through two coupled cooling loops, both driven by a heat transfer fluid, and involve a recirculation fan to remove water and heat from the stack. There appears to be a certain advantage in using product water for these purposes within PEM hardware: a single fluid can then serve both to control stack temperature, acting simultaneously as a heat transfer medium and as an evaporative coolant, and to provide the gas-phase moisture levels necessary to keep the ionic conductor at appropriate performance levels. Moreover, the humidification cooling process automatically follows current loads. This design may remove the need for recirculation gas fans, thus offering the long-term reliability essential for future space power hardware.
Skylab SO71/SO72 circadian periodicity experiment. [experimental design and checkout of hardware
NASA Technical Reports Server (NTRS)
Fairchild, M. K.; Hartmann, R. A.
1973-01-01
The circadian rhythm hardware activities from 1965 through 1973 are considered. A brief history of the programs leading to the development of the combined Skylab SO71/SO72 Circadian Periodicity Experiment (CPE) is given. SO71 is the Skylab experiment number designating the pocket mouse circadian experiment, and SO72 designates the vinegar gnat circadian experiment. Final design modifications and checkout of the CPE, integration testing with the Apollo service module CSM 117 and the launch preparation and support tasks at Kennedy Space Center are reported.
NASA Technical Reports Server (NTRS)
Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph
1990-01-01
The design and implementation of interface units for high speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. During the last few years, a number of network adapters designed to support high speed communications have emerged. The approach taken to the design of a high speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.
Optical System Design for the Next Generation Space Telescope
NASA Technical Reports Server (NTRS)
Solomon, Leonard H. (Principal Investigator); Kahan, Mark A.
1996-01-01
This report provides considerations and suggested approaches for design of the Optical Telescope Assembly and the segmented primary mirror of a Next Generation Space Telescope (NGST). Based on prior studies and hardware development, we provide data and design information on low-risk materials and hardware configurations most likely to meet low weight, low temperature and long-life requirements of the nominal 8-meter aperture NGST. We also provide preliminary data for cost and performance trades, and recommendations for technology development and demonstration required to support the system design effort.
System design of the Pioneer Venus spacecraft. Volume 7: Communication subsystem studies
NASA Technical Reports Server (NTRS)
Newlands, D. M.
1973-01-01
Communications subsystem tradeoffs were undertaken to establish a low cost and low weight design consistent with the mission requirements. Because of the weight constraint of the Thor/Delta launched configuration, minimum weight was emphasized in determining the Thor/Delta design. In contrast, because of the greatly relaxed weight constraint of the Atlas/Centaur launched configuration, minimum cost and off-the-shelf hardware were emphasized and the attendant weight penalties accepted. Communication subsystem hardware elements identified for study included probe and bus antennas (CM-6, CM-17), power amplifiers (CM-10), and the large probe transponder and small probe stable oscillator required for doppler tracking (CM-11, CM-16). In addition, particular hardware problems associated with the probe high temperature and high-g environment were investigated (CM-7).
APRON: A Cellular Processor Array Simulation and Hardware Design Tool
NASA Astrophysics Data System (ADS)
Barr, David R. W.; Dudek, Piotr
2009-12-01
We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
JSC Metal Finishing Waste Minimization Methods
NASA Technical Reports Server (NTRS)
Sullivan, Erica
2003-01-01
The paper discusses the following: Johnson Space Center (JSC) has achieved VPP Star status and is ISO 9001 compliant. The Structural Engineering Division in the Engineering Directorate is responsible for operating the metal finishing facility at JSC. The Engineering Directorate is responsible for $71.4 million of space flight hardware design, fabrication, and testing. The JSC Metal Finishing Facility processes flight hardware to support the programs, in particular schedule-critical and mission-critical flight hardware. The JSC Metal Finishing Facility is operated by Rothe Joint Venture. The Facility provides the following processes: anodizing, alodining, passivation, and pickling. The JSC Metal Finishing Facility was completely rebuilt in 1998 at a total cost of $366,000. All new tanks, electrical, plumbing, and ventilation were installed. The facility is designed to meet modern safety, environmental, and quality requirements, to minimize contamination, and to provide the highest quality finishes.
Coupled Loads Analysis of the Modified NASA Barge Pegasus and Space Launch System Hardware
NASA Technical Reports Server (NTRS)
Knight, J. Brent
2015-01-01
A Coupled Loads Analysis (CLA) has been performed for barge transport of Space Launch System hardware on the recently modified NASA barge Pegasus. The barge re-design was facilitated with detailed finite element analyses by the Army Corps of Engineers - Marine Design Center. The Finite Element Model (FEM) utilized in the design was also used in the subject CLA. The Pegasus FEM and CLA results are presented, as well as a comparison of the analysis process to that of a payload being transported to space via the Space Shuttle. Discussion of the dynamic forcing functions is included as well. The process of performing a dynamic CLA of NASA hardware during marine transport is thought to be a first and can likely support minimization of undue conservatism.
High-pressure LOX/hydrocarbon preburners and gas generators
NASA Technical Reports Server (NTRS)
Huebner, A. W.
1981-01-01
The objective of the program was to conduct a small scale hardware test program to establish the technology base required for LOX/hydrocarbon preburners and gas generators. The program consisted of six major tasks: Task I reviewed and assessed the performance prediction models and defined a subscale test program. Task II designed and fabricated this subscale hardware. Task III tested and analyzed the data from this hardware. Task IV analyzed the hot fire results and formulated a preliminary design for 40K preburner assemblies. Task V detailed the preliminary design and fabricated three 40K-size preburner assemblies: one fuel-rich LOX/CH4, one fuel-rich LOX/RP-1, and one oxidizer-rich LOX/CH4. Task VI delivered these preburner assemblies to MSFC for subsequent evaluation.
VLSI 'smart' I/O module development
NASA Astrophysics Data System (ADS)
Kirk, Dan
The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1992-01-01
The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Hardware Experiment Design for Advanced Satellite Designs describes the design of the ISDN Satellite Terminal Adapter (ISTA) capable of translating ISDN protocol traffic into time division multiple access (TDMA) signals for use by a communications satellite. The ISTA connects the Type 1 Network Termination (NT1) via the U-interface on the line termination side of the CPE to the V.35 interface for satellite uplink. The same ISTA converts in the opposite direction the V.35 to U-interface data with a simple switch setting.
Design of a nickel-hydrogen battery simulator for the NASA EOS testbed
NASA Technical Reports Server (NTRS)
Gur, Zvi; Mang, Xuesi; Patil, Ashok R.; Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1992-01-01
The hardware and software design of a nickel-hydrogen (Ni-H2) battery simulator (BS) with application to the NASA Earth Observation System (EOS) satellite is presented. The battery simulator is developed as a part of a complete testbed for the EOS satellite power system. The battery simulator involves both hardware and software components. The hardware component includes the capability of sourcing and sinking current at a constant programmable voltage. The software component includes the capability of monitoring the battery's ampere-hours (Ah) and programming the battery voltage according to an empirical model of the nickel-hydrogen battery stored in a computer.
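As a rough sketch of the software component described above (not the actual EOS testbed code; the class, coefficients, and cell count are hypothetical), an ampere-hour counter driving an empirical voltage model could look like this:

```python
# Minimal battery-simulator software loop, assuming a simple empirical
# voltage model V(soc, I); the coefficients below are illustrative only.
class NiH2BatterySimulator:
    def __init__(self, capacity_ah=50.0, soc=1.0):
        self.capacity_ah = capacity_ah   # nameplate capacity in ampere-hours
        self.soc = soc                   # state of charge, 0..1

    def update(self, current_a, dt_s):
        """Integrate current (positive = discharge) into ampere-hours."""
        self.soc -= current_a * dt_s / 3600.0 / self.capacity_ah
        self.soc = min(max(self.soc, 0.0), 1.0)
        return self.terminal_voltage(current_a)

    def terminal_voltage(self, current_a, cells=22, r_cell=0.002):
        # Hypothetical empirical fit: open-circuit voltage vs. SOC plus IR drop.
        v_oc = 1.25 + 0.15 * self.soc          # volts per cell (illustrative)
        return cells * (v_oc - current_a * r_cell)

# Example: command a 10 A discharge for one minute and read back the voltage
sim = NiH2BatterySimulator()
print(sim.update(current_a=10.0, dt_s=60.0))
```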
Human Centered Hardware Modeling and Collaboration
NASA Technical Reports Server (NTRS)
Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena
2013-01-01
In order to collaborate engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.
Prototype solar heating and combined heating and cooling systems
NASA Technical Reports Server (NTRS)
1978-01-01
Designs were completed, hardware was received, and hardware was shipped to two sites. A change was made in the heat pump working fluid. Investigation of problems with shroud coatings for the collector received emphasis.
Marshall Space Flight Center's Virtual Reality Applications Program 1993
NASA Technical Reports Server (NTRS)
Hale, Joseph P., II
1993-01-01
A Virtual Reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. Other NASA Centers, most notably Ames Research Center (ARC), have contributed to the development of the VR enabling technologies and VR systems. This VR technology development has now reached a level of maturity where specific applications of VR as a tool can be considered. The objectives of the MSFC VR Applications Program are to develop, validate, and utilize VR as a Human Factors design and operations analysis tool and to assess and evaluate VR as a tool in other applications (e.g., training, operations development, mission support, teleoperations planning, etc.). The long-term goals of this technology program are to enable specialized Human Factors analyses earlier in the hardware and operations development process and to develop more effective training and mission support systems. The capability to perform specialized Human Factors analyses earlier in the hardware and operations development process is required to better refine and validate requirements during the requirements definition phase. This leads to a more efficient design process where perturbations caused by late-occurring requirements changes are minimized. A validated set of VR analytical tools must be developed to enable a more efficient process for the design and development of space systems and operations. Similarly, training and mission support systems must exploit state-of-the-art computer-based technologies to maximize training effectiveness and enhance mission support. The approach of the VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems.
Hardware Evolution of Closed-Loop Controller Designs
NASA Technical Reports Server (NTRS)
Gwaltney, David; Ferguson, Ian
2002-01-01
This poster presentation will outline ongoing efforts at NASA MSFC to employ various evolvable hardware experimental platforms in the evolution of digital and analog circuitry for application to automatic control. Included will be information concerning the application of commercially available hardware and software, along with the use of the JPL-developed FPTA2 integrated circuit and supporting JPL-developed software. Results to date will be presented.
Large - scale Rectangular Ruler Automated Verification Device
NASA Astrophysics Data System (ADS)
Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie
2018-03-01
This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical structure of the device comprises the optical axis, the drive, the fixture, and the wheels. The control system comprises hardware and software: the hardware is based on a single-chip microcontroller, and the software controls the photoelectric autocollimator and the automatic data acquisition process. The device can acquire vertical measurement data automatically. The reliability of the device is verified by experimental comparison. The conclusions meet the requirements of the right-angle test procedure.
Methodology for Assessing Reusability of Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Childress-Thompson, Rhonda; Thomas, L. Dale; Farrington, Phillip
2017-01-01
In 2011 the Space Shuttle, the only Reusable Launch Vehicle (RLV) in the world, returned to earth for the final time. Upon retirement of the Space Shuttle, the United States (U.S.) no longer possessed a reusable vehicle or the capability to send American astronauts to space. With the National Aeronautics and Space Administration (NASA) out of the RLV business and now only pursuing Expendable Launch Vehicles (ELV), not only did companies within the U.S. start to actively pursue the development of either RLVs or reusable components, but entities around the world began to venture into the reusable market. For example, SpaceX and Blue Origin are developing reusable vehicles and engines. The Indian Space Research Organization is developing a reusable space plane and Airbus is exploring the possibility of reusing its first stage engines and avionics housed in the flyback propulsion unit referred to as the Advanced Expendable Launcher with Innovative engine Economy (Adeline). Even United Launch Alliance (ULA) has announced plans for eventually replacing the Atlas and Delta expendable rockets with a family of RLVs called Vulcan. Reuse can be categorized as either fully reusable, the situation in which the entire vehicle is recovered, or partially reusable such as the National Space Transportation System (NSTS) where only the Space Shuttle, Space Shuttle Main Engines (SSME), and Solid Rocket Boosters (SRB) are reused. With this influx of renewed interest in reusability for space applications, it is imperative that a systematic approach be developed for assessing the reusability of spaceflight hardware. The partially reusable NSTS offered many opportunities to glean lessons learned; however, when it came to efficient operability for reuse the Space Shuttle and its associated hardware fell short primarily because of its two to four-month turnaround time. Although there have been several attempts at designing RLVs in the past with the X-33, Venture Star and Delta Clipper Experimental (DC-X), reusability within the spaceflight arena is still in its infancy. With unlimited resources (namely, time and money), almost any launch vehicle and its associated hardware can be made reusable. However, an endless supply of funds for space exploration is not the case in today's economy for neither government agencies nor their commercial counterparts. Therefore, any organization wanting to be a leader in space exploration and remain competitive in this unforgiving space faring industry must confront shrinking budgets with more cost conscious and efficient designs. Therefore, standards for developing reusable spaceflight hardware need to be established. By having standards available to existing and emerging companies, some of the potential roadblocks and limitations that plagued previous attempts at reuse may be minimized or completely avoided.
Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102
Petrovici, Mihai A; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
ExaSAT: An exascale co-design tool for performance modeling
Unat, Didem; Chan, Cy; Zhang, Weiqun; ...
2015-02-09
One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework's ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
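To make the idea of a parameterized, data-movement-centric analytic model concrete, the sketch below (not ExaSAT itself; the hardware numbers and kernel counts are placeholders) bounds kernel time by the larger of its compute and memory-traffic estimates:

```python
# Illustrative analytic performance model: time is bounded by compute or by
# data movement, whichever dominates (a roofline-style estimate).
def kernel_time_s(flops, bytes_moved, peak_flops=1e12, mem_bw_bytes=2e11):
    t_compute = flops / peak_flops
    t_memory = bytes_moved / mem_bw_bytes
    return max(t_compute, t_memory)

# Hypothetical stencil sweep on an N^3 box: 8 flops and 3 doubles moved per cell.
N = 256
flops = 8 * N**3
bytes_moved = 3 * 8 * N**3
print(f"predicted time: {kernel_time_s(flops, bytes_moved) * 1e3:.2f} ms")
```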
Extravehicular activity training and hardware design consideration
NASA Technical Reports Server (NTRS)
Thuot, P. J.; Harbaugh, G. J.
1995-01-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
Extravehicular activity training and hardware design consideration.
Thuot, P J; Harbaugh, G J
1995-07-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
NASA Technical Reports Server (NTRS)
1989-01-01
The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. In the first step of this task, a methodology was developed to ensure that all relevant design dimensions were addressed, and that all feasible designs could be considered. The development effort yielded the following method for generating and comparing designs in task 4: (1) Extract SCS system requirements (functions) from the system specification; (2) Develop design evaluation criteria; (3) Identify system architectural dimensions relevant to SCS system designs; (4) Develop conceptual designs based on the system requirements and architectural dimensions identified in step 1 and step 3 above; (5) Evaluate the designs with respect to the design evaluation criteria developed in step 2 above. The results of the method detailed in the above 5 steps are discussed. The task 4 work provides the set of designs from which two or three candidate designs are to be selected by MSFC as input to task 5 (refine SCS conceptual designs). The designs selected for refinement will be developed to a lower level of detail, and further analyses will be done to begin to determine the size and speed of the components required to implement these designs.
Indoor Unmanned Airship System Airborne Control Module Design
NASA Astrophysics Data System (ADS)
YongXia, Gao; YiBo, Li
By adopting the STC12C5A60S2 SCM as the system control unit, assisted by appropriate software and hardware resources, we completed the design of the airborne control module of the unmanned airship system. This paper introduces the structure of the hardware control module, the airship drive configuration, and the software implementation. As verified on the China Science and Technology Museum special-shaped airship, this control module works well.
Burn Resuscitation Decision Support System (BRDSS)
2013-09-01
effective for burn care in the deployed and en route care settings. In this period, we completed Human Factors studies, hardware testing, software design ... designated U.S. Army Institute of Surgical Research (USAISR) clinical team. Phase 1 System Requirements and Software Development: Arcos will draft a ... airworthiness testing. The hardware finalists will be sent to U.S. Army Aeromedical Research Laboratory (USAARL) for critical airworthiness testing. Phase
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
The Hermod Behavioral Synthesis System
1988-06-08
[Figure residue: block diagram of the Hermod behavioral synthesis flow — parser, technology-independent transformation and optimization, and hardware generator producing the datapath and controller, supported by component libraries.]
Tools reference manual for a Requirements Specification Language (RSL), version 2.0
NASA Technical Reports Server (NTRS)
Fisher, Gene L.; Cohen, Gerald C.
1993-01-01
This report describes a general-purpose Requirements Specification Language, RSL. The purpose of RSL is to specify precisely the external structure of a mechanized system and to define requirements that the system must meet. A system can be comprised of a mixture of hardware, software, and human processing elements. RSL is a hybrid of features found in several popular requirements specification languages, such as SADT (Structured Analysis and Design Technique), PSL (Problem Statement Language), and RMF (Requirements Modeling Framework). While languages such as these have useful features for structuring a specification, they generally lack formality. To overcome the deficiencies of informal requirements languages, RSL has constructs for formal mathematical specification. These constructs are similar to those found in formal specification languages such as EHDM (Enhanced Hierarchical Development Methodology), Larch, and OBJ3.
Enhanced sampling techniques in biomolecular simulations.
Spiwok, Vojtech; Sucur, Zoran; Hosek, Petr
2015-11-01
Biomolecular simulations are routinely used in biochemistry and molecular biology research; however, they often fail to match expectations of their impact on pharmaceutical and biotech industry. This is caused by the fact that a vast amount of computer time is required to simulate short episodes from the life of biomolecules. Several approaches have been developed to overcome this obstacle, including application of massively parallel and special purpose computers or non-conventional hardware. Methodological approaches are represented by coarse-grained models and enhanced sampling techniques. These techniques can show how the studied system behaves in long time-scales on the basis of relatively short simulations. This review presents an overview of new simulation approaches, the theory behind enhanced sampling methods and success stories of their applications with a direct impact on biotechnology or drug design. Copyright © 2014 Elsevier Inc. All rights reserved.
Design, Specification, and Synthesis of Aircraft Electric Power Systems Control Logic
NASA Astrophysics Data System (ADS)
Xu, Huan
Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area. This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller. The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real-time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a set placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.
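As an illustration of the kind of requirement such a flow formalizes (these particular formulas are invented for illustration, not taken from the thesis), a safety property and a liveness property for a generator/bus topology might be written in LTL as:

```latex
% Hypothetical electric-power-system requirements expressed in LTL
\square \, \lnot \big( c_{\mathrm{gen1}} \wedge c_{\mathrm{gen2}} \big)
\quad \text{(the two generator contactors onto a bus are never closed together)}
\\[4pt]
\square \, \big( \mathit{gen1\_fail} \rightarrow \lozenge \, c_{\mathrm{apu}} \big)
\quad \text{(if generator 1 fails, the APU contactor eventually closes)}
```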
Non-invasive monitoring of chewing and swallowing for objective quantification of ingestive behavior
Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Makeyev, Oleksandr; Sazonova, Nadezhda; Melanson, Edward L.; Neuman, Michael
2008-01-01
A methodology for studying ingestive behavior by non-invasive monitoring of swallowing (deglutition) and chewing (mastication) has been developed. The target application for the developed methodology is to study the behavioral patterns of food consumption and to produce volumetric and weight estimates of energy intake. Monitoring is non-invasive based on detecting swallowing by a sound sensor located over laryngopharynx or by a bone conduction microphone and detecting chewing through a below-the-ear strain sensor. Proposed sensors may be implemented in a wearable monitoring device, thus enabling monitoring of ingestive behavior in free living individuals. In this paper, the goals in the development of this methodology are two-fold. First, a system comprised of sensors, related hardware and software for multimodal data capture is designed for data collection in a controlled environment. Second, a protocol is developed for manual scoring of chewing and swallowing for use as a gold standard. The multi-modal data capture was tested by measuring chewing and swallowing in twenty one volunteers during periods of food intake and quiet sitting (no food intake). Video footage and sensor signals were manually scored by trained raters. Inter-rater reliability study for three raters conducted on the sample set of 5 subjects resulted in high average intra-class correlation coefficients of 0.996 for bites, 0.988 for chews, and 0.98 for swallows. The collected sensor signals and the resulting manual scores will be used in future research as a gold standard for further assessment of sensor design, development of automatic pattern recognition routines, and study of the relationship between swallowing/chewing and ingestive behavior. PMID:18427161
Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Makeyev, Oleksandr; Sazonova, Nadezhda; Melanson, Edward L; Neuman, Michael
2008-05-01
A methodology for studying ingestive behavior by non-invasive monitoring of swallowing (deglutition) and chewing (mastication) has been developed. The target application for the developed methodology is to study the behavioral patterns of food consumption and to produce volumetric and weight estimates of energy intake. Monitoring is non-invasive based on detecting swallowing by a sound sensor located over laryngopharynx or by a bone-conduction microphone and detecting chewing through a below-the-ear strain sensor. Proposed sensors may be implemented in a wearable monitoring device, thus enabling monitoring of ingestive behavior in free-living individuals. In this paper, the goals in the development of this methodology are two-fold. First, a system comprising sensors, related hardware and software for multi-modal data capture is designed for data collection in a controlled environment. Second, a protocol is developed for manual scoring of chewing and swallowing for use as a gold standard. The multi-modal data capture was tested by measuring chewing and swallowing in 21 volunteers during periods of food intake and quiet sitting (no food intake). Video footage and sensor signals were manually scored by trained raters. Inter-rater reliability study for three raters conducted on the sample set of five subjects resulted in high average intra-class correlation coefficients of 0.996 for bites, 0.988 for chews and 0.98 for swallows. The collected sensor signals and the resulting manual scores will be used in future research as a gold standard for further assessment of sensor design, development of automatic pattern recognition routines and study of the relationship between swallowing/chewing and ingestive behavior.
STRS Compliant FPGA Waveform Development
NASA Technical Reports Server (NTRS)
Nappier, Jennifer; Downey, Joseph; Mortensen, Dale
2008-01-01
The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.
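Purely as an analogue of the proposed hardware abstraction layer (the real artifact would be an HDL interface; the class and method names here are hypothetical, not part of the STRS standard), a common configure/process contract for waveform functions can be sketched as:

```python
# Illustrative Python analogue of a waveform-function interface: every block
# exposes the same configure/process contract so blocks can be chained or
# swapped without touching platform-specific code.
from abc import ABC, abstractmethod

class WaveformFunction(ABC):
    @abstractmethod
    def configure(self, **params): ...
    @abstractmethod
    def process(self, bits): ...

class ToyScrambler(WaveformFunction):
    """Additive scrambler: XOR each bit with the output of a small LFSR."""
    def configure(self, seed=0b1010_1010):
        self.state = seed & 0xFF
    def process(self, bits):
        out = []
        for b in bits:
            fb = (self.state ^ (self.state >> 3)) & 1   # toy feedback taps
            self.state = ((self.state >> 1) | (fb << 7)) & 0xFF
            out.append(b ^ fb)
        return out

stage = ToyScrambler()
stage.configure(seed=0x5A)
print(stage.process([1, 0, 1, 1, 0, 0, 1, 0]))
```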
NASA Technical Reports Server (NTRS)
Stephan, Amy; Erikson, Carol A.
1991-01-01
As an initial attempt to introduce expert system technology into an onboard environment, a model based diagnostic system using the TRW MARPLE software tool was integrated with prototype flight hardware and its corresponding control software. Because this experiment was designed primarily to test the effectiveness of the model based reasoning technique used, the expert system ran on a separate hardware platform, and interactions between the control software and the model based diagnostics were limited. While this project met its objective of showing that model based reasoning can effectively isolate failures in flight hardware, it also identified the need for an integrated development path for expert system and control software for onboard applications. In developing expert systems that are ready for flight, developers must evaluate artificial intelligence techniques to determine whether they offer a real advantage onboard, identify which diagnostic functions should be performed by the expert system and which are better left to the procedural software, and work closely with both the hardware and the control software teams from the beginning of a project to produce a well designed and thoroughly integrated application.
Bringing the Unidata IDV to the Cloud
NASA Astrophysics Data System (ADS)
Fisher, W. I.; Oxelson Ganter, J.
2015-12-01
Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While traditional software engineering provides a suite of tools and methodologies which may mitigate this issue, they are typically ignored by developers lacking a background in software engineering. Causing further problems, these methodologies are best applied at the start of a project; trying to apply them to an existing, mature project can require an immense effort. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software APIs. As a result of these issues, there exists a large body of software which is simultaneously critical to the scientists who are dependent upon it, and yet increasingly difficult to maintain. The solution to this problem was partially provided with the advent of Cloud Computing: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. When coupled with containerization technology such as Docker, we are able to easily bring the same visualization software to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved.
Knowledge-based environment for optical system design
NASA Astrophysics Data System (ADS)
Johnson, R. Barry
1991-01-01
Optical systems are extensively utilized by industry, government, and military organizations. The conceptual design, engineering design, fabrication, and testing of these systems presently requires significant time, typically on the order of 3-5 years. The Knowledge-Based Environment for Optical System Design (KB-OSD) Program has as its principal objectives the development of a methodology and tool(s) that will make a notable reduction in the development time of optical system projects, reduce technical risk, and reduce overall cost. KB-OSD can be considered as a computer-based optical design associate for system engineers and design engineers. By utilizing artificial intelligence technology coupled with extensive design/evaluation computer application programs and knowledge bases, the KB-OSD will provide the user with assistance and guidance to accomplish such activities as (i) develop system-level and hardware-level requirements from mission requirements, (ii) formulate conceptual designs, (iii) construct a statement of work for an RFP, (iv) develop engineering-level designs, (v) evaluate an existing design, and (vi) explore the sensitivity of a system to changing scenarios. The KB-OSD comprises a variety of computer platforms including a Stardent Titan supercomputer, numerous design programs (lens design, coating design, thermal, materials, structural, atmospherics, etc.), data bases, and heuristic knowledge bases. An important element of the KB-OSD Program is the inclusion of the knowledge of individual experts in various areas of optics and optical system engineering. This knowledge is obtained by KB-OSD knowledge engineers performing
A Grounded Theory Study of the Relationship between E-Mail and Burnout
ERIC Educational Resources Information Center
Camargo, Marta Rocha
2008-01-01
Introduction: This study consisted of a qualitative investigation into the role of e-mail in work-related burnout among high technology employees working full time and on-site for Internet, hardware, and software companies. Method: Grounded theory methodology was used to provide a systemic approach in categorising, sorting, and analysing data…
PacRIM II: A review of AirSAR operations and system performance
NASA Technical Reports Server (NTRS)
Moller, D.; Chu, A.; Lou, Y.; Miller, T.; O'Leary, E.
2001-01-01
In this paper we briefly review the AirSAR system, its expected performance, and quality of data obtained during that mission. We discuss the system hardware calibration methodologies, and present quantitative performance values of radar backscatter and interferometric height errors (random and systematic) from PACRIM II calibration data.
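For readers unfamiliar with how interferometric phase noise maps into height error, the sketch below evaluates the standard InSAR sensitivity relation; the wavelength, baseline, range, and look angle are placeholder values, not figures from the PacRIM II campaign:

```python
import math

def height_error_m(sigma_phi_rad, wavelength_m, slant_range_m,
                   look_angle_rad, baseline_perp_m, p=1):
    """Random height error induced by interferometric phase noise.
    p = 1 for standard (single-transmit) mode, p = 2 for ping-pong mode."""
    return (wavelength_m * slant_range_m * math.sin(look_angle_rad) /
            (2.0 * math.pi * p * baseline_perp_m)) * sigma_phi_rad

# Placeholder airborne C-band geometry (illustrative numbers only)
print(height_error_m(sigma_phi_rad=0.1, wavelength_m=0.057,
                     slant_range_m=12000.0, look_angle_rad=math.radians(45),
                     baseline_perp_m=1.9))
```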
Three Corner Sat Communications System
NASA Technical Reports Server (NTRS)
Anderson, Bobby; Horan, Stephen
2000-01-01
Three Corner Satellite is a constellation of three nanosatellites designed and built by students. New Mexico State University has taken on the design of the communications system for this constellation. The system includes the forward link, return link, and the crosslink. Due to size, mass, power, and financial constraints, we must design a small, light, power efficient, and inexpensive communications system. This thesis presents the design of a radio system to accomplish the data transmission requirements in light of the system constraints. In addition to the hardware design, the operational commands needed by the satellite's on-board computer to control and communicate with the communications hardware will be presented. In order for the hardware to communicate with the ground stations, we will examine the link budgets derived from the radiated power of the transmitters, link distance, data modulation, and data rate for each link. The antenna design for the constellation is analyzed using software and by testing the physical antennas on a model satellite. After the analysis and testing, a combination of different systems will meet and exceed the requirements and constraints of the Three Corner Satellite constellation.
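As background for the link-budget analysis mentioned above, a minimal received Eb/N0 calculation can be sketched as follows; all of the numbers are placeholders rather than actual Three Corner Satellite parameters:

```python
import math

def free_space_loss_db(range_km, freq_mhz):
    # Standard free-space path loss in dB, with range in km and frequency in MHz
    return 32.45 + 20 * math.log10(range_km) + 20 * math.log10(freq_mhz)

def eb_n0_db(eirp_dbw, gain_rx_dbi, range_km, freq_mhz,
             losses_db, system_noise_temp_k, bit_rate_bps):
    k_db = -228.6  # Boltzmann's constant in dBW/K/Hz
    return (eirp_dbw + gain_rx_dbi - free_space_loss_db(range_km, freq_mhz)
            - losses_db - k_db - 10 * math.log10(system_noise_temp_k)
            - 10 * math.log10(bit_rate_bps))

# Placeholder downlink: 0 dBW EIRP, 2000 km slant range, 437 MHz, 15 dBi
# ground antenna, 3 dB miscellaneous losses, 600 K system noise, 9600 bps
print(eb_n0_db(eirp_dbw=0.0, gain_rx_dbi=15.0, range_km=2000.0,
               freq_mhz=437.0, losses_db=3.0,
               system_noise_temp_k=600.0, bit_rate_bps=9600.0))
```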
Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup
2009-01-01
For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
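To illustrate the kind of computation being mapped to hardware (a software sketch only, not the authors' implementation), a single-level Haar DWT followed by thresholding of small detail coefficients compresses a signal as follows:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: approximation and detail coefficients.
    Assumes the input length is even."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def compress(x, threshold=0.1):
    """Zero out small detail coefficients (simple lossy compression)."""
    a, d = haar_dwt_level(x)
    d[np.abs(d) < threshold] = 0.0
    return a, d

signal = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.01 * np.random.randn(64)
a, d = compress(signal)
print("nonzero detail coefficients:", np.count_nonzero(d), "of", d.size)
```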
Towards Evolving Electronic Circuits for Autonomous Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris
2000-01-01
The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
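A toy version of such an evolutionary search loop, with a made-up fitness function standing in for circuit simulation (this is not the authors' circuit construction language), might look like:

```python
import random

TARGET_GAIN = 10.0   # hypothetical design goal

def fitness(genome):
    # Stand-in for a circuit simulation: the genome encodes two resistor
    # values of a non-inverting amplifier, whose gain is 1 + r2/r1.
    r1, r2 = genome
    return -abs((1.0 + r2 / r1) - TARGET_GAIN)

def mutate(genome, sigma=0.1):
    return tuple(max(1.0, g * (1.0 + random.gauss(0.0, sigma))) for g in genome)

population = [(random.uniform(1e3, 1e5), random.uniform(1e3, 1e5))
              for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print(best, "gain =", 1.0 + best[1] / best[0])
```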
Nanorobot Hardware Architecture for Medical Defense.
Cavalcanti, Adriano; Shirinzadeh, Bijan; Zhang, Mingjun; Kretly, Luiz C
2008-05-06
This work presents a new approach with details on the integrated platform and hardware architecture for nanorobots application in epidemic control, which should enable real time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices are advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high precision pervasive biomedical monitoring with real time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that can bring nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long distance ubiquitous surveillance and health monitoring for troops in conflict zones. Therefore, the current model can also be used to prevent and save a population against the case of some targeted epidemic disease.
Automatic Thread-Level Parallelization in the Chombo AMR Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christen, Matthias; Keen, Noel; Ligocki, Terry
2011-05-26
The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language provides a ready-made target language for automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
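In its simplest form, the auto-tuning idea amounts to timing a kernel over a set of candidate parameters and keeping the fastest; the stand-in example below tunes a block size for a blocked array sweep (it is not ChomboFortran):

```python
import time
import numpy as np

def blocked_update(a, block):
    """Simple blocked sweep standing in for a stencil kernel."""
    n = a.shape[0]
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            a[i0:i0 + block, j0:j0 + block] *= 1.0001
    return a

def autotune(n=2048, candidates=(16, 32, 64, 128, 256)):
    """Time each candidate block size and return the fastest."""
    a = np.ones((n, n))
    best = None
    for block in candidates:
        t0 = time.perf_counter()
        blocked_update(a, block)
        elapsed = time.perf_counter() - t0
        if best is None or elapsed < best[1]:
            best = (block, elapsed)
    return best

print("best block size and time:", autotune())
```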
Kasabov, Nikola; Scott, Nathan Matthew; Tu, Enmei; Marks, Stefan; Sengupta, Neelava; Capecci, Elisa; Othman, Muhaini; Doborjeh, Maryam Gholami; Murli, Norhanifah; Hartono, Reggio; Espinosa-Ramos, Josafath Israel; Zhou, Lei; Alvi, Fahad Bashir; Wang, Grace; Taylor, Denise; Feigin, Valery; Gulyaev, Sergei; Mahmoud, Mahmoud; Hou, Zeng-Guang; Yang, Jie
2016-06-01
The paper describes a new type of evolving connectionist systems (ECOS) called evolving spatio-temporal data machines based on neuromorphic, brain-like information processing principles (eSTDM). These are multi-modular computer systems designed to deal with large and fast spatio/spectro temporal data using spiking neural networks (SNN) as major processing modules. ECOS and eSTDM in particular can learn incrementally from data streams, can include 'on the fly' new input variables, new output class labels or regression outputs, can continuously adapt their structure and functionality, can be visualised and interpreted for new knowledge discovery and for a better understanding of the data and the processes that generated it. eSTDM can be used for early event prediction due to the ability of the SNN to spike early, before whole input vectors (they were trained on) are presented. A framework for building eSTDM called NeuCube along with a design methodology for building eSTDM using this is presented. The implementation of this framework in MATLAB, Java, and PyNN (Python) is presented. The latter facilitates the use of neuromorphic hardware platforms to run the eSTDM. Selected examples are given of eSTDM for pattern recognition and early event prediction on EEG data, fMRI data, multisensory seismic data, ecological data, climate data, audio-visual data. Future directions are discussed, including extension of the NeuCube framework for building neurogenetic eSTDM and also new applications of eSTDM. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automatic Digital Hardware Synthesis
1990-09-01
VHDL to PALASM, a hardware synthesis language. The PALASM description is then directly implemented into a field programmable gate array (FPGA) using ... the process of translating VHDL to PALASM ... allows the engineer to use VHDL to create and validate a design, and then to implement it in a gate array. The development of software to translate VHDL
Hardware Evolution of Analog Speed Controllers for a DC Motor
NASA Technical Reports Server (NTRS)
Gwaltney, David A.; Ferguson, Michael I.
2003-01-01
Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a Field Programmable Transistor Array (FPTA). The performance of these evolved controllers is compared to that of a conventional proportional-integral (PI) controller.
Framework for Development and Distribution of Hardware Acceleration
NASA Astrophysics Data System (ADS)
Thomas, David B.; Luk, Wayne W.
2002-07-01
This paper describes IGOL, a framework for developing reconfigurable data processing applications. While IGOL was originally designed to target imaging and graphics systems, its structure is sufficiently general to support a broad range of applications. IGOL adopts a four-layer architecture: application layer, operation layer, appliance layer and configuration layer. This architecture is intended to separate and co-ordinate both the development and execution of hardware and software components. Hardware developers can use IGOL as an instance testbed for verification and benchmarking, as well as for distribution. Software application developers can use IGOL to discover hardware accelerated data processors, and to access them in a transparent, non-hardware specific manner. IGOL provides extensive support for the RC1000-PP board via the Handel-C language, and a wide selection of image processing filters have been developed. IGOL also supplies plug-ins to enable such filters to be incorporated in popular applications such as Premiere, Winamp, VirtualDub and DirectShow. Moreover, IGOL allows the automatic use of multiple cards to accelerate an application, demonstrated using DirectShow. To enable transparent acceleration without sacrificing performance, a three-tiered COM (Component Object Model) API has been designed and implemented. This API provides a well-defined and extensible interface which facilitates the development of hardware data processors that can accelerate multiple applications.
NASA Astrophysics Data System (ADS)
Ketkar, Supriya; Lee, Junhan; Asokamani, Sen; Cho, Winston; Mishra, Shailendra
2018-03-01
This paper discusses the approach and solution adopted by GLOBALFOUNDRIES, a high volume manufacturing (HVM) foundry, for the dry-etch-related edge-signature surface particle defect issue facing the sub-nm node in the gate-etch sector. It is one of the highest die killers for the company at the 14-nm node. We have used different approaches to attack and rectify the edge-signature surface particle defect. Several process-related and hardware changes have been implemented in succession, achieving a 63% reduction in defects. Each systematic process and/or hardware change has its own unique downstream issues, and these have been addressed using a root-cause-and-effect technique.
A pluggable framework for parallel pairwise sequence search.
Archuleta, Jeremy; Feng, Wu-chun; Tilevich, Eli
2007-01-01
The current and near future of the computing industry is one of multi-core and multi-processor technology. Most existing sequence-search tools have been designed with a focus on single-core, single-processor systems. This discrepancy between software design and hardware architecture substantially hinders sequence-search performance by not allowing full utilization of the hardware. This paper presents a novel framework that will aid the conversion of serial sequence-search tools into a parallel version that can take full advantage of the available hardware. The framework, which is based on a software architecture called mixin layers with refined roles, enables modules to be plugged into the framework with minimal effort. The inherent modular design improves maintenance and extensibility, thus opening up a plethora of opportunities for advanced algorithmic features to be developed and incorporated while routine maintenance of the codebase persists.
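The "mixin layers" idea maps naturally onto C++ class templates parameterised by their base class; the sketch below is a generic illustration of that idiom (the names and layers are invented for the example, not taken from the authors' framework):

// Mixin-layer idiom: each layer is a class template parameterised by the
// layer below it, so features such as parallel scheduling can be plugged in
// around a core sequence-search role with minimal changes to existing code.
#include <cstdio>
#include <string>

struct CoreSearch {                       // bottom layer: serial search role
    void search(const std::string& query) {
        std::printf("serial search for %s\n", query.c_str());
    }
};

template <class Lower>
struct ParallelScheduler : Lower {        // mixin layer: refines the search role
    void search(const std::string& query) {
        std::printf("splitting query across workers\n");
        Lower::search(query);             // delegate to the layer below
    }
};

template <class Lower>
struct Logging : Lower {                  // another pluggable layer
    void search(const std::string& query) {
        std::printf("log: search started\n");
        Lower::search(query);
    }
};

int main() {
    // Compose the stack by nesting layers; swapping layers requires no edits
    // to the core role.
    Logging<ParallelScheduler<CoreSearch>> tool;
    tool.search("ACGTACGT");
    return 0;
}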
Automated control and data acquisition for a tunable diode laser heterodyne spectrometer
NASA Technical Reports Server (NTRS)
Shull, T. S.; Rinsland, P. L.
1983-01-01
This paper describes the hardware and software design, development, and implementation of the control and data electronics of a laser heterodyne spectrometer instrument being built at NASA Langley Research Center for a technology demonstration. Functional partitioning, applied at all levels of hardware and software, has been found to provide expedient design, development, and testing of the instrument. The instrument is composed of distributed microprocessor-based units. A master/slave protocol is presented which can be simulated by a terminal for unit checkout. All but one of the units are implemented using a set of core boards, plus unique boards where necessary. This design has led to reduced hardware development, reduced parts inventory, and replication of software modules, while providing the flexibility needed for a development instrument. The development tools and documentation guidelines are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moro, Erik A.
Optical fiber sensors offer advantages over traditional electromechanical sensors, making them particularly well-suited for certain measurement applications. Generally speaking, optical fiber sensors respond to a desired measurand through modulation of an optical signal's intensity, phase, or wavelength. Practically, non-contacting fiber optic displacement sensors are limited to intensity-modulated and interferometric (or phase-modulated) methodologies. Intensity-modulated fiber optic displacement sensors relate target displacement to a power measurement. The simplest intensity-modulated sensor architectures are not robust to environmental and hardware fluctuations, since such variability may cause changes in the measured power level that falsely indicate target displacement. Differential intensity-modulated sensors have been implemented, offering robustness to such intensity fluctuations, and the speed of these sensors is limited only by the combined speed of the photodetection hardware and the data acquisition system (kHz-MHz). The primary disadvantages of intensity-modulated sensing are the relatively low accuracy (μm-mm for low-power sensors) and the lack of robustness, which consequently must be designed, often with great difficulty, into the sensor's architecture. White light interferometric displacement sensors, on the other hand, offer increased accuracy and robustness. Unlike their monochromatic-interferometer counterparts, white light interferometric sensors offer absolute, unambiguous displacement measurements over large displacement ranges (cm for low-power, 5 mW sources), necessitating no initial calibration and requiring no environmental or feedback control. The primary disadvantage of white light interferometric displacement sensors is that their utility in dynamic testing scenarios is limited, both by hardware bandwidth and by their inherent high sensitivity to Doppler effects. The decision of whether to use an intensity-modulated or an interferometric sensor depends on an appropriate performance function (e.g., desired displacement range, accuracy, robustness, etc.). In this dissertation, the performance limitations of a bundled differential intensity-modulated displacement sensor are analyzed, where the bundling configuration has been designed to optimize performance. The performance limitations of a white light Fabry-Perot displacement sensor are also analyzed. Both of these sensors are non-contacting, but they have access to different regions of the performance-space. Further, both sensors have different degrees of sensitivity to experimental uncertainty. Made in conjunction with careful analysis, the decision of which sensor to deploy need not be an uninformed one.
21 CFR 882.1440 - Neuropsychiatric interpretive electroencephalograph assessment aid.
Code of Federal Regulations, 2014 CFR
2014-04-01
... described in detail in the software requirements specification and software design specification... the device, hardware and software, must be fully characterized and must demonstrate a reasonable assurance of safety and effectiveness. (i) Hardware specifications must be provided. Appropriate...
VME rollback hardware for time warp multiprocessor systems
NASA Technical Reports Server (NTRS)
Robb, Michael J.; Buzzell, Calvin A.
1992-01-01
The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.
Design and implementation of encrypted and decrypted file system based on USBKey and hardware code
NASA Astrophysics Data System (ADS)
Wu, Kehe; Zhang, Yakun; Cui, Wenchao; Jiang, Ting
2017-05-01
To protect the privacy of sensitive data, an encrypted and decrypted file system based on USBKey and hardware code is designed and implemented in this paper. This system uses USBKey and hardware code to authenticate a user. We use random key to encrypt file with symmetric encryption algorithm and USBKey to encrypt random key with asymmetric encryption algorithm. At the same time, we use the MD5 algorithm to calculate the hash of file to verify its integrity. Experiment results show that large files can be encrypted and decrypted in a very short time. The system has high efficiency and ensures the security of documents.
Requirements analysis for a hardware, discrete-event, simulation engine accelerator
NASA Astrophysics Data System (ADS)
Taylor, Paul J., Jr.
1991-12-01
An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description and Design Language (VHDL), of a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
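To make the bottleneck concrete, the sketch below shows the event-queue core of a simple discrete-event simulation in C++; it is this queue management and event exchange, not the toy model shown, that the proposed coprocessor would offload (a generic illustration, not the VHDL behavioral model from the report):

// Event-queue core of a discrete-event simulation: events are processed in
// timestamp order, and processed events may schedule future ones.
#include <cstdio>
#include <queue>
#include <vector>

struct Event {
    double time;
    int node;          // logical process that receives the event
};

struct Later {         // min-heap ordering on timestamp
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, Later> pending;
    pending.push({1.0, 0});
    pending.push({0.5, 1});

    double now = 0.0;
    while (!pending.empty() && now < 10.0) {
        Event e = pending.top();
        pending.pop();
        now = e.time;                       // advance simulated time in order
        std::printf("t=%.2f: event at node %d\n", now, e.node);
        if (e.node == 0)                    // toy model: node 0 schedules a future event
            pending.push({now + 2.0, 1});
    }
    return 0;
}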
Language Classification using N-grams Accelerated by FPGA-based Bloom Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, A; Gokhale, M
N-gram (n-character sequences in text documents) counting is a well-established technique for classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs at 85x the speed of comparable software and 1.45x that of the competing hardware design.
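A software sketch of the underlying technique, Bloom-filter membership tests over character n-grams, is shown below; the hash functions, sizes, and training text are illustrative, and the paper's implementation realises the filters in on-chip FPGA memory rather than software:

// Bloom-filter-based n-gram membership testing: n-grams of a training corpus
// are hashed into a bit array; a document is scored by how many of its
// n-grams the filter (possibly) contains. Bloom filters give no false negatives.
#include <bitset>
#include <cstdio>
#include <functional>
#include <string>

class BloomFilter {
    static const size_t kBits = 1 << 16;
    std::bitset<kBits> bits_;
    size_t hash(const std::string& s, size_t seed) const {
        return std::hash<std::string>{}(s + char('a' + seed)) % kBits;  // simple salted hash
    }
public:
    void add(const std::string& s) {
        for (size_t k = 0; k < 3; ++k) bits_.set(hash(s, k));
    }
    bool maybeContains(const std::string& s) const {
        for (size_t k = 0; k < 3; ++k)
            if (!bits_.test(hash(s, k))) return false;
        return true;
    }
};

int main() {
    const size_t n = 4;                                  // 4-grams
    BloomFilter english;
    std::string training = "the quick brown fox jumps over the lazy dog";
    for (size_t i = 0; i + n <= training.size(); ++i)
        english.add(training.substr(i, n));

    // Score a document by the fraction of its n-grams found in the filter.
    std::string doc = "the lazy fox";
    size_t hits = 0, total = 0;
    for (size_t i = 0; i + n <= doc.size(); ++i, ++total)
        hits += english.maybeContains(doc.substr(i, n));
    std::printf("English n-gram hit rate: %zu/%zu\n", hits, total);
    return 0;
}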
Conceptual design of two-phase fluid mechanics and heat transfer facility for spacelab
NASA Technical Reports Server (NTRS)
North, B. F.; Hill, M. E.
1980-01-01
Five specific experiments were analyzed to provide definition of experiments designed to evaluate two-phase fluid behavior in low gravity. The conceptual design represents a fluid mechanics and heat transfer facility for a double rack in Spacelab. The five experiments are two-phase flow patterns and pressure drop, flow boiling, liquid reorientation, and interface bubble dynamics. Hardware was sized, instrumentation and data recording requirements were defined, and the five experiments were installed as an integrated experimental package. Applicable available hardware was selected in the experiment design, and total experiment program costs were defined.
A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot-end and cold-end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed to provide a more accurate value for net heat input into the ASCs, including the use of multi-dimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and to validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model, which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
Multicore Architectures for Multiple Independent Levels of Security Applications
2012-09-01
This work aims to bolster the MILS effort. However, current MILS operating systems are not designed for multi-core platforms and do not have the hardware support to ensure separation. MILS seeks to allow the availability of information at different security classification levels while increasing the overall security of the computing system.
Demonstration of spectral calibration for stellar interferometry
NASA Technical Reports Server (NTRS)
Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun
2006-01-01
A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.
The design of flight hardware: Organizational and technical ideas from the MITRE/WPI Shuttle Program
NASA Technical Reports Server (NTRS)
Looft, F. J.
1986-01-01
The MITRE Corporation of Bedford, Mass., and the Worcester Polytechnic Institute are developing several experiments for a future Shuttle flight. Several design practices for the development of the electrical equipment for the flight hardware have been standardized. Some of these ideas are presented, not as hard and fast rules, but in the interest of stimulating discussion and sharing such ideas.
Development of hardwares and computer interface for a two-degree-of-freedom robot
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Pooran, Farhad J.
1987-01-01
The research results that were obtained are reviewed. Then the robot actuator, the selection of the data acquisition system, and the design of the power amplifier will be discussed. The machine design of the robot manipulator will then be presented. After that, the integration of the developed hardware into the open-loop system will also be discussed. Current and future research work is addressed.
Kant, Nasir Ali; Dar, Mohamad Rafiq; Khanday, Farooq Ahmad
2015-01-01
The output of every neuron in a neural network is specified by the employed activation function (AF), which therefore forms the heart of the network. As far as the design of artificial neural networks (ANNs) is concerned, a hardware approach is preferred over a software one because it promises full utilization of the application potential of ANNs. Therefore, besides some arithmetic blocks, designing the AF in hardware is the most important task in designing an ANN. While attempting to design the AF in hardware, the designs should be compatible with modern Very Large Scale Integration (VLSI) design techniques. In this regard, the implemented designs should be realized only in Metal Oxide Semiconductor (MOS) technology in order to be compatible with digital designs, provide an electronic tunability feature, and be able to operate at ultra-low voltage. Companding is one of the promising circuit design techniques for achieving these goals. In this paper, a 0.5 V design of Liao's AF using the sinh-domain technique is introduced. Furthermore, the function is tested by implementing an inertial neuron model. The performance of the AF and the inertial neuron model has been evaluated through simulation results using the PSPICE software with MOS transistor models provided by the 0.18-μm Taiwan Semiconductor Manufacturing Company (TSMC) CMOS process.
Event and Pulse Node Hardware Design for Nuclear Fusion Experiments
NASA Astrophysics Data System (ADS)
Fortunato, J. C.; Batista, A.; Sousa, J.; Fernandes, H.; Varandas, C. A. F.
2008-04-01
This article presents an event and pulse node (EPN) hardware module developed for use in control and data acquisition (CODAC) in current and upcoming long-discharge nuclear fusion experiments. Its purpose is to allow real-time event management and trigger distribution. The use of a mixture of digital signal processing and field programmable gate arrays, with fiber optic channels for event broadcast between CODAC nodes and short paths between the EPN and CODAC hardware, allows an effective and low-latency communication path. This hardware will be integrated in the ISTTOK CODAC to allow long AC plasma discharges.
The JPL telerobot operator control station. Part 1: Hardware
NASA Technical Reports Server (NTRS)
Kan, Edwin P.; Tower, John T.; Hunka, George W.; Vansant, Glenn J.
1989-01-01
The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The hardware design, system architecture, and its integration and interface with the rest of the Telerobot Demonstrator System are discussed.
Design of a Representative Low Earth Orbit Satellite to Improve Existing Debris Models
NASA Technical Reports Server (NTRS)
Clark, S.; Dietrich, A.; Werremeyer, M.; Fitz-Coy, N.; Liou, J.-C.
2012-01-01
This paper summarizes the process and methodologies used in the design of a small satellite, DebriSat, that represents materials and construction methods used in modern-day Low Earth Orbit (LEO) satellites. This satellite will be used in a future hypervelocity impact test with the overall purpose of investigating the physical characteristics of modern LEO satellites after an on-orbit collision. The major ground-based satellite impact experiment used by DoD and NASA in their development of satellite breakup models was conducted in 1992. The target used for that experiment was a Navy Transit satellite (40 cm, 35 kg) fabricated in the 1960s. Modern satellites are very different in materials and construction techniques from a satellite built 40 years ago. Therefore, there is a need to conduct a similar experiment using a modern target satellite to improve the fidelity of the satellite breakup models. The design of DebriSat will focus on designing and building a next-generation satellite to more accurately portray modern satellites. The design of DebriSat included a comprehensive study of historical LEO satellite designs and missions within the past 15 years, for satellites ranging from 10 kg to 5000 kg. This study identified modern trends in hardware, material, and construction practices utilized in recent LEO missions, and helped direct the design of DebriSat.
Use of Field Programmable Gate Array Technology in Future Space Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Tate, Robert
2005-01-01
Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible, and fault-tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density, and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provide the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised by component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.
NASA Technical Reports Server (NTRS)
Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.; Guo, X.
2002-01-01
This paper comments on some directions of growth for evolvable hardware, proposes research directions that address the scalability problem and gives examples of results in novel areas approached by EHW.
Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Nelson, Austin A; Prabakar, Kumaraguru
As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time simulators and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a ruin-and-reconstruct methodology that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS and subsequently implemented on the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the feeders could be analyzed.
Exploiting current-generation graphics hardware for synthetic-scene generation
NASA Astrophysics Data System (ADS)
Tanner, Michael A.; Keen, Wayne A.
2010-04-01
Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires an algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented, including various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA), and Open Computing Language (OpenCL), with performance trades and possible pathways for future tool development.
Open source hardware and software platform for robotics and artificial intelligence applications
NASA Astrophysics Data System (ADS)
Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli
2016-02-01
Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote-control car. The combination of open source hardware and software created a flexible and expandable platform for further developments in the future, both in software and hardware, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware such as legged or humanoid robots.
Medical evaluations on the KC-135 1990 flight report summary
NASA Technical Reports Server (NTRS)
Lloyd, Charles W.; Guess, Terrell M.; Whiting, Charles W.; Doarn, Charles R.
1991-01-01
The medical investigations completed on the KC-135 during FY 1990 in support of the development of the Health Maintenance Facility and Medical Operations are discussed. The experiments are comprised of engineering evaluations of medical hardware and medical procedures. The investigating teams are made up of both medical and engineering personnel responsible for the development of medical hardware and medical operations. The hardware evaluated includes dental equipment, a coagulation analyzer, selected pharmaceutical aerosol devices, a prototype air/fluid separator, a prototype packaging and stowage system for medical supplies, a microliter metering system, and a workstation for minor surgical procedures. The results of these engineering evaluations will be used in the design of fleet hardware as well as to identify hardware specific training requirements.
MSFC Skylab corollary experiment systems mission evaluation
NASA Technical Reports Server (NTRS)
1974-01-01
Evaluations are presented of the performances of corollary experiment hardware developed by the George C. Marshall Space Flight Center and operated during the three manned Skylab missions. Also presented are assessments of the functional adequacy of the experiment hardware and its supporting systems, and indications are given as to the degrees by which experiment constraints and interfaces were met. It is shown that most of the corollary experiment hardware performed satisfactorily and within design specifications.
Development of robotics facility docking test hardware
NASA Technical Reports Server (NTRS)
Loughead, T. E.; Winkler, R. V.
1984-01-01
The design and fabrication of test hardware for NASA's George C. Marshall Space Flight Center (MSFC) are reported. A conceptually developed docking device was fabricated. Two docking targets providing high and low mass docking loads were required; these were represented by an aft 61.0 cm section of a Hubble Space Telescope (ST) mockup and an upgraded version of an existing multimission modular spacecraft (MMS) mockup, respectively. A test plan for testing the hardware is also developed.
Electrically Driven Single Phase Thermal Management: STP-H5 EHD Experiment
NASA Technical Reports Server (NTRS)
Didion, Jeffrey R.
2016-01-01
The Electrically Driven Single Phase Thermal Management: STP-H5 EHD Experiment is a technology demonstration of prototype proof-of-concept hardware intended to establish its feasibility and long-term operation. The experiment is a structural thermal plate that will operate continuously as part of the STP-H5 ISEM experiment for up to 18 months. This presentation discusses the design, fabrication, and environmental and operational parameters of the experiment hardware.
Human Integration Design Processes (HIDP)
NASA Technical Reports Server (NTRS)
Boyer, Jennifer
2014-01-01
The purpose of the Human Integration Design Processes (HIDP) document is to provide human-systems integration design processes, including methodologies and best practices that NASA has used to meet human systems and human rating requirements for developing crewed spacecraft. HIDP content is framed around human-centered design methodologies and processes in support of human-system integration requirements and human rating. NASA-STD-3001, Space Flight Human-System Standard, is a two-volume set of National Aeronautics and Space Administration (NASA) Agency-level standards established by the Office of the Chief Health and Medical Officer, directed at minimizing health and performance risks for flight crews in human space flight programs. Volume 1 of NASA-STD-3001, Crew Health, sets standards for fitness for duty, space flight permissible exposure limits, permissible outcome limits, levels of medical care, medical diagnosis, intervention, treatment and care, and countermeasures. Volume 2 of NASA-STD-3001, Human Factors, Habitability, and Environmental Health, focuses on human physical and cognitive capabilities and limitations and defines standards for spacecraft (including orbiters, habitats, and suits), internal environments, facilities, payloads, and related equipment, hardware, and software with which the crew interfaces during space operations. The NASA Procedural Requirements (NPR) 8705.2B, Human-Rating Requirements for Space Systems, specifies the Agency's human-rating processes, procedures, and requirements. The HIDP was written to share NASA's knowledge of processes directed toward achieving human certification of a spacecraft through implementation of human-systems integration requirements. Although the HIDP speaks directly to implementation of NASA-STD-3001 and NPR 8705.2B requirements, the human-centered design, evaluation, and design processes described in this document can be applied to any set of human-systems requirements and are independent of reference missions. The HIDP is a reference document that is intended to be used during the development of crewed space systems and operations to guide human-systems development process activities.
NASA Technical Reports Server (NTRS)
Shubert, W. C.
1973-01-01
Transportation requirements are considered during the engine design layout reviews and maintenance engineering analyses. Where designs cannot be influenced to avoid transportation problems, the transportation representative is advised of the problems permitting remedies early in the program. The transportation representative will monitor and be involved in the shipment of development engine and GSE hardware between FRDC and vehicle manufacturing plant and thereby will be provided an early evaluation of the transportation plans, methods and procedures to be used in the space tug support program. Unanticipated problems discovered in the shipment of development hardware will be known early enough to permit changes in packaging designs and transportation plans before the start of production hardware and engine shipments. All conventional transport media can be used for the movement of space tug engines. However, truck transport is recommended for ready availability, variety of routes, short transit time, and low cost.
Reconfigurable Hardware for Compressing Hyperspectral Image Data
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua
2010-01-01
High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of the FPGAs makes it possible to effectively alter the design to some extent to satisfy different requirements without adding hardware. The implementation could be easily propagated to future FPGA generations and/or to custom application-specific integrated circuits.
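As a stand-in illustration of the wavelet-transform module's role, the sketch below applies a one-level 1D Haar transform in C++; ICER-3D itself uses different (reversible, three-dimensional) wavelet filters, so this is only a conceptual example of how a transform concentrates information for later entropy coding:

// One-level 1D Haar transform: splits a signal into a coarse approximation
// and small detail coefficients that compress well under entropy coding.
#include <cstdio>
#include <vector>

void haar1d(const std::vector<double>& in, std::vector<double>& low, std::vector<double>& high) {
    low.clear(); high.clear();
    for (size_t i = 0; i + 1 < in.size(); i += 2) {
        low.push_back((in[i] + in[i + 1]) / 2.0);   // coarse approximation
        high.push_back((in[i] - in[i + 1]) / 2.0);  // detail coefficients
    }
}

int main() {
    std::vector<double> band = {10, 12, 11, 13, 50, 52, 9, 11};
    std::vector<double> low, high;
    haar1d(band, low, high);
    for (double d : high) std::printf("%.1f ", d);   // mostly small values
    std::printf("\n");
    return 0;
}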
Space Station Freedom - Approaching the critical design phase
NASA Technical Reports Server (NTRS)
Kohrs, Richard H.; Huckins, Earle, III
1992-01-01
The status and future developments of the Space Station Freedom are discussed. To date detailed design drawings are being produced to manufacture SSF hardware. A critical design review (CDR) for the man-tended capability configuration is planned to be performed in 1993 under the SSF program. The main objective of the CDR is to enable the program to make a full commitment to proceed to manufacture parts and assemblies. NASA recently signed a contract with the Russian space company, NPO Energia, to evaluate potential applications of various Russian space hardware for on-going NASA programs.
Use of CCSDS Packets Over SpaceWire to Control Hardware
NASA Technical Reports Server (NTRS)
Haddad, Omar; Blau, Michael; Haghani, Noosha; Yuknis, William; Albaijes, Dennis
2012-01-01
For the Lunar Reconnaissance Orbiter, the Command and Data Handling subsystem consisted of several electronic hardware assemblies that were connected with SpaceWire serial links. Electronic hardware would be commanded/controlled and telemetry data was obtained using the SpaceWire links. Prior art focused on parallel data buses and other types of serial buses, which were not compatible with SpaceWire and the core flight executive (CFE) software bus. This innovation applies to anything that utilizes both SpaceWire networks and the CFE software. The CCSDS (Consultative Committee for Space Data Systems) packet contains predetermined values in its payload fields that electronic hardware attached at the terminus of the SpaceWire node would decode, interpret, and execute. The hardware's interpretation of the packet data would enable the hardware to change its state/configuration (command) or generate status (telemetry). The primary purpose is to provide an interface that is compatible with the hardware and the CFE software bus. By specifying the format of the CCSDS packet, it is possible to specify how the resulting hardware is to be built (in terms of digital logic), which results in a hardware design that can be controlled by the CFE software bus in the final application.
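The sketch below decodes the generic 6-byte CCSDS space packet primary header in C++; the mission-specific payload fields that carried the actual command and telemetry values are not shown, and the example bytes are invented:

// Decoding the standard CCSDS space packet primary header (6 bytes):
// version (3 bits), type (1), secondary header flag (1), APID (11),
// sequence flags (2), sequence count (14), packet data length (16).
#include <cstdint>
#include <cstdio>

struct CcsdsPrimaryHeader {
    uint8_t  version;        // 3 bits
    uint8_t  type;           // 1 bit: 0 = telemetry, 1 = telecommand
    uint8_t  secHdrFlag;     // 1 bit
    uint16_t apid;           // 11 bits: application process ID
    uint8_t  seqFlags;       // 2 bits
    uint16_t seqCount;       // 14 bits
    uint16_t dataLength;     // packet data field length minus one
};

CcsdsPrimaryHeader decode(const uint8_t b[6]) {
    CcsdsPrimaryHeader h;
    h.version    = b[0] >> 5;
    h.type       = (b[0] >> 4) & 0x1;
    h.secHdrFlag = (b[0] >> 3) & 0x1;
    h.apid       = static_cast<uint16_t>(((b[0] & 0x07) << 8) | b[1]);
    h.seqFlags   = b[2] >> 6;
    h.seqCount   = static_cast<uint16_t>(((b[2] & 0x3F) << 8) | b[3]);
    h.dataLength = static_cast<uint16_t>((b[4] << 8) | b[5]);
    return h;
}

int main() {
    const uint8_t raw[6] = {0x18, 0x2A, 0xC0, 0x01, 0x00, 0x07};  // example bytes
    CcsdsPrimaryHeader h = decode(raw);
    std::printf("type=%u apid=0x%03X seq=%u len=%u\n", h.type, h.apid, h.seqCount, h.dataLength);
    return 0;
}

In the scheme described above, digital logic at the SpaceWire node would perform the equivalent field extraction and act on the agreed-upon payload values.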
Cobalt: A GPU-based correlator and beamformer for LOFAR
NASA Astrophysics Data System (ADS)
Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.
2018-04-01
For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.
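As a minimal illustration of the correlation ("X") step such a correlator performs, the C++ sketch below accumulates the product of one station's samples with the complex conjugate of another's for a single frequency channel; Cobalt performs this on GPUs across all station pairs and channels, and the sample values here are invented:

// Correlation step for one station pair and one channel: the time-averaged
// product x[t] * conj(y[t]) yields one visibility sample.
#include <complex>
#include <cstdio>
#include <vector>

using cf = std::complex<float>;

cf correlate(const std::vector<cf>& x, const std::vector<cf>& y) {
    cf acc(0.0f, 0.0f);
    for (size_t t = 0; t < x.size() && t < y.size(); ++t)
        acc += x[t] * std::conj(y[t]);          // accumulate over the integration time
    return acc / static_cast<float>(x.size());  // normalise by the number of samples
}

int main() {
    std::vector<cf> station0 = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};
    std::vector<cf> station1 = {{0, 1}, {-1, 0}, {0, -1}, {1, 0}};  // same signal, phase-shifted
    cf vis = correlate(station0, station1);
    std::printf("visibility = (%.2f, %.2f)\n", vis.real(), vis.imag());
    return 0;
}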
Hardware Trojans - Prevention, Detection, Countermeasures (A Literature Review)
2011-07-01
Contents include hardware Trojan insertion phase and location, hardware Trojan actions, trigger design issues, and prevention. The specification is then realised into specific target technologies with consideration of functional and ...
Fastener Retention Requirements and Practices in Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Dasgupta, Rajib
2004-01-01
This presentation reviews the requirements for safety-critical fasteners in spaceflight hardware. Included in the presentation are design guidelines and information for locking Helicoils, key-locked inserts and thin-walled inserts, self-locking screws and bolts, locknuts, and locking adhesives (Loctite and Vibratite).
AFOSR BRI: Co-Design of Hardware/Software for Predicting MAV Aerodynamics
2016-09-27
While Moore's Law theoretically doubles processor performance every 24 months, much of the realizable performance remains ... Past efforts to develop such CFD codes on accelerated processors showed limited success; our hardware/software co-design approach created malleable ...