Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
NASA Technical Reports Server (NTRS)
Daelemans, Gerard; Goldsmith, Theodore
1999-01-01
The NASA/GSFC Shuttle Small Payloads Projects Office (SSPPO) has been studying the feasibility of migrating Hitchhiker customers, past, present, and future, to the International Space Station (ISS) via a "Hitchhiker-like" carrier system. SSPPO has been tasked to make the most use of existing hardware and software systems and infrastructure in its study of an ISS-based carrier system. This paper summarizes the results of the SSPPO Hitchhiker on ISS study. Included are a number of "Hitchhiker-like" carrier system concepts that take advantage of the various ISS attached-payload accommodation sites. Emphasis will be given to a HH concept that attaches to the Japanese Experiment Module - Exposed Facility (JEM-EF).
Migrating the STARLINK Network from VMS to Unix
NASA Astrophysics Data System (ADS)
Clayton, C.
The Starlink Project is a UK-wide astronomical computing service consisting of a network of computers used by UK astronomers at over 25 sites, a collection of software to calibrate and analyze astronomical data, and a team of people to give hardware, software, and administrative support. In order to exploit the most cost-effective hardware and to maintain compatibility with the international community, Starlink is migrating from an entirely VAX/VMS based service to UNIX-based systems. This migration is almost complete, and this paper describes some of the solutions adopted for the wide variety of problems which were encountered. Migration of the hardware platform is discussed first. Equipment which can be re-used under Unix is identified. System software and non-astronomical applications which are required to allow a smooth transition from VMS to Unix are considered next. While many VMS functions can be replaced with Unix equivalents, it has become apparent that there is a small number of key VMS applications which must be provided on the replacement Unix platform to avoid considerable disruption to users. Various strategies for moving the users themselves from VMS to UNIX are considered and their relative merits compared. Fast migration routes are considered to be more effective as long as certain key applications and user aids are already in place. The porting of the Starlink Software Collection is discussed, as is the problem of migrating large quantities of private user code.
47 CFR 400.7 - Eligible uses for grant funds.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...
47 CFR 400.7 - Eligible uses for grant funds.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...
47 CFR 400.7 - Eligible uses for grant funds.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...
47 CFR 400.7 - Eligible uses for grant funds.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...
47 CFR 400.7 - Eligible uses for grant funds.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2015-04-01
PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling as the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement, either processors are divided out among the newly generated tasks (number of new tasks < number of processors) or tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc, Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science, Volume 5759, 185-195.
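The two-way distribution rule quoted above (processors divided among new tasks when tasks are fewer; tasks divided among processors otherwise) is concrete enough to illustrate. Below is a minimal sketch in Python; the function name and round-robin tie-breaking are invented for illustration and are not part of PM itself.

```python
# Sketch of PM's nested-parallelism distribution rule (illustrative only).

def distribute(processors, num_tasks):
    """Apply the rule from the abstract: divide processors among tasks
    when tasks are fewer, otherwise divide tasks among processors."""
    p = len(processors)
    if num_tasks < p:
        # Partition the processor set (roughly evenly) among the tasks;
        # each task may later subdivide its share at nested statements.
        base, extra = divmod(p, num_tasks)
        shares, i = [], 0
        for t in range(num_tasks):
            n = base + (1 if t < extra else 0)
            shares.append(processors[i:i + n])
            i += n
        return shares            # shares[t] = processors owned by task t
    # More tasks than processors: deal tasks out to processors
    # (round-robin here; PM permits uneven, programmer-controlled splits).
    owned = [[] for _ in range(p)]
    for t in range(num_tasks):
        owned[t % p].append(t)
    return owned                 # owned[k] = tasks run on processor k

print(distribute(list(range(10)), 4))   # 4 tasks own 2-3 processors each
print(distribute(list(range(4)), 10))   # 4 processors run 2-3 tasks each
```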
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
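The Map/Reduce split described here (Map tasks compute photon histories; a Reduce task scores absorption) can be sketched in a few lines. The following single-process Python toy uses invented physics constants and stands in for the actual MC321/Hadoop implementation, which it does not reproduce.

```python
# Toy Map/Reduce photon-migration sketch (illustrative physics only).
import random
from collections import defaultdict

def map_photon(seed):
    """One photon history: a 1-D random walk that deposits a fraction
    of the packet weight at each step. Emits (cell, absorbed) pairs."""
    rng = random.Random(seed)          # per-history seed, as in MC codes
    x, weight = 0, 1.0
    while weight > 1e-4:
        x += rng.choice((-1, 1))       # toy scattering step
        absorbed = 0.1 * weight        # toy absorption fraction
        weight -= absorbed
        yield (x, absorbed)

def reduce_absorption(pairs):
    """The Reduce task: sum absorbed weight per grid cell."""
    dose = defaultdict(float)
    for cell, w in pairs:
        dose[cell] += w
    return dose

# On Hadoop, disjoint seed blocks would run as parallel Map tasks and
# the emitted pairs would be shuffled to the reducer; here we chain them.
pairs = (p for seed in range(1000) for p in map_photon(seed))
print(sorted(reduce_absorption(pairs).items())[:3])
```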
High-pressure LOX/hydrocarbon preburners and gas generators
NASA Technical Reports Server (NTRS)
Huebner, A. W.
1981-01-01
The objective of the program was to conduct a small-scale hardware test program to establish the technology base required for LOX/hydrocarbon preburners and gas generators. The program consisted of six major tasks: Task I reviewed and assessed the performance prediction models and defined a subscale test program. Task II designed and fabricated this subscale hardware. Task III tested and analyzed the data from this hardware. Task IV analyzed the hot-fire results and formulated a preliminary design for 40K preburner assemblies. Task V carried the preliminary design through detail design and fabricated three 40K-size preburner assemblies: one each fuel-rich LOX/CH4 and LOX/RP-1, and one oxidizer-rich LOX/CH4. Task VI delivered these preburner assemblies to MSFC for subsequent evaluation.
Definition of an auxiliary processor dedicated to real-time operating system kernels
NASA Technical Reports Server (NTRS)
Halang, Wolfgang A.
1988-01-01
In order to increase the efficiency of process control data processing, it is necessary to enhance the productivity of real-time high-level languages and to automate task administration, because presently 60 percent or more of applications are still programmed in assembly languages. This may be achieved by migrating apt functions for the support of process-control-oriented languages into the hardware, i.e., by new architectures. Whereas numerous high-level languages have already been defined or realized, there are as yet no investigations of hardware-assisted implementation of real-time features. The requirements to be fulfilled by languages and operating systems in hard real-time environments are summarized. A comparison of the most prominent languages, viz. Ada, HAL/S, LTR, Pearl, as well as the real-time extensions of FORTRAN and PL/1, reveals how existing languages meet these demands and which features still need to be incorporated to enable the development of reliable software with predictable program behavior, thus making it possible to carry out a technical safety approval. Accordingly, Pearl proved to be the closest match to the mentioned requirements.
Relational Data Bases--Are You Ready?
ERIC Educational Resources Information Center
Marshall, Dorothy M.
1989-01-01
Migrating from a traditional to a relational database technology requires more than traditional project management techniques. An overview of what to consider before migrating to relational database technology is presented. Leadership, staffing, vendor support, hardware, software, and application development are discussed. (MLW)
Extravehicular activity training and hardware design consideration
NASA Technical Reports Server (NTRS)
Thuot, P. J.; Harbaugh, G. J.
1995-01-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
Extravehicular activity training and hardware design consideration.
Thuot, P J; Harbaugh, G J
1995-07-01
Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.
The JPL telerobot operator control station. Part 1: Hardware
NASA Technical Reports Server (NTRS)
Kan, Edwin P.; Tower, John T.; Hunka, George W.; Vansant, Glenn J.
1989-01-01
The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The hardware design, system architecture, and its integration and interface with the rest of the Telerobot Demonstrator System are discussed.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. Final hardware design is presented.
Closed-Loop Neuromorphic Benchmarks
Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris
2015-01-01
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
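As a flavor of what such a closed-loop benchmark measures, here is a toy loop in Python in which the controller's output feeds a plant subject to an unknown constant force, and an error-driven rule adapts the gain online. All dynamics, constants, and the learning rule are invented for illustration; they are not the paper's benchmarks.

```python
# Toy closed-loop benchmark: output affects the environment, which
# affects the next input; an error-driven rule adapts the controller.
import random

def run(adapt, steps=2000, dt=0.01):
    k, state, target = 0.0, 0.0, 1.0
    disturbance = random.uniform(-2.0, 2.0)   # unknown external force
    total_err = 0.0
    for _ in range(steps):
        error = target - state
        u = k * error                          # proportional controller
        state += dt * (u + disturbance)        # plant responds (closes the loop)
        if adapt:
            k += 0.5 * dt * error * error      # error-driven gain update
        total_err += abs(error) * dt
    return total_err

random.seed(0)
print("fixed gain :", run(adapt=False))
random.seed(0)                                 # same disturbance for fairness
print("adaptive   :", run(adapt=True))
```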
Learning in Neural Networks: VLSI Implementation Strategies
NASA Technical Reports Server (NTRS)
Duong, Tuan Anh
1995-01-01
Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.
Water Processor and Oxygen Generation Assembly
NASA Technical Reports Server (NTRS)
Bedard, John
1997-01-01
This report documents the results of the tasks which initiated efforts on design issues relating to the Water Processor (WP) and the Oxygen Generation Assembly (OGA) Flight Hardware for the International Space Station. This report fulfills the Statement of Work deliverables requirement for contract H-29387D. The following lists the tasks required by contract H-29387D: (1) HSSSI shall coordinate a detailed review of WP/OGA Flight Hardware program requirements with personnel from MSFC to identify requirements that can be eliminated without affecting the technical integrity of the WP/OGA Hardware; (2) HSSSI shall conduct the technical interchanges with personnel from MSFC to resolve design issues related to WP/OGA Flight Hardware; (3) HSSSI will initiate discussions with Zellwegger Analytics, Inc. to address design issues related to WP and PCWQM interfaces.
Human-computer dialogue: Interaction tasks and techniques. Survey and categorization
NASA Technical Reports Server (NTRS)
Foley, J. D.
1983-01-01
Interaction techniques are described. Six basic interaction tasks, the requirements for each task, requirements related to interaction techniques, and a technique's hardware prerequisites affecting device selection are discussed.
Accelerating Pathology Image Data Cross-Comparison on CPU-GPU Hybrid Systems
Wang, Kaibo; Huai, Yin; Lee, Rubao; Wang, Fusheng; Zhang, Xiaodong; Saltz, Joel H.
2012-01-01
As an important application of spatial databases in pathology imaging analysis, cross-comparing the spatial boundaries of a huge amount of segmented micro-anatomic objects demands extremely data- and compute-intensive operations, requiring high throughput at an affordable cost. However, the performance of spatial database systems has not been satisfactory since their implementations of spatial operations cannot fully utilize the power of modern parallel hardware. In this paper, we provide a customized software solution that exploits GPUs and multi-core CPUs to accelerate spatial cross-comparison in a cost-effective way. Our solution consists of an efficient GPU algorithm and a pipelined system framework with task migration support. Extensive experiments with real-world data sets demonstrate the effectiveness of our solution, which improves the performance of spatial cross-comparison by over 18 times compared with a parallelized spatial database approach. PMID:23355955
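One way to picture the task-migration element of such a pipelined framework: comparison tasks are enqueued once, and GPU and CPU workers drain the same queue, so work flows to whichever device is idle. The Python sketch below is an illustrative stand-in, not the authors' system.

```python
# Illustrative task migration between a "gpu" worker and CPU workers.
import queue, threading

tasks = queue.Queue()
results, lock = [], threading.Lock()

def worker(device):
    """All workers drain the same queue; whichever is idle takes
    ('migrates') the next tile-comparison task."""
    while True:
        try:
            tile = tasks.get_nowait()
        except queue.Empty:
            return
        outcome = (device, tile)      # stand-in for polygon cross-comparison
        with lock:
            results.append(outcome)

for tile in range(100):               # 100 tile-comparison tasks
    tasks.put(tile)
workers = [threading.Thread(target=worker, args=(d,))
           for d in ("gpu", "cpu-0", "cpu-1")]
for w in workers: w.start()
for w in workers: w.join()
print(len(results), "tasks completed")
```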
Inspection of small multi-layered plastic tubing during extrusion, using low-energy X-ray beams
NASA Astrophysics Data System (ADS)
Armentrout, C.; Basinger, T.; Beyer, J.; Colesa, B.; Olsztyn, P.; Smith, K.; Strandberg, C.; Sullivan, D.; Thomson, J.
1999-02-01
The automotive industry uses nylon tubing with a thin ETFE (ethylene-tetrafluoroethylene) inner layer to carry fuel from the tank to the engine. This fluorocarbon inner barrier layer is important to reduce the migration of hydrocarbons into the environment. Pilot Industries has developed a series of real-time inspection stations for dimensional measurements and flaw detection during the extrusion of this tubing. These stations, named LERA(TM) (low-energy radioscopic analysis), use a low-energy X-ray source, a special high-resolution image converter and intensifier (ICI) stage, image capture hardware, a personal computer, and software that was specially designed for this task. Each LERA(TM) station operates up to 20 h a day, 6 days a week, nearly every week of the year. The tubing walls are 1-2 mm thick; the outer layer is nylon and the 0.2 mm thick inner layer is ETFE.
Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center
NASA Technical Reports Server (NTRS)
Guillebeau, P. L.
2004-01-01
The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained, so it became necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture including networks, data storage, and highly available resources; this paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. The paper will also address the testing approach, covering all levels of testing: development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included. The deployment approach will also be addressed, including user involvement in testing and the need for a smooth transition while maintaining real-time support. An important aspect of the paper will involve challenges and lessons learned, including COTS product compatibility, implications of phasing decisions, and tracking of dependencies, particularly non-software dependencies. The paper will also discuss the scheduling challenges of providing real-time flight support during the migration and the requirement to incorporate into the migration changes being made simultaneously for flight support.
Simulation verification techniques study: Simulation self test hardware design and techniques report
NASA Technical Reports Server (NTRS)
1974-01-01
The final results are presented of the hardware verification task. The basic objectives of the various subtasks are reviewed along with the ground rules under which the overall task was conducted and which impacted the approach taken in deriving techniques for hardware self test. The results of the first subtask and the definition of simulation hardware are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self test techniques are presented. The data sources that were considered in the search for current techniques are reviewed, and results of the survey are presented in terms of the specific types of tests that are of interest for training simulator applications. Specifically, these types of tests are readiness tests, fault isolation tests and incipient fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.
Designers workbench: toward real-time immersive modeling
NASA Astrophysics Data System (ADS)
Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu
2000-05-01
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Space station common module power system network topology and hardware development
NASA Technical Reports Server (NTRS)
Landis, D. M.
1985-01-01
Candidate power system network topologies for the space station common module are defined and developed and the necessary hardware for test and evaluation is provided. Martin Marietta's approach to performing the proposed program is presented. Performance of the tasks described will assure systematic development and evaluation of program results, and will provide the necessary management tools, visibility, and control techniques for performance assessment. The plan is submitted in accordance with the data requirements given and includes a comprehensive task logic flow diagram, time-phased manpower requirements, a program milestone schedule, and detailed descriptions of each program task.
Rapid Production of Composite Prototype Hardware
NASA Technical Reports Server (NTRS)
DeLay, T. K.
2000-01-01
The objective of this research was to provide a mechanism to cost-effectively produce composite hardware prototypes. The task was to take a hands-on approach to developing new technologies that could benefit multiple future programs.
Shuttle/TDRSS Ku-band downlink study
NASA Technical Reports Server (NTRS)
Meyer, R.
1976-01-01
The tasks of assessing the adequacy of the baseline signal design approach, developing performance specifications for the return link hardware, and performing detailed design and parameter optimization were accomplished by completing five specific study tasks. The results of these tasks show that the basic signal structure design is sound and that the goals can be met. Constraints placed on return link hardware by this structure allow reasonable specifications to be written, so that no extreme technical risk areas in equipment design are foreseen. A third channel can be added to the PM mode without seriously degrading the other services. The feasibility of using only a PM mode was shown to exist; however, this will require use of some digital TV transmission techniques. Each task and its results are summarized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, George L.; Eichenberger, Alexandre E.; O'Brien, John K. P.
The present disclosure relates generally to a dedicated memory structure (that is, a hardware device) holding data for detecting available worker thread(s) and informing available worker thread(s) of task(s) to execute.
FLASH fly-by-light flight control demonstration results overview
NASA Astrophysics Data System (ADS)
Halski, Don J.
1996-10-01
The Fly-By-Light Advanced Systems Hardware (FLASH) program developed Fly-By-Light (FBL) and Power-By-Wire (PBW) technologies for military and commercial aircraft. FLASH consists of three tasks. Task 1 developed the fiber optic cable, connectors, testers, and installation and maintenance procedures. Task 3 developed advanced smart, rotary thin-wing and electro-hydrostatic (EHA) actuators. Task 2, which is the subject of this paper, focused on integration of fiber optic sensors and data buses with cable plant components from Task 1 and actuators from Task 3 into centralized and distributed flight control systems. Both open-loop and piloted hardware-in-the-loop demonstrations were conducted with centralized and distributed flight control architectures incorporating the AS-1773A optical bus, active hand controllers, optical sensors, optimal flight control laws in high-speed 32-bit processors, and neural networks for EHA monitoring and fault diagnosis. This paper overviews the systems-level testing conducted under the FLASH Flight Control task. Preliminary results are summarized. Companion papers provide additional information.
Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal
Modern hardware contains parallel execution resources that are well-suited for data parallelism (vector units) and task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task and data parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multicores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task-block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
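A minimal way to picture the task-block idea: one recursive routine that either runs a block as a single vectorized sweep or splits it into independent tasks, chosen dynamically by block size. The threshold and policy below are invented placeholders; the paper's schedulers are more sophisticated (and provably optimal).

```python
# Toy "task block": run small blocks vectorized, split large ones into tasks.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

VECTOR_THRESHOLD = 1024  # assumed cutover; would be tuned per machine

def run_block(items, f, pool):
    if len(items) <= VECTOR_THRESHOLD:
        # Small enough: execute as one data-parallel (vectorized) sweep.
        return f(np.asarray(items))
    # Otherwise: split into two task blocks; one becomes an independent
    # task (multicore), the other is processed by this worker.
    mid = len(items) // 2
    left = pool.submit(run_block, items[:mid], f, pool)
    right = run_block(items[mid:], f, pool)
    return np.concatenate([left.result(), right])

with ThreadPoolExecutor(max_workers=8) as pool:
    out = run_block(np.arange(5000), lambda x: x * x, pool)
print(out[:3], out[-1])  # [0 1 4] 24990001
```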
Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach
NASA Technical Reports Server (NTRS)
Smith, James A.
2010-01-01
The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies, offers new opportunities for the application of mechanistic models to predict how continental-scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual-based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stopover durations and stopover frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU (general-purpose computing on graphics processing units) paradigms and commodity hardware. Several aspects of our individual-based (agent) modeling approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species-specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.
Migrating EO/IR sensors to cloud-based infrastructure as service architectures
NASA Astrophysics Data System (ADS)
Berglie, Stephen T.; Webster, Steven; May, Christopher M.
2014-06-01
The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IAS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A discrimination is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.
The JPL telerobot operator control station. Part 2: Software
NASA Technical Reports Server (NTRS)
Kan, Edwin P.; Landell, B. Patrick; Oxenberg, Sheldon; Morimoto, Carl
1989-01-01
The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The software design of the operator control system is discussed.
Hardware interface unit for control of shuttle RMS vibrations
NASA Technical Reports Server (NTRS)
Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran
1994-01-01
Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self-contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) has been built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility at NASA's Marshall Space Flight Center in Huntsville, Alabama.
Dynamically allocating sets of fine-grained processors to running computations
NASA Technical Reports Server (NTRS)
Middleton, David
1988-01-01
Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.
Designers Workbench: Towards Real-Time Immersive Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuester, F; Duchaineau, M A; Hamann, B
2001-10-03
This paper introduces the DesignersWorkbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The DesignersWorkbench aims at closing this technology or 'digital' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Using DMA for copying performance counter data to memory
Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.
2012-09-25
A device for copying performance counter data includes a hardware path that connects a direct memory access (DMA) unit to a plurality of hardware performance counters and a memory device. Software prepares an injection packet for the DMA unit to perform the copying, while the software can perform other tasks. In one aspect, the software that prepares the injection packet runs on a processing core other than the core that gathers the hardware performance counter data.
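To make the "injection packet" idea concrete, here is a toy descriptor builder. The field layout and opcode are entirely hypothetical (real descriptors, such as those of Blue Gene's messaging hardware, are hardware-specific); it only illustrates software packing a descriptor and moving on while the DMA engine performs the copy.

```python
# Hypothetical injection-packet layout for a DMA counter copy (sketch).
import struct

def make_injection_packet(counter_base, dest_addr, num_counters):
    """Pack a toy DMA descriptor: source address, destination address,
    byte count and a 'copy' opcode as four 8-byte little-endian fields."""
    OP_COPY = 0x1
    nbytes = num_counters * 8              # assume 64-bit counters
    return struct.pack("<QQQQ", counter_base, dest_addr, nbytes, OP_COPY)

# The preparing core builds the packet, hands it to the DMA unit, and
# continues with other work while the copy proceeds in hardware.
pkt = make_injection_packet(0xFFFF0000, 0x20000000, num_counters=16)
print(pkt.hex())
```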
Using DMA for copying performance counter data to memory
Gara, Alan; Salapura, Valentina; Wisniewski, Robert W
2013-12-31
A device for copying performance counter data includes a hardware path that connects a direct memory access (DMA) unit to a plurality of hardware performance counters and a memory device. Software prepares an injection packet for the DMA unit to perform the copying, while the software can perform other tasks. In one aspect, the software that prepares the injection packet runs on a processing core other than the core that gathers the hardware performance data.
IDEAS and App Development Internship in Hardware and Software Design
NASA Technical Reports Server (NTRS)
Alrayes, Rabab D.
2016-01-01
In this report, I will discuss the tasks and projects I completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI) that would attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; and to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused on PCB design and camera integration. My goals and objectives for this project were to integrate fully functioning custom hardware extenders on the wearable technology headset, minimizing the size of hardware on the smart glasses headset for maximum user comfort, and to integrate a fully functioning camera onto the headset. By the end of the semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully functioning camera into the system.
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
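A minimal sketch of the online-evolution loop the article describes, in (1+1) style: mutate the controller's weights, evaluate the mutant on the robot for a short period, and keep it if it performs at least as well. The toy fitness function and mutation scale below are placeholders, not the authors' setup.

```python
# (1+1)-style online evolution of controller weights (illustrative only).
import random

def evaluate(weights):
    """Stand-in for an on-robot evaluation period (returns fitness)."""
    return -sum((w - 0.5) ** 2 for w in weights)   # toy objective

random.seed(1)
champion = [random.uniform(-1, 1) for _ in range(8)]  # NN weights
best = evaluate(champion)
for generation in range(200):
    mutant = [w + random.gauss(0, 0.1) for w in champion]
    score = evaluate(mutant)           # would run on the robot, online
    if score >= best:                  # replacement keeps adaptation going
        champion, best = mutant, score
print(round(best, 4))
```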
PACS archive upgrade and data migration: clinical experiences
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Documet, Luis; Sarti, Dennis A.; Huang, H. K.; Donnelly, John
2002-05-01
Saint John's Health Center PACS data volumes have increased dramatically since the hospital became filmless in April of 1999. This is due in part to continuous image accumulation and the integration of a new multi-slice detector CT scanner into PACS. The original PACS archive would not be able to handle the distribution and archiving load and capacity in the near future. Furthermore, there was no secondary copy backup of all the archived PACS image data for disaster recovery purposes. The purpose of this paper is to present a clinical and technical process template to upgrade and expand the PACS archive, migrate existing PACS image data to the new archive, and provide a backup and disaster recovery function not previously available. Discussion of the technical and clinical pitfalls and challenges involved in this process is presented as well. The server hardware configuration was upgraded and a secondary backup implemented for disaster recovery. The upgrade includes new software versions, database reconfiguration, and installation of a new tape jukebox to replace the current MOD jukebox. Upon completion, all PACS image data from the original MOD jukebox were migrated to the new tape jukebox and verified. The migration was performed continuously in the background during clinical operation. Once the data migration was completed, the MOD jukebox was removed. All newly acquired PACS exams are now archived to the new tape jukebox. All PACS image data residing on the original MOD jukebox have been successfully migrated into the new archive. In addition, a secondary backup of all PACS image data has been implemented for disaster recovery and has been verified using disaster scenario testing. No PACS image data were lost during the entire process and there was very little clinical impact during the entire upgrade and data migration. Some of the pitfalls and challenges during this upgrade process included hardware reconfiguration for the original archive server, clinical downtime involved with the upgrade, and data migration planning to minimize impact on clinical workflow. The impact was minimized with a downtime contingency plan.
PANDA: A distributed multiprocessor operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubb, P.
1989-01-01
PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.
NASA Technical Reports Server (NTRS)
Salmon, Ellen; Tarshish, Adina; Palm, Nancy; Patel, Sanjay; Saletta, Marty; Vanderlan, Ed; Rouch, Mike; Burns, Lisa; Duffy, Daniel; Caine, Robert
2004-01-01
This paper presents the data management issues associated with a large center like the NCCS and how these issues are addressed. More specifically, the focus of this paper is on the recent transition from a legacy UniTree (Legato) system to a SAM-QFS (Sun) system. Therefore, this paper will describe the motivations, from both a hardware and software perspective, for migrating from one system to another. Coupled with the migration from UniTree into SAM-QFS, the complete mass storage environment was upgraded to provide high availability, redundancy, and enhanced performance. This paper will describe the resulting solution and lessons learned throughout the migration process.
Accelerating a MPEG-4 video decoder through custom software/hardware co-design
NASA Astrophysics Data System (ADS)
Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition, which was developed here. This research is part of the ARTEMI project, and its main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as the target technology, with the design of multimedia systems for broadcasting networks as the reference application.
Web-Based Seamless Migration for Task-Oriented Mobile Distance Learning
ERIC Educational Resources Information Center
Zhang, Degan; Li, Yuan-chao; Zhang, Huaiyu; Zhang, Xinshang; Zeng, Guangping
2006-01-01
As a new kind of computing paradigm, pervasive computing will meet the requirements of human being that anybody maybe obtain services in anywhere and at anytime, task-oriented seamless migration is one of its applications. Apparently, the function of seamless mobility is suitable for mobile services, such as mobile Web-based learning. In this…
Space shuttle solid rocket booster cost-per-flight analysis technique
NASA Technical Reports Server (NTRS)
Forney, J. A.
1979-01-01
A cost per flight computer model is described which considers: traffic model, component attrition, hardware useful life, turnaround time for refurbishment, manufacturing rates, learning curves on the time to perform tasks, cost improvement curves on quantity hardware buys, inflation, spares philosophy, long lead, hardware funding requirements, and other logistics and scheduling constraints. Additional uses of the model include assessing the cost per flight impact of changing major space shuttle program parameters and searching for opportunities to make cost effective management decisions.
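For the learning-curve ingredient, a standard formulation (Wright's curve) makes the idea concrete: each doubling of cumulative quantity multiplies unit cost (or task time) by the learning rate. The abstract does not give the model's actual curve parameters; the numbers below are illustrative.

```python
# Wright learning curve: unit cost falls by a fixed ratio per doubling.
import math

def unit_cost(first_unit_cost, n, learning_rate=0.90):
    """Cost (or task time) of the n-th unit under a Wright learning
    curve: each doubling of cumulative quantity scales by learning_rate."""
    b = math.log(learning_rate, 2)       # negative exponent for rate < 1
    return first_unit_cost * n ** b

# Illustrative 90% curve: the 10th and 100th units cost roughly 70%
# and 50% of the first one, respectively.
print(round(unit_cost(100.0, 10), 1))    # ~70.5
print(round(unit_cost(100.0, 100), 1))   # ~49.7
```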
Rearchitecting IT: Simplify. Simplify
ERIC Educational Resources Information Center
Panettieri, Joseph C.
2006-01-01
Simplifying and securing an IT infrastructure is not easy. It frequently requires rethinking years of hardware and software investments, and a gradual migration to modern systems. Even so, writes the author, universities can take six practical steps to success: (1) Audit software infrastructure; (2) Evaluate current applications; (3) Centralize…
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain, and code his or her own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
Software Requirements for the Move to Unix
NASA Astrophysics Data System (ADS)
Rees, Paul
This document provides information concerning the software requirements of each STARLINK site to move entirely to UNIX. It provides a list of proposed UNIX migration deadlines for all sites and lists of software requirements, both STARLINK and non-STARLINK software, which must be met before the existing VMS hardware can be switched off. The information presented in this document is used for the planning of software porting and distribution activities and also for setting realistic migration deadlines for STARLINK sites. The information on software requirements has been provided by STARLINK Site Managers.
USDA-ARS?s Scientific Manuscript database
Service oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration ...
Multigeneration data migration from legacy systems
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Liu, Brent J.; Kho, Hwa T.; Tao, Wenchao; Wang, Cun; McCoy, J. Michael
2003-05-01
The migration of image data from different generations of legacy archive systems represents a technical challenge and an incremental cost in transitions to newer generations of PACS. UCLA Medical Center has elected to completely replace the existing PACS infrastructure, encompassing several generations of legacy systems, with a new commercial system providing enterprise-wide image management and communication. One of the most challenging parts of the project was the migration of large volumes of legacy images into the new system. Planning the migration required the development of specialized software and hardware, and included different phases of data mediation from existing databases to the new PACS database prior to the migration of the image data. The project plan included a detailed analysis of resources and cost of data migration to optimize the process and minimize the delay of a hybrid operation in which the legacy systems need to remain operational. Our analysis and project planning showed that data migration represents the most critical path in the process of PACS renewal. Careful planning and optimization of the project timeline and allocated resources is critical to minimize the financial impact and the time delays that such migrations can impose on the implementation plan.
Virtual Reality Training System for Anytime/Anywhere Acquisition of Surgical Skills: A Pilot Study.
Zahiri, Mohsen; Booton, Ryan; Nelson, Carl A; Oleynikov, Dmitry; Siu, Ka-Chun
2018-03-01
This article presents a hardware/software simulation environment suitable for anytime/anywhere surgical skills training. It blends the advantages of physical hardware and task analogs with the flexibility of virtual environments. This is further enhanced by a web-based implementation of training feedback accessible to both trainees and trainers. Our training system provides a self-paced and interactive means to attain proficiency in basic tasks that could potentially be applied across a spectrum of trainees from first responder field medical personnel to physicians. This results in a powerful training tool for surgical skills acquisition relevant to helping injured warfighters.
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130
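The article's online evolution can be illustrated with a minimal (1+1) evolutionary loop over controller parameters; the quadratic stand-in fitness below is an assumption for the sketch, whereas the real system evaluates neural controllers on robot hardware.

```python
import random

def evaluate(weights):
    # Stand-in fitness: on a real robot this would run the controller for a
    # fixed trial period and score task performance from sensor data.
    target = [0.5, -0.2, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def online_one_plus_one(n_weights=3, sigma=0.1, evaluations=200, seed=1):
    """(1+1) evolution: mutate the current controller, keep it if no worse."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    parent_fitness = evaluate(parent)
    for _ in range(evaluations):
        child = [w + rng.gauss(0, sigma) for w in parent]  # Gaussian mutation
        child_fitness = evaluate(child)                    # trial on the robot
        if child_fitness >= parent_fitness:                # keep the better one
            parent, parent_fitness = child, child_fitness
    return parent, parent_fitness

weights, fitness = online_one_plus_one()
print(weights, fitness)
```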
A Hardware-Supported Algorithm for Self-Managed and Choreographed Task Execution in Sensor Networks.
Bordel, Borja; Miguel, Carlos; Alcarria, Ramón; Robles, Tomás
2018-03-07
Nowadays, sensor networks are composed of a great number of tiny resource-constrained nodes whose management is increasingly complex. Although collaborative or choreographed task execution schemes fit the nature of sensor networks best, they are rarely implemented because of their high resource consumption (especially in networks that include many resource-constrained devices). Instead, hierarchical networks are usually designed, with a heavy orchestrator of considerable processing power at the top, able to implement any necessary management solution. However, although this orchestration approach solves most practical management problems of sensor networks, a great amount of operating time is wasted while nodes ask the orchestrator to resolve a conflict and wait for the instructions they need to operate. This paper therefore proposes a new mechanism for self-managed and choreographed task execution in sensor networks. The proposed solution uses only a lightweight gateway instead of a traditional heavy orchestrator, together with a hardware-supported algorithm that consumes a negligible amount of resources on sensor nodes. The gateway avoids congestion of the entire sensor network, and the hardware-supported algorithm enables a choreographed task-execution scheme in which no particular node is overloaded. The performance of the proposed solution is evaluated through numerical and electronic ModelSim-based simulations.
Neuroimaging of Human Balance Control: A Systematic Review
Wittenberg, Ellen; Thompson, Jessica; Nam, Chang S.; Franz, Jason R.
2017-01-01
This review examined 83 articles using neuroimaging modalities to investigate the neural correlates underlying static and dynamic human balance control, with the aim of supporting future mobile neuroimaging research in the balance control domain. Furthermore, this review analyzed the mobility of the neuroimaging hardware and research paradigms as well as the analytical methodology used to identify and remove movement artifact from the acquired brain signal. We found that the majority of static balance control tasks utilized mechanical perturbations to invoke feet-in-place responses (27 out of 38 studies), while cognitive dual-task conditions were commonly used to challenge balance in dynamic balance control tasks (20 out of 32 studies). While frequency analysis and event-related potential characteristics supported enhanced brain activation during static balance control, enhanced activation during dynamic balance control was supported by spatial and frequency analyses. Twenty-three of the 50 studies using EEG employed independent component analysis to remove movement artifacts from the acquired brain signals. Lastly, only eight studies used truly mobile neuroimaging hardware systems. This review provides evidence to support an increase in brain activation in balance control tasks, regardless of mechanical, cognitive, or sensory challenges. Furthermore, the current body of literature demonstrates the use of advanced signal processing methodologies to analyze brain activity during movement. However, the static nature of neuroimaging hardware and conventional balance control paradigms prevent full mobility and limit our knowledge of the neural mechanisms underlying balance control. PMID:28443007
Task Decomposition in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids; Joe, Jeffrey Clark
2014-06-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
EVA Development and Verification Testing at NASA's Neutral Buoyancy Laboratory
NASA Technical Reports Server (NTRS)
Jairala, Juniper C.; Durkin, Robert; Marak, Ralph J.; Sipila, Stephanie A.; Ney, Zane A.; Parazynski, Scott E.; Thomason, Arthur H.
2012-01-01
As an early step in the preparation for future Extravehicular Activities (EVAs), astronauts perform neutral buoyancy testing to develop and verify EVA hardware and operations. Neutral buoyancy demonstrations at NASA Johnson Space Center's Sonny Carter Training Facility to date have primarily evaluated assembly and maintenance tasks associated with several elements of the International Space Station (ISS). With the retirement of the Shuttle, completion of ISS assembly, and introduction of commercial players for human transportation to space, evaluations at the Neutral Buoyancy Laboratory (NBL) will take on a new focus. Test objectives are selected for their criticality, lack of previous testing, or design changes that justify retesting. Assembly tasks investigated are performed using procedures developed by the flight hardware providers and the Mission Operations Directorate (MOD). Orbital Replacement Unit (ORU) maintenance tasks are performed using a more systematic set of procedures, EVA Concept of Operations for the International Space Station (JSC-33408), also developed by the MOD. This paper describes the requirements and process for performing a neutral buoyancy test, including typical hardware and support equipment requirements, personnel and administrative resource requirements, examples of ISS systems and operations that are evaluated, and typical operational objectives that are evaluated.
Microgravity Manufacturing Via Fused Deposition
NASA Technical Reports Server (NTRS)
Cooper, K. G.; Griffin, M. R.
2003-01-01
Manufacturing polymer hardware during space flight is currently outside the state of the art. A process called fused deposition modeling (FDM) can make this approach a reality by producing net-shaped components of polymer materials directly from a CAE model. FDM is a rapid prototyping process developed by Stratasys, Inc., which deposits a fine line of semi-molten polymer onto a substrate while moving via computer control to form the cross-sectional shape of the part it is building. The build platen is then lowered and the process is repeated, building a component directly layer by layer. This method enables net-shaped production of polymer components directly from a computer file. The layered manufacturing process allows for the manufacture of complex shapes and internal cavities otherwise impossible to machine. This task demonstrated the benefits of the FDM technique to quickly and inexpensively produce replacement components or repair broken hardware in a Space Shuttle or Space Station environment. The intent of the task was to develop and fabricate an FDM system that was lightweight, compact, and required minimum power consumption to fabricate ABS plastic hardware in microgravity. The final product of the shortened task turned out to be a ground-based breadboard device, demonstrating the miniaturization capability of the system.
Proactive Fault Tolerance for HPC with Xen Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Arun Babu; Mueller, Frank; Engelmann, Christian
2007-01-01
Large-scale parallel computing increasingly relies on clusters with thousands of processors. At such large counts of compute nodes, faults are becoming commonplace. Current techniques to tolerate faults focus on reactive schemes that recover from faults and generally rely on a checkpoint/restart mechanism. Yet, in today's systems, node failures can often be anticipated by detecting a deteriorating health status. Instead of a reactive scheme for fault tolerance (FT), we promote a proactive one in which processes automatically migrate from unhealthy nodes to healthy ones. Our approach relies on operating system virtualization techniques exemplified by, but not limited to, Xen. This paper contributes an automatic and transparent mechanism for proactive FT for arbitrary MPI applications. It leverages virtualization techniques combined with health monitoring and load-based migration. We exploit Xen's live migration mechanism for a guest operating system (OS) to migrate an MPI task from a health-deteriorating node to a healthy one without stopping the MPI task during most of the migration. Our proactive FT daemon orchestrates the tasks of health monitoring, load determination, and initiation of guest OS migration. Experimental results demonstrate that live migration hides migration costs and limits the overhead to only a few seconds, making it an attractive approach to realize FT in HPC systems. Overall, our enhancements make proactive FT a valuable asset for long-running MPI applications that is complementary to reactive FT using full checkpoint/restart schemes, since checkpoint frequencies can be reduced as fewer unanticipated failures are encountered. In the context of OS virtualization, we believe this is the first comprehensive study of proactive fault tolerance where live migration is actually triggered by health monitoring.
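A sketch of the daemon's control loop under stated assumptions: `read_health` and `live_migrate` are hypothetical stand-ins for the paper's health-monitoring interface and Xen's live-migration call, and the threshold and node names are invented for illustration.

```python
import random

HEALTH_THRESHOLD = 0.7  # assumed normalized health score; below it we evacuate

def read_health(node):
    # Stand-in for an IPMI/lm_sensors-style health query; here, a random score.
    return random.uniform(0.5, 1.0)

def live_migrate(guest, source, target):
    # Stand-in for the hypervisor's live-migration call.
    print(f"migrating {guest}: {source} -> {target}")

def ft_daemon_step(nodes, guests, loads):
    """One polling pass: evacuate guests from any health-deteriorating node
    to the least-loaded healthy node (load-based migration)."""
    health = {n: read_health(n) for n in nodes}
    for node in nodes:
        if health[node] < HEALTH_THRESHOLD:
            candidates = [n for n in nodes
                          if n != node and health[n] >= HEALTH_THRESHOLD]
            if not candidates:
                continue  # nowhere safe to go; keep running, retry next pass
            target = min(candidates, key=lambda n: loads[n])
            for guest in guests.pop(node, []):
                live_migrate(guest, node, target)
                guests.setdefault(target, []).append(guest)
                loads[target] += 1

nodes = ["n1", "n2", "n3"]
guests = {"n1": ["mpi-rank0"], "n2": ["mpi-rank1"], "n3": ["mpi-rank2"]}
loads = {"n1": 1, "n2": 1, "n3": 1}
ft_daemon_step(nodes, guests, loads)
```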
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, K; Freeman, J; Zavalkovskiy, B
Purpose: Situated 20 miles from 5 major fault lines in California's Bay Area, Stanford University has a critical need for IT infrastructure planning to handle the high probability of devastating earthquakes. Recently, a multi-million dollar project has been underway to overhaul Stanford's radiation oncology information systems, maximizing planning system performance and providing true disaster recovery abilities. An overview of the project will be given with particular focus on lessons learned throughout the build. Methods: In this implementation, two isolated external datacenters provide geographical redundancy to Stanford's main campus datacenter. Real-time mirroring is made of all data stored to our serial attached network (SAN) storage. In each datacenter, hardware/software virtualization was heavily implemented to maximize server efficiency and provide a robust mechanism to seamlessly migrate users in the event of an earthquake. System performance is routinely assessed through the use of virtualized data robots, able to log in to the system at scheduled times, perform routine planning tasks, and report timing results to a performance dashboard. A substantial dose calculation framework (608 CPU cores) has been constructed as part of the implementation. Results: Migration to a virtualized server environment with a high-performance SAN has resulted in up to a 45% speedup of common treatment planning tasks. Switching to a 608-core DCF has resulted in a 280% speed increase in dose calculations. Server tuning was found to further improve read/write performance by 20%. Disaster recovery tests are carried out quarterly and, although successful, remain time consuming to perform and verify. Conclusion: Achieving true disaster recovery capabilities is possible through server virtualization, support from skilled IT staff, and leadership. Substantial performance improvements are also achievable through careful tuning of server resources and disk read/write operations. Developing a streamlined method to comprehensively test failover is a key requirement for the system's success.
CD-ROM and Local Area Networks.
ERIC Educational Resources Information Center
Marks, Kenneth E.; And Others
1993-01-01
This special section on local area networks includes three articles: (1) a description of migration at Joyner Library, East Carolina University (North Carolina) to a new network server; (2) a discussion of factors to consider for network planning in school libraries; and (3) a directory of companies supplying cable, hardware, software, and…
NASA Technical Reports Server (NTRS)
Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don
2015-01-01
Water entered the Extravehicular Mobility Unit (EMU) helmet during extravehicular activity (EVA) no. 23 aboard the International Space Station on July 16, 2013, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 liters of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based test, tear-down, and evaluation of the affected EMU hardware components determined that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator degassing function, which resulted in EMU cooling water spilling into the ventilation loop, migrating around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the Airlock Cooling Loop Recovery (ALCLR) Ion Filter Beds, which led to various levels of contaminants being introduced into the filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-08-04
Wireless Sensor and Actuator Networks (WSANs) constitute one of the most challenging technologies with tremendous socio-economic impact for the next decade. Functionally and energy-optimized hardware systems and development tools are perhaps the most critical facet of this technology for the achievement of such prospects. Especially in agriculture, where a hostile operating environment adds to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The proposed architecture is based, first, on an abstraction approach to the functional requirements and, second, on the standardization of subsystem connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable, and robust hardware system. The SensoTube implementation reference model, together with its encapsulation design and installation, is analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on available open-source hardware platforms to the SensoTube architecture.
FTAP: a Linux-based program for tapping and music experiments.
Finney, S A
2001-02-01
This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.
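FTAP itself is C code driving MIDI hardware under Linux; the sketch below only illustrates the core timestamping idea, monotonic-clock millisecond timestamps and inter-tap intervals, with `fake_tap` standing in for a non-blocking MIDI input poll.

```python
import time

def timestamp_events(get_event, duration_s=2.0):
    """Collect (event, millisecond-timestamp) pairs against a monotonic clock.
    `get_event` is a stand-in for a non-blocking MIDI-input poll."""
    t0 = time.monotonic_ns()
    events = []
    while time.monotonic_ns() - t0 < duration_s * 1e9:
        ev = get_event()
        if ev is not None:
            events.append((ev, (time.monotonic_ns() - t0) / 1e6))
    return events

def intertap_intervals(events):
    """Inter-tap intervals in ms, the basic synchronization-task measure."""
    times = [t for _, t in events]
    return [b - a for a, b in zip(times, times[1:])]

# Demo with a fake event source that 'taps' roughly every 500 ms.
last = [0.0]
def fake_tap():
    now = time.monotonic()
    if now - last[0] >= 0.5:
        last[0] = now
        return "tap"
    return None

print(intertap_intervals(timestamp_events(fake_tap)))
```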
Transparent process migration: Design alternatives and the Sprite implementation
NASA Technical Reports Server (NTRS)
Douglis, Fred; Ousterhout, John
1991-01-01
The Sprite operating system allows executing processes to be moved between hosts at any time. We use this process migration mechanism to offload work onto idle machines, and also to evict migrated processes when idle workstations are reclaimed by their owners. Sprite's migration mechanism provides a high degree of transparency both for migrated processes and for users. Idle machines are identified, and eviction is invoked, automatically by daemon processes. On Sprite it takes up to a few hundred milliseconds on SPARCstation 1 workstations to perform a remote exec, while evictions typically occur in a few seconds. The pmake program uses remote invocation to invoke tasks concurrently. Compilations commonly obtain speedup factors in the range of three to six; they are limited primarily by contention for centralized resources such as file servers. CPU-bound tasks such as simulations can make more effective use of idle hosts, obtaining as much as eight-fold speedup over a period of hours. Process migration has been in regular service for over two years.
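The placement policy (run on idle hosts, evict when the owner returns) can be sketched as follows; this illustrates the policy only, not Sprite's kernel mechanism, and the host names and predicates are invented for the example.

```python
def assign_tasks(tasks, hosts, is_idle, owner_active):
    """Greedy placement in the spirit of Sprite: run tasks on idle hosts,
    then evict (send home) work whose host has been reclaimed by its owner."""
    placement = {}
    idle = [h for h in hosts if is_idle(h)]
    for task in tasks:
        placement[task] = idle.pop(0) if idle else "home"  # fall back to local
    # Eviction pass: any task on a reclaimed workstation migrates home.
    for task, host in placement.items():
        if host != "home" and owner_active(host):
            placement[task] = "home"
    return placement

hosts = ["ws1", "ws2", "ws3"]
print(assign_tasks(["cc file1.c", "cc file2.c"], hosts,
                   is_idle=lambda h: h != "ws2",       # ws2 is busy
                   owner_active=lambda h: h == "ws3"))  # ws3 was reclaimed
```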
Nimble Compiler Environment for Agile Hardware. Volume 1
2001-10-01
[Report front matter: appendix listing, including Appendix G, "XIMA - The Nimble Datapath Compiler", and Appendix H, "Domain Generator Tutorial for the Nimble Compiler Project".] …a loop example: nodes A-G are basic blocks inside the loop, and there are four distinct paths inside the loop (without counting the…
Composite Structures Damage Tolerance Analysis Methodologies
NASA Technical Reports Server (NTRS)
Chang, James B.; Goyal, Vinay K.; Klug, John C.; Rome, Jacob I.
2012-01-01
This report presents the results of a literature review as part of the development of composite hardware fracture control guidelines funded by NASA Engineering and Safety Center (NESC) under contract NNL04AA09B. The objectives of the overall development tasks are to provide a broad information and database to the designers, analysts, and testing personnel who are engaged in space flight hardware production.
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high-performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high-performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which by April 2015 will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high-performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high-performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage and our plans to meet those requirements.
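As an illustration of the checksumming workload, a minimal sketch of a parallel pass over an archive tree; the worker count and the choice of SHA-256 are assumptions, and a real job over roughly 10^8 files would additionally be batched and restartable.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def checksum(path: str, chunk: int = 1 << 20) -> tuple[str, str]:
    """Stream a file through SHA-256 in 1 MiB chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return path, h.hexdigest()

def checksum_archive(root: str, workers: int = 16):
    """Fan file checksums out over worker processes."""
    files = (str(p) for p in Path(root).rglob("*") if p.is_file())
    with ProcessPoolExecutor(max_workers=workers) as pool:
        yield from pool.map(checksum, files, chunksize=256)

if __name__ == "__main__":
    # Current directory as a stand-in archive root for the demo.
    for path, digest in checksum_archive("."):
        print(digest, path)
```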
Biomedical applications engineering tasks
NASA Technical Reports Server (NTRS)
Laenger, C. J., Sr.
1976-01-01
The engineering tasks performed in response to needs articulated by clinicians are described. Initial contacts were made with these clinician-technology requestors by the Southwest Research Institute NASA Biomedical Applications Team. The basic purpose of the program was to effectively transfer aerospace technology into functional hardware to solve real biomedical problems.
Microcomputers: Software Evaluation. Evaluation Guides. Guide Number 17.
ERIC Educational Resources Information Center
Gray, Peter J.
This guide discusses three critical steps in selecting microcomputer software and hardware: setting the context, software evaluation, and managing microcomputer use. Specific topics addressed include: (1) conducting an informal task analysis to determine how the potential user's time is spent; (2) identifying tasks amenable to computerization and…
Programmable hardware for reconfigurable computing systems
NASA Astrophysics Data System (ADS)
Smith, Stephen
1996-10-01
In 1945 the work of J. von Neumann and H. Goldstein created the principal architecture for electronic computation that has now lasted fifty years. Nevertheless alternative architectures have been created that have computational capability, for special tasks, far beyond that feasible with von Neumann machines. The emergence of high capacity programmable logic devices has made the realization of these architectures practical. The original ENIAC and EDVAC machines were conceived to solve special mathematical problems that were far from today's concept of 'killer applications.' In a similar vein programmable hardware computation is being used today to solve unique mathematical problems. Our programmable hardware activity is focused on the research and development of novel computational systems based upon the reconfigurability of our programmable logic devices. We explore our programmable logic architectures and their implications for programmable hardware. One programmable hardware board implementation is detailed.
NASA Technical Reports Server (NTRS)
Moses, John F.; Memarsadeghi, Nargess; Overoye, David; Littlefield, Bryan
2017-01-01
The Global Learning and Observation to Benefit the Environment (GLOBE) Data and Information System supports an international science and education program with capabilities to accept local environment observations and to archive, display, and visualize them along with global satellite observations. Since its inception twenty years ago, the Web and database system has been upgraded periodically to accommodate changes in technology and the steady growth of GLOBE's education community and collection of observations. Recently, near the end of life of the system hardware, new commercial computer platform options were explored and a decision was made to utilize Cloud services. The GLOBE DIS has now been fully deployed and maintained using Amazon Cloud services for over two years. This paper reviews the early risks, actual challenges, and some unexpected findings that resulted from the GLOBE DIS migration. We describe the plans, cost drivers, and estimates, highlight adjustments that were made, and suggest improvements. We present the trade studies for provisioning, load balancing, networks, processing, and storage, as well as production, staging, and backup systems. We outline the migration team's skills and required level of effort for the transition, and the resulting changes in overall maintenance and operations activities. Examples include incremental adjustments to processing capacity and frequency of backups, and efforts previously expended on hardware maintenance that were refocused onto application-specific enhancements.
Human factors in the Naval Air Systems Command: Computer based training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seamster, T.L.; Snyder, C.E.; Terranova, M.
1988-01-01
Military standards applied to private sector contracts have a substantial effect on the quality of Computer Based Training (CBT) systems procured for the Naval Air Systems Command. This study evaluated standards regulating the following areas in CBT development and procurement: interactive training systems, cognitive task analysis, and CBT hardware. The objective was to develop some high-level recommendations for evolving standards that will govern the next generation of CBT systems. One of the key recommendations is that there be an integration of the instructional systems development, human factors engineering, and software development standards. Recommendations were also made for task analysis and CBT hardware standards. (9 refs., 3 figs.)
Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks
NASA Astrophysics Data System (ADS)
Karpov, Kirill; Fedotova, Irina; Siemens, Eduard
2017-07-01
In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O-, and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimate of timing behavior, which is essential for real-time and performance-critical applications such as image processing or real-time control.
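The sleep-precision side of such a study can be reproduced in a few lines; the sketch below measures how far the OS overshoots a requested 1 ms sleep, and comparing its output on host versus guest gives a rough (Python-level, not paper-equivalent) view of the virtualization cost.

```python
import statistics
import time

def sleep_overshoot_us(request_ms=1.0, samples=1000):
    """Distribution of how far the OS overshoots a requested sleep, in
    microseconds; run on host and guest to compare timing behavior."""
    overshoots = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        time.sleep(request_ms / 1000.0)
        elapsed_us = (time.perf_counter_ns() - t0) / 1000.0
        overshoots.append(elapsed_us - request_ms * 1000.0)
    return overshoots

data = sleep_overshoot_us()
print(f"median {statistics.median(data):.1f} us, "
      f"p99 {sorted(data)[int(0.99 * len(data))]:.1f} us, "
      f"max {max(data):.1f} us")
```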
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.
Digital ultrasonics signal processing: Flaw data post processing use and description
NASA Technical Reports Server (NTRS)
Buel, V. E.
1981-01-01
A modular system composed of two sets of tasks which interpret the flaw data and allow compensation for transducer characteristics is described. The hardware configuration consists of two main units. A DEC LSI-11 processor, running under the RT-11 single job, version 2C-02 operating system, controls the scanner hardware and the ultrasonic unit. A DEC PDP-11/45 processor, also running under the RT-11, version 2C-02, operating system, stores, processes, and displays the flaw data. The software developed, the Ultrasonics Evaluation System, is divided into two categories: transducer characterization and flaw classification. Each category is divided further into two functional tasks: a data acquisition task and a postprocessor task. The flaw characterization collects data, compresses it, and writes it to a disk file. The data is then processed by the flaw classification postprocessing task. The use and operation of a flaw data postprocessor is described.
Risk Management of Digital Information: A File Format Investigation.
ERIC Educational Resources Information Center
Lawrence, Gregory W.; Kehoe, William R.; Rieger, Oya Y.; Walters, William H.; Kenney, Anne R.
Given the right hardware and software, digital information is easy to create, copy, and disseminate; however it is very hard to preserve. At present, it is impossible to guarantee the longevity and legibility of digital information for even one human generation. Migration can be defined as the periodic transfer of digital materials from one…
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
…programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze… allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with… There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP…
Migration impact on load balancing - an experience on Amoeba
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, W.; Socko, P.
1996-12-31
Load balancing has been extensively studied by simulation, and positive results were reported in most of that research. With the increasing availability of distributed systems, a few experiments have been carried out on different systems. These experimental studies depend on either task initiation alone or task initiation plus task migration. In this paper, we present the results of a study of load balancing using a centralized policy to manage the load on a set of processors, carried out on an Amoeba system consisting of a set of 386s linked by 10 Mbps Ethernet. On one hand, the results indicate the necessity of a load balancing facility for a distributed system. On the other hand, the results question the impact of using process migration to increase system performance under the configuration used in our experiments.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
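A software reference of the multi-scale Laplacian-of-Gaussian operator that such a camera maps into FPGA logic might look like the following; the scales and threshold are illustrative, and SciPy's `gaussian_laplace` stands in for the hardware convolution.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_log_edges(image: np.ndarray, sigmas=(1.0, 2.0, 4.0), thresh=0.02):
    """LoG response at several scales; sign changes of the response mark
    edges. Returns one boolean edge map per scale."""
    edges = []
    for sigma in sigmas:
        response = gaussian_laplace(image.astype(float), sigma)
        # Edge where the response changes sign horizontally with sufficient
        # magnitude (vertical crossings omitted for brevity).
        sign_change = np.sign(response[:, :-1]) != np.sign(response[:, 1:])
        strong = np.abs(response[:, :-1]) > thresh * np.abs(response).max()
        edge = np.zeros_like(response, dtype=bool)
        edge[:, :-1] = sign_change & strong
        edges.append(edge)
    return edges

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0  # toy square image
for sigma, e in zip((1.0, 2.0, 4.0), multiscale_log_edges(img)):
    print(f"sigma={sigma}: {int(e.sum())} edge pixels")
```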
Trends in computer hardware and software.
Frankenfeld, F M
1993-04-01
Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking the available hardware into account. Tests with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
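The data-parallel case of the thread concept can be sketched as follows: strips of one image are processed by threads sharing the same context; the `threshold` operator is a stand-in, and no claim is made about the speedup the original system achieves.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def threshold(tile: np.ndarray) -> np.ndarray:
    # Stand-in data-parallel operator; a real system would dispatch whole
    # operator graphs, not a single per-strip filter.
    return (tile > tile.mean()).astype(np.uint8)

def run_parallel(image: np.ndarray, op, n_threads: int = 4) -> np.ndarray:
    """Split an image into horizontal strips, apply `op` to each strip in
    its own thread, and reassemble: threads share the image object but
    work on disjoint rows."""
    strips = np.array_split(image, n_threads, axis=0)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(op, strips))
    return np.concatenate(results, axis=0)

img = np.random.default_rng(0).random((512, 512))
out = run_parallel(img, threshold)
print(out.shape, out.dtype)
```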
Automated Tracking of Cell Migration with Rapid Data Analysis.
DuChez, Brian J
2017-09-01
Cell migration is essential for many biological processes including development, wound healing, and metastasis. However, studying cell migration often requires the time-consuming and labor-intensive task of manually tracking cells. To accelerate the task of obtaining coordinate positions of migrating cells, we have developed a graphical user interface (GUI) capable of automating the tracking of fluorescently labeled nuclei. This GUI provides an intuitive user interface that makes automated tracking accessible to researchers with no image-processing experience or familiarity with particle-tracking approaches. Using this GUI, users can interactively determine a minimum of four parameters to identify fluorescently labeled cells and automate acquisition of cell trajectories. Additional features allow for batch processing of numerous time-lapse images, curation of unwanted tracks, and subsequent statistical analysis of tracked cells. Statistical outputs allow users to evaluate migratory phenotypes, including cell speed, distance, displacement, and persistence, as well as measures of directional movement, such as forward migration index (FMI) and angular displacement. © 2017 by John Wiley & Sons, Inc.
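The statistical outputs named above follow from simple trajectory arithmetic; a sketch, assuming positions sampled at a fixed interval and a chemotactic gradient along the y axis (the GUI's own definitions may differ in detail):

```python
import math

def migration_stats(track):
    """Basic migration measures from a list of (x, y) positions sampled at a
    fixed interval: path length, net displacement, persistence, and the
    forward migration index along the assumed gradient (y) axis."""
    steps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    path = sum(math.hypot(dx, dy) for dx, dy in steps)
    net = math.hypot(track[-1][0] - track[0][0], track[-1][1] - track[0][1])
    return {
        "path_length": path,
        "displacement": net,
        "persistence": net / path if path else 0.0,  # directionality ratio
        "fmi_y": (track[-1][1] - track[0][1]) / path if path else 0.0,
    }

track = [(0, 0), (1, 2), (1, 5), (3, 7), (2, 9)]
print(migration_stats(track))
```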
Crew interface with a telerobotic control station
NASA Technical Reports Server (NTRS)
Mok, Eva
1987-01-01
A method for apportioning crew-telerobot tasks has been derived to facilitate the design of a crew-friendly telerobot control station. To identify the most appropriate state-of-the-art hardware for the control station, task apportionment must first be conducted to identify if an astronaut or a telerobot is best to execute the task and which displays and controls are required for monitoring and performance. Basic steps that comprise the task analysis process are: (1) identify space station tasks; (2) define tasks; (3) define task performance criteria and perform task apportionment; (4) verify task apportionment; (5) generate control station requirements; (6) develop design concepts to meet requirements; and (7) test and verify design concepts.
1991-12-01
[Report front matter, December 1991: NASA Lewis Research Center, Cleveland, Ohio 44135; Contract No. NAS3-23773. Contents include Section 3.1, Test Hardware and Facility Description; Appendix V, Drawings and Layouts of Calorimeter Insert and Related Hardware; and Figure 3-1, Integrated Component Evaluator (I.C.E.).]
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.; Gregory, S. T.; Urquhart, J. I. A.
1984-01-01
The use and implementation of Ada (a trademark of the US Dept. of Defense) in distributed environments in which the hardware is assumed to be unreliable were investigated. In particular, the possibility of programming a distributed system entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, was examined, along with the handling of failures occurring in the underlying hardware.
Computer Simulation of a Multiaxis Air-to-Air Tracking Task Using the Optimal Pilot Control Model.
1982-12-01
[Report table of contents: Abstract; Chapter 1, Introduction (1.1 Motivation); Chapter 2 (2.2 Optimal Pilot Control Model and Control Synthesis; 2.3 Pitch Tracking Task; 2.4 Multiaxis…); Chapter 3, Simulation System (3.1 Introduction; 3.2 System Hardware).]
2015-01-27
…placed on the user by the required tasks. Design areas of concern include seating, input and output device location and design, ambient… software, hardware, and workspace design for the test function of operability that influence operator performance in a computer-based system. [Report appendices: A, Sample Design Checklists; B, Sample Task Checklists.]
HPC Programming on Intel Many-Integrated-Core Hardware with MAGMA Port to Xeon Phi
Dongarra, Jack; Gates, Mark; Haidar, Azzam; ...
2015-01-01
This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms for multicore with Intel Xeon Phi coprocessors. In particular, we consider algorithms for solving linear systems. Further, we give an overview of the MAGMA MIC library, an open source, high performance library that incorporates the developments presented here and, more broadly, provides the DLA functionality equivalent to that of the popular LAPACK library while targeting heterogeneous architectures that feature a mix of multicore CPUs and coprocessors. The LAPACK-compliance simplifies the use of the MAGMA MIC library in applications, while providing them with portably performant DLA. High performance is obtained through the use of the high-performance BLAS, hardware-specific tuning, and a hybridization methodology whereby we split the algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware by minimizing data movements and mapping algorithmic requirements to the architectural strengths of the various heterogeneous hardware components. Our methodology and programming techniques are incorporated into the MAGMA MIC API, which abstracts the application developer from the specifics of the Xeon Phi architecture and is therefore applicable to algorithms beyond the scope of DLA.
ERIC Educational Resources Information Center
Lippert, Margaret
2000-01-01
This abstract of a planned session on access to scientific and technical journals addresses policy and standard issues related to long-term archives; digital archiving models; economic factors; hardware and software issues; multi-publisher electronic journal content integration; format considerations; and future data migration needs. (LRW)
A Reasoning Hardware Platform for Real-Time Common-Sense Inference
Barba, Jesús; Santofimia, Maria J.; Dondo, Julio; Rincón, Fernando; Sánchez, Francisco; López, Juan Carlos
2012-01-01
Enabling Ambient Intelligence systems to understand the activities taking place in a supervised context is a rather complicated task. Moreover, this task cannot be successfully addressed while overlooking the mechanisms (common-sense knowledge and reasoning) that entitle us, as human beings, to successfully undertake it. This work is based on the premise that Ambient Intelligence systems will be able to understand and react to context events if common-sense capabilities are embodied in them. However, some difficulties need to be resolved before common-sense capabilities can be fully deployed to Ambient Intelligence. This work presents a hardware-accelerated implementation of a common-sense knowledge-base system intended to improve response time and efficiency. PMID:23012540
Design and evaluation of a fault-tolerant multiprocessor using hardware recovery blocks
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1982-01-01
A fault-tolerant multiprocessor with a rollback recovery mechanism is discussed. The rollback mechanism is based on the hardware recovery block which is a hardware equivalent to the software recovery block. The hardware recovery block is constructed by consecutive state-save operations and several state-save units in every processor and memory module. When a fault is detected, the multiprocessor reconfigures itself to replace the faulty component and then the process originally assigned to the faulty component retreats to one of the previously saved states in order to resume fault-free execution. A mathematical model is proposed to calculate both the coverage of multi-step rollback recovery and the risk of restart. A performance evaluation in terms of task execution time is also presented.
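A software analog of the rollback scheme, keeping a bounded number of saved states and retreating to one when a fault is detected; the class and retention depth are illustrative assumptions, whereas the real mechanism is hardware state-save units in each processor and memory module.

```python
import copy

class RecoveryBlockProcess:
    """Software analog of the hardware recovery block: keep the last few
    saved states and roll back to one of them when a fault is detected."""

    def __init__(self, state, depth=3):
        self.state = state
        self.saved = [copy.deepcopy(state)]
        self.depth = depth  # number of retained state-save units

    def save_state(self):
        self.saved.append(copy.deepcopy(self.state))
        self.saved = self.saved[-self.depth:]

    def rollback(self, steps_back=1):
        """Retreat to a previously saved state; if the fault predates all
        retained saves, the task must restart (the 'risk of restart')."""
        if steps_back > len(self.saved):
            raise RuntimeError("no sufficiently old save: restart required")
        self.state = copy.deepcopy(self.saved[-steps_back])
        return self.state

p = RecoveryBlockProcess({"counter": 0})
for i in range(1, 4):
    p.state["counter"] = i
    p.save_state()
p.state["counter"] = 99   # a fault corrupts the current state
print(p.rollback())       # back to the most recent save: counter == 3
```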
Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks
NASA Technical Reports Server (NTRS)
Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.
2017-01-01
The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms determine motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, it is acknowledged that pinpointing any deficiencies that might arise to either hardware or software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), which is now a required test in the FAA's part 60 regulations for new devices, evaluating the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper is trying to refine. Sinacori suggested simple criteria which are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work has used transport aircraft in their studies, the majority used fighter aircraft or helicopters. Those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks. The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.
Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing
1994-07-01
implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning The...nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing
Operator Workload: Comprehensive Review and Evaluation of Operator Workload Methodologies
1989-06-01
checking for system failures or emergency conditions. It seems fair to characterize the changes in operator functions as more mental or cognitive in nature ...that the operator, the system hardware, and the environment all interact in affecting performance and this interaction can change the nature of the task...(a) classifying the nature of the operator tasks and (b) classifying workload assessment techniques. Task taxonomies are useful because some workload
Extravehicular Activity training and hardware design considerations
NASA Technical Reports Server (NTRS)
Thuot, Pierre J.; Harbaugh, Gregory J.
1993-01-01
Designing hardware that EVA astronauts can operate successfully for the tasks required to assemble and maintain Space Station Freedom demands a thorough understanding of human factors, of the capabilities and limitations of the space-suited astronaut, and of the effects of the microgravity environment on the crew member's capabilities and on the overhead associated with EVA. This paper describes various training methods and facilities being designed to train EVA astronauts for Space Station assembly and maintenance, taking the above factors into account. Particular attention is given to user-friendly hardware design for EVA and to recent EVA flight experience.
Special environmental control and life support equipment test analyses and hardware
NASA Technical Reports Server (NTRS)
Callahan, David M.
1995-01-01
This final report summarizes events under contract NAS8-38250, 'Special Environmental Control and Life Support Systems Test Analysis and Hardware', covering both technical and programmatic development. Key to the success of this contract was the evaluation of Environmental Control and Life Support Systems (ECLSS) test results via sophisticated laboratory analysis capabilities. The history of the contract, including all subcontracts, is followed by the support and development performed under each task.
NASA Technical Reports Server (NTRS)
Mejzak, R. S.
1980-01-01
The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors in the configuration to be varied without any modifications to the control structure. Decompositional elements of the DFT application function in terms of tasks and subtasks are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together to form a multiple-processing system by means of a shared memory facility. This facility consists of hardware which provides a bus structure to enable up to six microcomputers to be interconnected. It provides polling and arbitration logic so that only one processor has access to shared memory at any one time.
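A minimal software analogue of the scheme described, with workers claiming DFT output indices from a shared subtask pointer guarded by a lock (a software stand-in for the hardware lock on shared memory); names and granularity are illustrative:

    import cmath
    import threading

    N = 64
    x = [complex(i % 7, 0) for i in range(N)]   # arbitrary input signal
    X = [0j] * N                                # shared result area
    task_ptr = 0                                # shared subtask pointer
    lock = threading.Lock()                     # stand-in for the hardware lock

    def worker():
        global task_ptr
        while True:
            with lock:                          # one processor in shared memory at a time
                k = task_ptr
                if k >= N:
                    return
                task_ptr += 1
            # compute one DFT bin outside the critical section
            X[k] = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()

As in the paper's scheme, the worker count can be changed freely without touching the control structure, since coordination lives entirely in the shared pointer and lock.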
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power while using as few hardware resources as possible, together with a mechanism for building systems from this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
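The idea of a generic interface for window-based operators can be sketched in software: each module consumes a 3x3 neighbourhood and produces one pixel, so modules compose. The interface and operator names below are assumptions for illustration, not the authors' API:

    import numpy as np

    def apply_window_op(img, op):
        """Slide a 3x3 window over img and apply op to each neighbourhood."""
        h, w = img.shape
        out = np.zeros_like(img, dtype=float)
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                out[r, c] = op(img[r - 1:r + 2, c - 1:c + 2])
        return out

    sobel_x = lambda win: float(np.sum(win * np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])))
    median3 = lambda win: float(np.median(win))

    img = np.random.randint(0, 256, (32, 32)).astype(float)
    edges = apply_window_op(apply_window_op(img, median3), sobel_x)  # chained modules

In the FPGA architecture the same interface would be realised with streaming line buffers rather than nested loops.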
Multicore Considerations for Legacy Flight Software Migration
NASA Technical Reports Server (NTRS)
Vines, Kenneth; Day, Len
2013-01-01
In this paper we will discuss potential benefits and pitfalls when considering a migration from an existing single core code base to a multicore processor implementation. The results of this study present options that should be considered before migrating fault managers, device handlers and tasks with time-constrained requirements to a multicore flight software environment. Possible future multicore test bed demonstrations are also discussed.
Facilitating preemptive hardware system design using partial reconfiguration techniques.
Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos
2014-01-01
In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. In addition, an asynchronous event can demand immediate attention and thus force launching a reconfiguration process to implement a high-priority task. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is required. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID: 24672292
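The explicit/implicit distinction can be paraphrased in pseudocode; the class, method, and bitstream names below are hypothetical, and a real implementation would drive the FPGA's configuration port rather than print:

    class ReconfigService:
        def reconfigure(self, bitstream: str):
            # In hardware this would feed a partial bitstream to the fabric.
            print(f"loading partial bitstream: {bitstream}")

    svc = ReconfigService()

    # Explicit activation: the event is known to the scheduler in advance.
    for slot, bit in [("t=10ms", "fir_filter.bit"), ("t=50ms", "fft_core.bit")]:
        svc.reconfigure(bit)                     # invoked at its scheduled slot

    # Implicit activation: an unscheduled asynchronous event demands immediate
    # attention, preempting a noncritical region of the fabric.
    def on_async_event(event: str):
        svc.reconfigure(f"{event}_handler.bit")  # triggered by the event itself

    on_async_event("overcurrent")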
System for Anomaly and Failure Detection (SAFD) system development
NASA Technical Reports Server (NTRS)
Oreilly, D.
1992-01-01
This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognizes the failure. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
The Use Of Videography For Three-Dimensional Motion Analysis
NASA Astrophysics Data System (ADS)
Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.
1988-02-01
Special video path editing capabilities, with custom hardware and software, have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (e.g., walking, throwing, swinging a golf club). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data are then displayed for appropriate review and/or comparison.
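The DLT step the abstract refers to reduces to a linear least-squares problem: each camera contributes two equations in the unknown point (x, y, z) from its eleven DLT parameters L1..L11. A sketch with placeholder calibrations (not real values):

    import numpy as np

    def dlt_reconstruct(cams, uvs):
        """cams: list of 11-vectors of DLT parameters; uvs: matching (u, v) pairs."""
        A, b = [], []
        for L, (u, v) in zip(cams, uvs):
            # u = (L1 x + L2 y + L3 z + L4) / (L9 x + L10 y + L11 z + 1), likewise v
            A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
            b.append(u - L[3])
            A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
            b.append(v - L[7])
        xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return xyz   # least-squares 3D marker position

With two or more cameras the system is overdetermined, which is what lets the multi-camera path files combine into a single 3D path file.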
Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, S.; Lindtjorn, O.
2017-08-15
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.
Trainable hardware for dynamical computing using error backpropagation through physical media.
Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter
2015-03-24
Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation (a crucial step for tuning such systems towards a specific task) can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.
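A toy numerical version of the reciprocity argument (not the authors' photonic setup): for a linear medium with a symmetric transfer matrix W, the adjoint pass of backpropagation applies W.T, so the same medium can carry errors backwards:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    W = (A + A.T) / 2              # reciprocal (symmetric) linear system
    x = rng.normal(size=5)

    y = np.tanh(W @ x)             # forward pass: medium plus nonlinearity
    e = y - np.ones(5)             # some error signal at the output
    delta = (1 - y ** 2) * e       # backprop through the tanh nonlinearity
    grad_x = W.T @ delta           # adjoint pass; W.T == W, so the same
    assert np.allclose(W.T, W)     # physical propagation realises it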
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Bickford, Mark
1991-01-01
The design and formal verification of a hardware system for a task that is an important component of a fault-tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.
Simulation Control Graphical User Interface Logging Report
NASA Technical Reports Server (NTRS)
Hewling, Karl B., Jr.
2012-01-01
One of the many tasks of my project was to revise the code of the Simulation Control Graphical User Interface (SIM GUI) to enable logging functionality to a file. I was also tasked with developing a script that directed the startup and initialization flow of the various LCS software components, ensuring that a software component does not spin up until all the appropriate dependencies have been configured properly. I also assisted hardware modelers in verifying the configuration of models after they had been upgraded to a new software version, developing code that analyzes the MDL files to determine whether any errors were generated by the upgrade process. Another project assigned to me was supporting the End-to-End Hardware/Software Daily Tag-up meeting.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
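A sketch of the flavour of management script described, assuming a libvirt/KVM stack driven through the virsh CLI; the domain names, hosts, and XML paths are hypothetical:

    import subprocess

    def deploy_vm(xml_path: str, domain: str):
        """Define and start a new virtual machine from a libvirt XML description."""
        subprocess.run(["virsh", "define", xml_path], check=True)
        subprocess.run(["virsh", "start", domain], check=True)

    def live_migrate(domain: str, dest_host: str):
        """Move a running VM to another hypervisor without downtime."""
        subprocess.run(["virsh", "migrate", "--live", domain,
                        f"qemu+ssh://{dest_host}/system"], check=True)

    deploy_vm("/etc/libvirt/qemu/worker01.xml", "worker01")
    live_migrate("worker01", "hypervisor02.example")

Automated restart after a hypervisor failure would follow the same pattern, re-running deploy_vm on a surviving host.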
Avionics Simulation, Development and Software Engineering
NASA Technical Reports Server (NTRS)
Francis, Ronald C.; Settle, Gray; Tobbe, Patrick A.; Kissel, Ralph; Glaese, John; Blanche, Jim; Wallace, L. D.
2001-01-01
This monthly report summarizes the work performed under contract NAS8-00114 for Marshall Space Flight Center in the following tasks: 1) Purchase Order No. H-32831D, Task Order 001A, GPB Program Software Oversight; 2) Purchase Order No. H-32832D, Task Order 002, ISS EXPRESS Racks Software Support; 3) Purchase Order No. H-32833D, Task Order 003, SSRMS Math Model Integration; 4) Purchase Order No. H-32834D, Task Order 004, GPB Program Hardware Oversight; 5) Purchase Order No. H-32835D, Task Order 005, Electrodynamic Tether Operations and Control Analysis; 6) Purchase Order No. H-32837D, Task Order 007, SRB Command Receiver/Decoder; and 7) Purchase Order No. H-32838D, Task Order 008, AVGS/DART SW and Simulation Support
Report of the DoD Joint Service Task Force on Software Problems
1982-07-30
technology, but this lead can quickly vanish much as the steel and automobile industries' leads vanished during the last decade. There would be a signi...number of definitions of firmware are in vogue; the most common state that firmware is: o Reprogrammable hardware o Hardware implementation of...and thus would be expected to be easily reprogrammable. In fact, one of the trade-off considerations would be whether this should be handled as
Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL
NASA Technical Reports Server (NTRS)
Dumas, Joseph D., II
2002-01-01
The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input / output) devices, CAD (computer aided design) models, etc. to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.
DISTA: a portable software solution for 3D compilation of photogrammetric image blocks
NASA Astrophysics Data System (ADS)
Boochs, Frank; Mueller, Hartmut; Neifer, Markus
2001-04-01
A photogrammetric evaluation system used for the precise determination of 3D coordinates from blocks of large metric images will be presented. First, the motivation for the development, which lies in the field of processing tools for photogrammetric evaluation tasks, is shown. As the use and availability of digital metric images rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either highly specialized ones, founded on high-end graphics workstations with pricing to match, or simple ones with restricted measuring functionality. A new concept is shown that avoids special high-end graphics hardware while providing a complete processing chain for all elementary photogrammetric tasks, ranging from preparatory steps through the formation of image blocks to automatic and interactive 3D evaluation within digital stereo models. The presented system is based on PC hardware equipped with off-the-shelf graphics boards and uses an object-oriented design. The specific needs of a flexible measuring system and the corresponding requirements which have to be met by the system are shown. Important aspects such as modularity and hardware independence, and their value for the solution, are discussed. The design of the software is presented, and first results with a prototype realised on a powerful PC hardware configuration are featured.
Automatic Thread-Level Parallelization in the Chombo AMR Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christen, Matthias; Keen, Noel; Ligocki, Terry
2011-05-26
The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
Time-lapse microscopy and image processing for stem cell research: modeling cell migration
NASA Astrophysics Data System (ADS)
Gustavsson, Tomas; Althoff, Karin; Degerman, Johan; Olsson, Torsten; Thoreson, Ann-Catrin; Thorlin, Thorleif; Eriksson, Peter
2003-05-01
This paper presents hardware and software procedures for automated cell tracking and migration modeling. A time-lapse microscopy system equipped with a computer-controllable motorized stage was developed. The performance of this stage was improved by incorporating software algorithms for stage motion displacement compensation and auto-focus. The microscope is suitable for in-vitro stem cell studies and allows for multiple cell culture image sequence acquisition. This enables comparative studies concerning the rate of cell splits, average cell motion velocity, cell motion as a function of cell sample density, and many other properties. Several cell segmentation procedures are described, as well as a cell tracking algorithm. Statistical methods for describing cell migration patterns are presented. In particular, the Hidden Markov Model (HMM) was investigated. Results indicate that if the cell motion can be described as a non-stationary stochastic process, then the HMM can adequately model aspects of its dynamic behavior.
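A minimal two-state HMM in the spirit of the migration model, with cells alternating between "resting" and "migrating" states that emit Gaussian frame-to-frame displacements; all parameters are illustrative, not fitted values from the study:

    import numpy as np

    states = ["rest", "migrate"]
    log_trans = np.log([[0.9, 0.1], [0.2, 0.8]])               # assumed state persistence
    means, stds = np.array([0.5, 3.0]), np.array([0.4, 1.0])   # step length per state

    def log_emit(d):
        # Gaussian log-likelihood of displacement d under each state
        return -0.5 * ((d - means) / stds) ** 2 - np.log(stds * np.sqrt(2 * np.pi))

    def viterbi(displacements):
        v = np.log([0.5, 0.5]) + log_emit(displacements[0])
        back = []
        for d in displacements[1:]:
            scores = v[:, None] + log_trans   # scores[i, j]: arrive in j via i
            back.append(scores.argmax(axis=0))
            v = scores.max(axis=0) + log_emit(d)
        path = [int(v.argmax())]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        return [states[s] for s in reversed(path)]

    print(viterbi(np.array([0.4, 0.6, 2.8, 3.5, 2.9, 0.5])))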
A portable fetal heart monitor and its adaption to the detection of certain prenatal abnormalities
NASA Technical Reports Server (NTRS)
Zahorian, Stephen A.
1994-01-01
There were three primary objectives for this task: (1) investigation of the feasibility of making the fetal heart rate monitor portable, using a laptop computer; (2) improvements in the signal processing for the monitor; and (3) implementation of a real-time hardware/software system. These tasks have been completed as discussed in the following section.
1972-01-01
This chart details Skylab's Time and Motion experiment (M151), a medical study to measure performance differences between tasks undertaken on Earth and the same tasks performed by Skylab crew members in orbit. Data collected from this experiment evaluated crew members' zero-gravity behavior for designs and work programs for future space exploration. The Marshall Space Flight Center had program management responsibility for the development of Skylab hardware and experiments.
TDRSS system configuration study for space shuttle program
NASA Technical Reports Server (NTRS)
1978-01-01
This study was set up to assure that operation of the shuttle orbiter communications systems met the program requirements when subjected to electrical conditions similar to those to be encountered during the operational mission. The test program was intended to implement an integrated test bed consisting of applicable orbiter, EVA, payload simulator, STDN, and AF/SCF equipment, as well as the TDRSS equipment. The stated intention of the Task 501 Program was to configure the test bed with prototype hardware for a system development test and production hardware for a system verification test. In the case of TDRSS, when the hardware was not available, simulators whose functional performance was certified to meet the appropriate end-item specifications were used.
One Size Does Not Fit All: Human Failure Event Decomposition and Task Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald Laurids Boring, PhD
2014-09-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered or exacerbated by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications. In this paper, I first review top-down and bottom-up approaches for defining HFEs and then present a seven-step guideline to ensure a task analysis completed as part of human error identification decomposes to a level suitable for use as HFEs. This guideline illustrates an effective way to bridge the bottom-up approach with top-down requirements.
1997-01-17
SHOW Direct Control Systems (6) Betacam SP Players (Video Backup) (6) Betacam SP Recorders (Show Record) (2) CRV Laser Disc Rec/Players (GoTo) (14) Multi...1K Scoops (3) 1K DP's (1) Schedule 40 Light Pole (Flown) Control Console Dimming Cables & Distribution PRODUCTION HARDWARE (1) Sony Betacam SP...Shooters Package (1) Folsom Hi-Res Video Scan Converter (20) Betacam SP Video Tapes STAGING HARDWARE (1) Custom Screen Divider / Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, Kazutomo; Llopis, Pablo; Zhang, Kaicheng
As CMOS scaling nears its end, parameter variations (process, temperature, and voltage) are becoming a major concern. To overcome parameter variations and provide stability, modern processors are becoming dynamic, opportunistically adjusting voltage and frequency based on thermal and energy constraints, which negatively impacts traditional bulk-synchronous parallelism-minded hardware and software designs. As node-level architecture grows in complexity, implementing variation control mechanisms only in hardware can be a challenging task. In this paper we investigate a software strategy to manage hardware-induced variations, leveraging low-level monitoring and controlling mechanisms.
NASA Technical Reports Server (NTRS)
1973-01-01
A specification catalog to define the equipment to be used for conducting life sciences experiments in a space laboratory is presented. The specification sheets list the purpose of the equipment item, and any specific technical requirements which can be identified. The status of similar hardware for ground use is stated with comments regarding modifications required to achieve spaceflight qualified hardware. Pertinent sketches, commercial catalog sheets, or drawings of the applicable equipment are included.
JSC Wireless Sensor Network Update
NASA Technical Reports Server (NTRS)
Wagner, Robert
2010-01-01
Sensor nodes are composed of three basic components:
- radio module: COTS radio module implementing a standardized WSN protocol; treated as a WSN modem by the main board
- main board: contains the application processor (TI MSP430 microcontroller), memory, and power supply; responsible for sensor data acquisition, pre-processing, and task scheduling; re-used in every application, with a growing library of embedded C code
- sensor card: contains application-specific sensors, data conditioning hardware, and any advanced hardware not built into the main board (DSPs, faster A/D, etc.); requires (re-)development for each application
NASA Technical Reports Server (NTRS)
Miller, Darcy
2000-01-01
Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "The debris that is left in or around flight hardware, where it could cause damage to that flight hardware" (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is no single solution that helps reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible in the form of specialty hand tools that secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools while performing tasks with a risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers/managers, and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the differences in performance of the three participant groups.
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
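The kind of timing check the paper motivates is easy to reproduce: measure how much a nominally 1 ms software loop jitters on a non-real-time operating system (a generic sketch, not the authors' code):

    import time

    target_s = 0.001                          # intended 1 ms frame
    errors_us = []
    next_t = time.perf_counter() + target_s
    for _ in range(1000):
        while time.perf_counter() < next_t:   # busy-wait; sleep() is too coarse
            pass
        errors_us.append((time.perf_counter() - next_t) * 1e6)
        next_t += target_s

    print(f"mean overshoot: {sum(errors_us)/len(errors_us):.1f} us, "
          f"max: {max(errors_us):.1f} us")

Busy-waiting against a high-resolution clock is one of the "proper measures" a high-level environment can take to keep alignment errors near the 1 ms scale.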
Criteria-based evaluation of group 3 level memory telefacsimile equipment for interlibrary loan.
Bennett, V M; Wood, M S; Malcom, D L
1990-01-01
The Interlibrary Loan, Document Delivery, and Union List Task Force of the Health Sciences Libraries Consortium (HSLC), with nineteen libraries located in Philadelphia, Pittsburgh, and Hershey, Pennsylvania, and in Delaware, accepted the charge of evaluating and recommending for purchase telefacsimile hardware to further interlibrary loan among HSLC members. To allow a thorough and scientific evaluation of group 3 level telefacsimile equipment, the task force identified ninety-six hardware features, which were grouped into nine broad criteria. These features formed the basis of a weighted analysis that identified three final candidates, with one model recommended to the HSLC board. This article details each of the criteria and discusses features in terms of library applications. The evaluation grid developed in the weighted analysis process should aid librarians charged with the selection of level 3 telefacsimile equipment. PMID: 2328361
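The weighted analysis described reduces to multiplying per-criterion scores by criterion weights and summing per candidate. The criterion names, weights, and scores below are invented placeholders, not the task force's data:

    weights = {"image quality": 0.20, "transmission speed": 0.15, "memory": 0.15,
               "paper handling": 0.10, "ease of use": 0.15, "compatibility": 0.10,
               "reliability": 0.05, "vendor support": 0.05, "cost": 0.05}

    candidates = {
        "Model A": dict(zip(weights, [4, 3, 5, 3, 4, 4, 3, 4, 2])),
        "Model B": dict(zip(weights, [3, 4, 3, 4, 3, 5, 4, 3, 4])),
    }

    for name, scores in candidates.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{name}: weighted score = {total:.2f}")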
Space Environmental Effects on Materials and Processes
NASA Technical Reports Server (NTRS)
Sabbann, Leslie M.
2009-01-01
The Materials and Processes (M&P) Branch of the Structural Engineering Division at Johnson Space Center (JSC) seeks to uphold the production of dependable space hardware through materials research, which fits into NASA's purpose of advancing human exploration, use, and development of space. The Space Environmental Effects projects fully support these Agency goals. Two tasks were assigned to support M&P. Both assignments furthered the research of material behavior outside Earth's atmosphere in order to determine which materials are most durable and safe to use in space, thereby mitigating risks. One project, the Materials on International Space Station Experiments (MISSE) task, was to compile data from International Space Station (ISS) experiments to pinpoint beneficial space hardware. The other project researched the effects of exposure to high doses of radiation on composite materials for a Lunar habitat project.
High Pressure, Earth-storable Rocket Technology. Volume 1
NASA Technical Reports Server (NTRS)
Jassowski, D. M.
1997-01-01
The effect of elevated chamber pressure on combustion efficiency and heat transfer has been determined at the 100 lbf (445 N) thrust level for nitrogen tetroxide propellants. Measurements were made up to 500 psia (3.45 MPa) with testbed hardware; tests at 100 psia (0.690 MPa) and 250 psia (1.72 MPa) were made with radiation-cooled rhenium chambers. The first task of the program served to determine desirable thruster applications and operating conditions: high total impulse, i.e., communication satellite or spacecraft bus axial engines, at chamber pressures up to 250 psia (1.72 MPa) pressure-fed, or up to 500 psia (3.45 MPa) pump-fed. The hardware modifications and testing required to obtain the data were determined in Task 2, which included design-support hot fire tests; supplemental hardware, including a 250 psia (1.72 MPa) Pc rhenium chamber and a 20% fuel-film-cooled platelet injector, was fabricated in Task 3. Testing showed that satisfactory operation of Ir-Re radiation-cooled chambers is assured at pressures up to 250 psia and may be possible up to 500 psia. The heat transfer data obtained show good correlation with throat Reynolds number and are generally under values given by the simplified Bartz equation; chamber equilibrium temperatures matched predicted values. Preliminary optimization of trip configuration and mixture ratio was made; Isp performance from thrust measurements was within 1% of predicted values. Stability, compatibility, and front-end thermal management were determined to be satisfactory.
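For reference, the "simplified Bartz equation" mentioned is commonly written for the gas-side heat transfer coefficient as (standard form supplied here, not reproduced from the report):

    h_g = \frac{0.026}{D_t^{0.2}}
          \left(\frac{\mu^{0.2} c_p}{Pr^{0.6}}\right)
          \left(\frac{p_c}{c^*}\right)^{0.8}
          \left(\frac{D_t}{r_c}\right)^{0.1}
          \left(\frac{A_t}{A}\right)^{0.9} \sigma

where D_t is the throat diameter, r_c the throat radius of curvature, p_c the chamber pressure, c* the characteristic velocity, A_t/A the local area ratio, and sigma a correction factor for property variation across the boundary layer.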
High Pressure, Earth-storable Rocket Technology. Volume 2
NASA Technical Reports Server (NTRS)
Jassowski, D. M.
1997-01-01
The effect of elevated chamber pressure on combustion efficiency and heat transfer has been determined at the 100 lbf (445 N) thrust level for nitrogen tetroxide propellants. Measurements were made up to 500 psia (3.45 MPa) with testbed hardware; tests at 100 psia (0.690 MPa) and 250 psia (1.72 MPa) were made with radiation-cooled rhenium chambers. The first task of the program served to determine desirable thruster applications and operating conditions: high total impulse, i.e., communication satellite or spacecraft bus axial engines, at chamber pressures up to 250 psia (1.72 MPa) pressure-fed, or up to 500 psia (3.45 MPa) pump-fed. The hardware modifications and testing required to obtain the data were determined in Task 2, which included design-support hot fire tests; supplemental hardware, including a 250 psia (1.72 MPa) Pc rhenium chamber and a 20% fuel-film-cooled platelet injector, was fabricated in Task 3. Testing showed that satisfactory operation of Ir-Re radiation-cooled chambers is assured at pressures up to 250 psia and may be possible up to 500 psia. The heat transfer data obtained show good correlation with throat Reynolds number and are generally under values given by the simplified Bartz equation; chamber equilibrium temperatures matched predicted values. Preliminary optimization of trip configuration and mixture ratio was made; Isp performance from thrust measurements was within 1% of predicted values. Stability, compatibility, and front-end thermal management were determined to be satisfactory.
High Pressure, Earth-Storable Rocket Technology. Volume 3; Appendices C and D
NASA Technical Reports Server (NTRS)
Jassowski, D. M.
1997-01-01
The effect of elevated chamber pressure on combustion efficiency and heat transfer has been determined at the 100 lbf (445 N) thrust level for nitrogen tetroxide propellants. Measurements were made up to 500 psia (3.45 MPa) with testbed hardware; tests at 100 psia (0.690 MPa) and 250 psia (1.72 MPa) were made with radiation-cooled rhenium chambers. The first task of the program served to determine desirable thruster applications and operating conditions: high total impulse, i.e., communication satellite or spacecraft bus axial engines, at chamber pressures up to 250 psia (1.72 MPa) pressure-fed, or up to 500 psia (3.45 MPa) pump-fed. The hardware modifications and testing required to obtain the data were determined in Task 2, which included design-support hot fire tests; supplemental hardware, including a 250 psia (1.72 MPa) Pc rhenium chamber and a 20% fuel-film-cooled platelet injector, was fabricated in Task 3. Testing showed that satisfactory operation of Ir-Re radiation-cooled chambers is assured at pressures up to 250 psia and may be possible up to 500 psia. The heat transfer data obtained show good correlation with throat Reynolds number and are generally under values given by the simplified Bartz equation; chamber equilibrium temperatures matched predicted values. Preliminary optimization of trip configuration and mixture ratio was made; Isp performance from thrust measurements was within 1% of predicted values. Stability, compatibility, and front-end thermal management were determined to be satisfactory.
Kingston, David C; Riddell, Maureen F; McKinnon, Colin D; Gallagher, Kaitlin M; Callaghan, Jack P
2016-02-01
We evaluated the effect of work surface angle and input hardware on upper-limb posture when using a hybrid computer workstation. Offices use sit-stand and/or tablet workstations to increase worker mobility. These workstations may have negative effects on upper-limb joints by increasing time spent in non-neutral postures, but a hybrid standing workstation may improve working postures. Fourteen participants completed office tasks in four workstation configurations: a horizontal or sloped 15° working surface with computer or tablet hardware. Three-dimensional right upper-limb postures were recorded during three tasks: reading, form filling, and writing e-mails. Amplitude probability distribution functions determined the median and range of upper-limb postures. The sloped-surface tablet workstation decreased wrist ulnar deviation by 5° relative to the horizontal-surface computer configuration when reading. When using computer input devices (keyboard and mouse), the shoulder, elbow, and wrist were closest to neutral joint postures when working on a horizontal work surface. The elbow was 23° and 15° more extended, whereas the wrist was 6° less ulnar deviated, when reading compared to typing forms or e-mails. We recommend that the horizontal-surface computer configuration be used for typing and the sloped-surface tablet configuration be used for intermittent reading tasks in this hybrid workstation. Offices with mobile employees could use this workstation for alternating their upper-extremity postures; however, other aspects of the device need further investigation.
DOE/JPL advanced thermionic technology program
NASA Technical Reports Server (NTRS)
1979-01-01
Progress made in different tasks of the advanced thermionic technology program is described. The tasks include surface and plasma investigations (surface characterization, spectroscopic plasma experiments, and converter theory); low temperature converter development (tungsten emitter, tungsten oxide collector and tungsten emitter, nickel collector); component hardware development (hot shell development); flame-fired silicon carbide converters; high temperature and advanced converter studies; postoperational diagnostics; and correlation of design interfaces.
1970-01-01
This 1970 photograph shows Skylab's Time and Motion experiment (M151) control unit, a medical study to measure performance differences between tasks undertaken on Earth and the same tasks performed by Skylab crew members in orbit. Data collected from this experiment evaluated crew members' zero-gravity behavior for designs and work programs for future space exploration. The Marshall Space Flight Center had program management responsibility for the development of Skylab hardware and experiments.
Apollo experience report: Engineering and analysis mission support
NASA Technical Reports Server (NTRS)
Fricke, R. W., Jr.
1975-01-01
The tasks performed by the team of specialists that evaluated hardware performance during prelaunch checkout and in-flight operation are discussed. The organizational structure, operational procedures, and interfaces, as well as the facilities and software required to perform these tasks, are covered. The scope of the service performed by the team and the evaluation philosophy are described. Summaries of problems and their resolution are included as appendixes.
The Secure Distributed Operating System Design Project
1988-06-01
a diverse group of people. Its organization isolates different aspects of the project, such as expected results, preliminary results, and technical...modeled after these procedures. Automation: computers are commonly used to automate tasks previously performed by people; many of these tasks are...people commonly considered the threats anticipated to the system and mechanisms that are used to prevent those threats. Both hardware and software
Comparative Effects of Antihistamines on Aircrew Mission Effectiveness under Sustained Operations
1992-06-01
measures consist mainly of process measures. Process measures are measures of activities used to accomplish the mission and produce the final results...They include task completion times and response variability, and information processing rates as they relate to unique task assignment. Performance...contains process measures that assess the individual contributions of hardware/software and human components to overall system performance. Measures
Vacuum Gas Tungsten Arc Welding, phase 1
NASA Astrophysics Data System (ADS)
Weeks, J. L.; Krotz, P. D.; Todd, D. T.; Liaw, Y. K.
1995-03-01
This two-year program will investigate Vacuum Gas Tungsten Arc Welding (VGTAW) as a method to modify or improve the weldability of normally difficult-to-weld materials. VGTAW appears to offer a significant improvement in weldability because of the clean environment and the lower heat input needed. The overall objective of the program is to develop the VGTAW technology and implement it into a manufacturing environment, resulting in lower cost, better quality, and higher reliability aerospace components for the Space Shuttle and other NASA space systems. Phase 1 of this program was aimed at demonstrating the process's ability to weld normally difficult-to-weld materials. Phase 2 will focus on further evaluation, a hardware demonstration, and a plan to implement VGTAW technology into a manufacturing environment. During Phase 1, the following tasks were performed: (1) Task 11000, Facility Modification - an existing vacuum chamber was modified and adapted to a GTAW power supply; (2) Task 12000, Materials Selection - four difficult-to-weld materials typically used in the construction of aerospace hardware were chosen for study; (3) Task 13000, VGTAW Experiments - welding experiments were conducted under vacuum using the hollow tungsten electrode, followed by evaluation. As a result of this effort, two materials, NARloy Z and Incoloy 903, were downselected for further characterization in Phase 2; and (4) Task 13100, Aluminum-Lithium Weld Studies - this task was added to the original work statement to investigate the effects of vacuum welding and weld pool vibration on aluminum-lithium alloys.
When do letter features migrate? A boundary condition for feature-integration theory.
Butler, B E; Mewhort, D J; Browse, R A
1991-01-01
Feature-integration theory postulates that a lapse of attention will allow letter features to change position and to recombine as illusory conjunctions (Treisman & Paterson, 1984). To study such errors, we used a set of uppercase letters known to yield illusory conjunctions in each of three tasks. The first, a bar-probe task, showed whole-character mislocations but not errors based on feature migration and recombination. The second, a two-alternative forced-choice detection task, allowed subjects to focus on the presence or absence of subletter features and showed illusory conjunctions based on feature migration and recombination. The third was also a two-alternative forced-choice detection task, but we manipulated the subjects' knowledge of the shape of the stimuli: In the case-certain condition, the stimuli were always in uppercase, but in the case-uncertain condition, the stimuli could appear in either upper- or lowercase. Subjects in the case-certain condition produced illusory conjunctions based on feature recombination, whereas subjects in the case-uncertain condition did not. The results suggest that when subjects can view the stimuli as feature groups, letter features regroup as illusory conjunctions; when subjects encode the stimuli as letters, whole items may be mislocated, but subletter features are not. Thus, illusory conjunctions reflect the subject's processing strategy, rather than the architecture of the visual system.
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Vestibular Function Research (VFR) experiment. Phase B: Design definition study
NASA Technical Reports Server (NTRS)
1978-01-01
The Vestibular Function Research (VFR) Experiment was established to investigate the neurosensory and related physiological processes believed to be associated with the space flight nausea syndrome and to develop logical means for its prediction, prevention, and treatment. The VFR Project consists of ground and spaceflight experimentation using frogs as specimens. The Phase B Preliminary Design Study provided for the preliminary design of the experiment hardware, preparation of performance and hardware specifications and a Phase C/D development plan, establishment of STS (Space Transportation System) interfaces and mission operations, and the study of a variety of hardware, experiment, and mission options. The study consisted of three major tasks: (1) mission mode trade-off; (2) conceptual design; and (3) preliminary design.
Crew Health Care System (CHeCS) Design Research, Documentations, and Evaluations
NASA Technical Reports Server (NTRS)
Clement, Bethany M.
2011-01-01
The Crew Health Care System (CHeCS) is a group within the Space Life Science Directorate (SLSD) that focuses on the overall health of astronauts by reinforcing its three divisions - the Environmental Maintenance System (EMS), the Countermeasures System (CMS), and the Health Maintenance System (HMS). This internship provided the opportunity to gain knowledge, experience, and skills in CHeCS engineering and operations tasks. Varied tasks allowed occasions to work independently, network to get things done, and show leadership abilities. Specific exercises included reviewing hardware certification, operations, and documentation within the ongoing Med Kit Redesign (MKR) project, and learning, writing, and working through various common pieces of paperwork used in the engineering and design process. Another project focused on the distribution of various pieces of hardware to off-site research facilities with an interest in space flight health care. The main focus of this internship, though, was a broad and encompassing understanding of the engineering process, with time spent looking at each individual step in a variety of settings and tasks.
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot, using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. Because the vision system sits inside the loop closure of the robot tracking system, tight throughput and latency constraints are imposed on it that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rate were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
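The per-frame inner loop of such a vision-in-the-loop tracker can be sketched generically: threshold the image, find the target centroid, and emit a proportional correction toward a setpoint. The frame source, threshold, and gain below are placeholders, not values from the RADL system:

    import numpy as np

    def track_step(frame, setpoint=(64, 64), kp=0.1):
        mask = frame > 128                     # segment the bright target
        if not mask.any():
            return None                        # target lost this frame
        rows, cols = np.nonzero(mask)
        centroid = (rows.mean(), cols.mean())
        # proportional command toward the image-centre setpoint
        return (kp * (setpoint[0] - centroid[0]), kp * (setpoint[1] - centroid[1]))

    frame = np.zeros((128, 128)); frame[40:46, 70:76] = 255
    print(track_step(frame))

The throughput and latency constraints the paper mentions come from having to complete this step, plus transport delay, every frame inside the robot's position loop.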
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2017-10-01
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Interchangeable end effector tools utilized on the PFMA
NASA Technical Reports Server (NTRS)
Cody, Joe; Carroll, John; Crow, George; Gierow, Paul; Littles, Jay; Maness, Michael; Morrison, Jim
1992-01-01
An instrumented task board, used for measuring forces applied by the Protoflight Manipulator Arm (PFMA) to the task board, was fabricated and delivered to Marshall Space Flight Center. SRS Technologies phased out the existing IBM-compatible data acquisition system, used with an instrumented task board, and integrated the force-measuring electronic hardware into the Macintosh II data acquisition system. The purpose of this change was to acquire all data with the same time tag, allowing easier and more accurate data reduction in addition to real-time graphics. A three-dimensional optical position sensing system for determining the location of the PFMA's end effector in reference to the center of the instrumented task board was also designed and delivered under this effort. An improved task board was fabricated which included an improved instrumented beam design. The modified design of the task board improved the force/torque measurement system by increasing the sensitivity, reliability, load range and ease of maintenance. A calibration panel for the optical position system was also designed and fabricated. The calibration method developed for the position sensors enhanced the performance of the sensors as well as simplified the installation and calibration procedures required. The modifications made under this effort expanded the capabilities of the task board system. The system developed determines the arm's position relative to the task board and measures the signals to the joints resulting from the operator's control signals in addition to the task board forces. The software and hardware required to calculate and record the position of the PFMA during the performance of tasks with the instrumented task board were defined, designed and delivered to MSFC. PFMA joint input signals can be measured from a breakout box to evaluate the sensitivity or response of the arm operation to control commands. The data processing system provides the capability for post processing of time-history graphics and plots of the PFMA positions, the operator's actions, and the PFMA servo reactions in addition to real-time force and position sensor data presentation.
Complications of deep brain stimulation: a collective review.
Chan, Danny T M; Zhu, Xian Lun; Yeung, Jonas H M; Mok, Vincent C T; Wong, Edith; Lau, Clara; Wong, Rosanna; Lau, Christine; Poon, Wai S
2009-10-01
Since the first deep brain stimulation (DBS) performed for movement disorder more than a decade ago, DBS has become a standard operation for advanced Parkinson's disease. Its indications are expanding to areas of dystonia, psychiatric conditions and refractory epilepsy. Additionally, a new set of DBS-related complications has arisen. Many teams found a slow learning curve for this complication-prone operation. We aimed to investigate complications arising from 100 DBS electrode insertions and their prevention. We performed an audit of all DBS patients for operation-related complications in our centre from 1997 to 2008. Complications were classified into operation-related, hardware-related and stimulation-related. Operation-related complications included intracranial haemorrhages and electrode malposition. Hardware-related complications included fracture of electrodes, electrode migration, infection and erosion. Stimulation-related complications included sensorimotor conditions, psychiatric conditions and life-threatening conditions. From 1997 to the end of 2008, 100 DBS electrodes were inserted in 55 patients for movement disorders, mostly for Parkinson's disease (50 patients). There was one symptomatic cerebral haemorrhage (1%) and two electrode malpositions (2%). Meticulous surgical planning, use of a microdriver and a reliable electrode anchorage device would minimise this group of complications. There were two electrode fractures, one electrode migration and one pulse-generator infection, which contributed to the hardware-related complication rate of 5%. There were no sensorimotor or life-threatening complications in our group. However, three patients suffered from reversible psychiatric symptoms after DBS. DBS is, on the one hand, an effective surgical treatment for movement disorders. On the other hand, it is a complication-prone operation. A dedicated "Movement Disorder Team" consisting of neurologists, neurophysiologists, functional neurosurgeons, neuropsychologists and nursing specialists is essential. Liaison among team members in peri-operative periods and postoperative care is the key to avoiding complications and achieving a successful patient outcome.
Shuttle mission simulator requirements report, volume 1, revision C
NASA Technical Reports Server (NTRS)
Burke, J. F.
1973-01-01
The contractor tasks required to produce a shuttle mission simulator for training crew members and ground personnel are discussed. The tasks will consist of the design, development, production, installation, checkout, and field support of a simulator with two separate crew stations. The tasks include the following: (1) review of spacecraft changes and incorporation of appropriate changes in simulator hardware and software design, and (2) the generation of documentation of design, configuration management, and training used by maintenance and instructor personnel after acceptance for each of the crew stations.
A Summary of Taxonomies of Digital System Failure Modes Provided by the DigRel Task Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu T. L.; Yue M.; Postma, W.
2012-06-25
Recently, the CSNI directed WGRisk to set up a task group called DIGREL to initiate a new task on developing a taxonomy of failure modes of digital components for the purposes of PSA. It is an important step towards standardized digital I&C reliability assessment techniques for PSA. The objective of this paper is to provide a comparison of the failure mode taxonomies provided by the participants. The failure modes are classified in terms of their levels of detail. Software and hardware failure modes are discussed separately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Harold C.; Ibanez, Daniel Alejandro
This report documents the ASC/ATDM Kokkos deliverable "Production Portable Dynamic Task DAG Capability." This capability enables applications to create and execute a dynamic task DAG: a collection of heterogeneous computational tasks with a directed acyclic graph (DAG) of "execute after" dependencies, where tasks and their dependencies are dynamically created and destroyed as tasks execute. The Kokkos task scheduler executes the dynamic task DAG on the target execution resource; e.g., a multicore CPU, a manycore CPU such as Intel's Knights Landing (KNL), or an NVIDIA GPU. Several major technical challenges had to be addressed during development of Kokkos' task DAG capability: (1) portability to a GPU with its simplified hardware and micro-runtime, (2) thread-scalable memory allocation and deallocation from a bounded pool of memory, (3) a thread-scalable scheduler for the dynamic task DAG, and (4) usability by applications.
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics system guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
TASK ALLOCATION IN GEO-DISTRIBUTED CYBER-PHYSICAL SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aggarwal, Rachit; Smidts, Carol
This paper studies the task allocation algorithm for a distributed test facility (DTF), which aims to assemble geo-distributed cyber (software) and physical (hardware-in-the-loop) components into a prototype cyber-physical system (CPS). This allows low-cost testing on an early conceptual prototype (ECP) of the ultimate CPS (UCPS) to be developed. The DTF provides an instrumentation interface for carrying out reliability experiments remotely, such as fault propagation analysis and in-situ testing of hardware and software components in a simulated environment. Unfortunately, the geo-distribution introduces an overhead that is not inherent to the UCPS, i.e., a significant time delay in communication that threatens the stability of the ECP and is not an appropriate representation of the behavior of the UCPS. This can be mitigated by implementing a task allocation algorithm to find a suitable configuration and assign the software components to appropriate computational locations dynamically. This would allow the ECP to operate more efficiently with less probability of being unstable due to the delays introduced by geo-distribution. The task allocation algorithm proposed in this work uses a Monte Carlo approach along with dynamic programming to identify the optimal network configuration to keep the time delays to a minimum.
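A minimal sketch of the Monte Carlo half of such a search is shown below; the cost model (a per-assignment delay table) and all names are illustrative assumptions, and the paper's dynamic-programming refinement is omitted.

```python
import random

def allocate(tasks, sites, delay, trials=10000, seed=0):
    """Randomly search software-to-site assignments, keeping the one
    with the lowest total simulated communication delay.

    delay[(task, site)] : latency contribution of running task at site
    """
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        assignment = {t: rng.choice(sites) for t in tasks}
        cost = sum(delay[(t, s)] for t, s in assignment.items())
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost
```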
Embedded real-time image processing hardware for feature extraction and clustering
NASA Astrophysics Data System (ADS)
Chiu, Lihu; Chang, Grant
2003-08-01
Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and "c" code. The process of rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up between the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed, and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each part of the algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further reduction in time is needed, moved into tasks that the FPGA can perform.
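The trial-benchmarking split described above can be caricatured in a few lines; the block list, memory threshold, and routing rule are illustrative assumptions rather than Printronix's actual procedure.

```python
import time

def partition(blocks, mem_limit_kb=64):
    """Benchmark each algorithm block and route it to the DSP or FPGA.

    blocks : list of (name, func, arg, mem_kb). The timing mirrors the
    speed benchmark and the memory figure mirrors the memory benchmark:
    memory-hungry blocks stay on the DSP, simple real-time blocks go to
    the FPGA.
    """
    plan = {}
    for name, func, arg, mem_kb in blocks:
        t0 = time.perf_counter()
        func(arg)
        elapsed_ms = (time.perf_counter() - t0) * 1e3
        plan[name] = "DSP" if mem_kb > mem_limit_kb else "FPGA"
        print(f"{name}: {elapsed_ms:.2f} ms, {mem_kb} kB -> {plan[name]}")
    return plan
```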
Optimization-based methods for road image registration
DOT National Transportation Integrated Search
2008-02-01
A number of transportation agencies are now relying on direct imaging for monitoring and cataloguing the state of their roadway systems. Images provide objective information to characterize the pavement as well as roadside hardware. The tasks of proc...
Real-Time Considerations for Rugged Embedded Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Ceriani, Marco; Palermo, Gianluca
This chapter introduces the characterizing aspects of embedded systems and discusses the specific features that a designer should address to make an embedded system "rugged", i.e., able to operate reliably in harsh environments. The chapter addresses both the hardware and the less obvious software aspects. After presenting a current list of certifications for ruggedization, the chapter presents a case study that focuses on the interaction of the hardware and software layers in a reactive real-time system. In particular, it shows how the use of fast FPGA prototyping could provide insights on unexpected factors that influence the performance, and thus the responsiveness to events, of a scheduling algorithm for multiprocessor systems that manages both periodic, hard real-time tasks and aperiodic tasks. The main lesson is that to make the system "rugged", a designer should consider these issues by, for example, overprovisioning resources and/or computation capabilities.
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
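One of the measures the authors advocate, characterizing the platform's actual timing error rather than trusting it, can be sketched as follows; the busy-wait strategy and parameters are illustrative, not the authors' toolkit.

```python
import time
import statistics

def timer_jitter(cycles=1000, period_s=0.001):
    """Measure how closely a high-level environment holds a 1 ms schedule.

    Busy-waiting on a high-resolution counter avoids the coarse (and
    OS-dependent) granularity of sleep() on non-real-time operating
    systems; the returned worst-case and mean overshoots quantify the
    platform's usable temporal resolution.
    """
    overshoots = []
    deadline = time.perf_counter()
    for _ in range(cycles):
        deadline += period_s
        while time.perf_counter() < deadline:
            pass                                  # spin to the deadline
        overshoots.append(time.perf_counter() - deadline)
    return max(overshoots), statistics.mean(overshoots)
```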
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
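The MapReduce decomposition can be shown schematically: mappers emit partial values keyed by image tile, and reducers merge everything with the same key. The record format and averaging are illustrative assumptions; a real Hadoop job would express the same two functions against its streaming or Java API.

```python
from collections import defaultdict

def map_phase(records):
    """records: iterable of (tile_id, raw_samples); each mapper turns a
    slice of raw detector samples into one partial value for a tile."""
    for tile_id, raw in records:
        yield tile_id, sum(raw) / len(raw)

def reduce_phase(pairs):
    """Merge all partial values that belong to the same image tile."""
    tiles = defaultdict(list)
    for tile_id, partial in pairs:
        tiles[tile_id].append(partial)
    return {t: sum(v) / len(v) for t, v in tiles.items()}

# reduce_phase(map_phase([(0, [1, 2]), (0, [3]), (1, [5, 5])]))
# -> {0: 2.25, 1: 5.0}
```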
Low Power, Low Mass, Modular, Multi-band Software-defined Radios
NASA Technical Reports Server (NTRS)
Haskins, Christopher B. (Inventor); Millard, Wesley P. (Inventor)
2013-01-01
Methods and systems to implement and operate software-defined radios (SDRs). An SDR may be configured to perform a combination of fractional and integer frequency synthesis and direct digital synthesis under control of a digital signal processor, which may provide a set of relatively agile, flexible, low-noise, and low spurious, timing and frequency conversion signals, and which may be used to maintain a transmit path coherent with a receive path. Frequency synthesis may include dithering to provide additional precision. The SDR may include task-specific software-configurable systems to perform tasks in accordance with software-defined parameters or personalities. The SDR may include a hardware interface system to control hardware components, and a host interface system to provide an interface to the SDR with respect to a host system. The SDR may be configured for one or more of communications, navigation, radio science, and sensors.
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the analog and mixed-signal extension of the VHDL hardware description language).
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
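The preprocessor/JIT technique mentioned at the end can be illustrated as follows: one kernel source is specialized per platform with compile-time defines. The kernel, flag names, and the pyopencl build call in the comment are illustrative assumptions, not code from the paper.

```python
KERNEL_SRC = """
__kernel void saxpy(__global const float *x, __global float *y, float a) {
    int i = get_global_id(0);
#if USE_FMA                /* compiled in only where the device benefits */
    y[i] = fma(a, x[i], y[i]);
#else
    y[i] = a * x[i] + y[i];
#endif
}
"""

def build_options(device_has_fast_fma):
    """Per-platform compile options let one OpenCL code base adapt to
    each device at JIT time, e.g. with pyopencl:
        cl.Program(ctx, KERNEL_SRC).build(options=build_options(True))
    """
    return ["-DUSE_FMA=%d" % int(device_has_fast_fma)]
```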
The Jet Propulsion Laboratory shared control architecture and implementation
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Hayati, Samad
1990-01-01
A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force-reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface used to design a task and assign the levels of teleoperated and autonomous control. The operator also sets up the system monitor which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperated and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. Presently the shared control environment supports single-arm task execution; work is underway to extend it to dual-arm control. Teleoperation during shared control is Cartesian-space control only, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.
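The per-axis weighting scheme can be sketched in a few lines; the 6-vector convention and clipping are illustrative assumptions about how UMI's weightings might be applied.

```python
import numpy as np

def shared_command(teleop, autonomous, weights):
    """Blend hand-controller and autonomous Cartesian inputs per degree
    of freedom. All arguments are 6-vectors (x, y, z, roll, pitch, yaw);
    a weight of 1.0 gives pure teleoperation on that axis, 0.0 pure
    autonomy, and intermediate values a real-time mix of the two."""
    w = np.clip(np.asarray(weights, float), 0.0, 1.0)
    return w * np.asarray(teleop, float) + (1.0 - w) * np.asarray(autonomous, float)
```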
Leadership Development Program Final Project
NASA Technical Reports Server (NTRS)
Parrish, Teresa C.
2016-01-01
TOSC is NASA's prime contractor tasked to successfully assemble, test, and launch the EM1 spacecraft. TOSC success is highly dependent on design products from the other NASA programs manufacturing and delivering the flight hardware: the Space Launch System (SLS) and the Multi-Purpose Crew Vehicle (MPCV). Design products feed directly into TOSC's procedures, personnel training, hardware assembly, software development, integrated vehicle test and checkout, and launch. TOSC senior management recognized a significant schedule risk because these products are still being developed by the other two programs, so SVE and ACE positions were created.
Skylab SO71/SO72 circadian periodicity experiment. [experimental design and checkout of hardware
NASA Technical Reports Server (NTRS)
Fairchild, M. K.; Hartmann, R. A.
1973-01-01
The circadian rhythm hardware activities from 1965 through 1973 are considered. A brief history of the programs leading to the development of the combined Skylab SO71/SO72 Circadian Periodicity Experiment (CPE) is given. SO71 is the Skylab experiment number designating the pocket mouse circadian experiment, and SO72 designates the vinegar gnat circadian experiment. Final design modifications and checkout of the CPE, integration testing with the Apollo service module CSM 117 and the launch preparation and support tasks at Kennedy Space Center are reported.
Extravehicular Activity (EVA) Power, Avionics, and Software (PAS) 101
NASA Technical Reports Server (NTRS)
Irimies, David
2011-01-01
EVA systems consist of a spacesuit or garment, a PLSS, a PAS system, and spacesuit interface hardware. The PAS system is responsible for providing power for the suit, communication of several types of data between the suit and other mission assets, avionics hardware to perform numerous data display and processing functions, and information systems that provide crewmembers data to perform their tasks with more autonomy and efficiency. Irimies discussed how technology development efforts have advanced the state-of-the-art in these areas and shared technology development challenges.
Electrochemical carbon dioxide concentrator advanced technology tasks
NASA Technical Reports Server (NTRS)
Schneider, J. J.; Schubert, F. H.; Hallick, T. M.; Woods, R. R.
1975-01-01
Technology advancement studies are reported on the basic electrochemical CO2 removal process to provide a basis for the design of the next generation of cell, module and subsystem hardware. An Advanced Electrochemical Depolarized Concentrator Module (AEDCM) was developed that has the characteristics of low weight, low volume, high CO2 removal, good electrical performance and low process air pressure drop. Component weight and noise reduction for the hardware of a six-man-capacity CO2 collection subsystem was achieved for the air revitalization group of the Space Station Prototype (SSP).
A High-Throughput Processor for Flight Control Research Using Small UAVs
NASA Technical Reports Server (NTRS)
Klenke, Robert H.; Sleeman, W. C., IV; Motter, Mark A.
2006-01-01
There are numerous autopilot systems that are commercially available for small (<100 lbs) UAVs. However, they all share several key disadvantages for conducting aerodynamic research, chief amongst which is the fact that most utilize older, slower, 8- or 16-bit microcontroller technologies. This paper describes the development and testing of a flight control system (FCS) for small UAVs based on a modern, high-throughput, embedded processor. In addition, this FCS platform contains user-configurable hardware resources in the form of a Field Programmable Gate Array (FPGA) that can be used to implement custom, application-specific hardware. This hardware can be used to off-load routine tasks, such as sensor data collection, from the FCS processor, thereby further increasing the computational throughput of the system.
NASA Technical Reports Server (NTRS)
Vanvalkenburgh, C. N.
1984-01-01
Underwater simulations of EVA contingency operations such as manual jettison, payload disconnect, and payload clamp actuation were used to define crew aid needs and mockup peculiarities and characteristics, and to verify the validity of simulation using the trainer. A set of mockup instrument pointing system tests was conducted, and minor modifications and refinements were made. Flight configuration struts were tested and verified to be operable by the flight crew. Tasks involved in developing the following end items are described: IPS gimbal system, payload, and payload clamp assembly; the igloos (volumetric); Spacelab pallets, experiments, and hardware; experiment 7; and EVA hand tools and support hardware (handrails and foot restraints). The test plan preparation and test support are also covered.
A Streamlined Approach for the Payload Customer in Identifying Payload Design Requirements
NASA Technical Reports Server (NTRS)
Miller, Ladonna J.; Schneider, Walter F.; Johnson, Dexer E.; Roe, Lesa B.
2001-01-01
NASA payload developers from across various disciplines were asked to identify areas where process changes would simplify their task of developing and flying flight hardware. Responses to this query included a central location for consistent hardware design requirements for middeck payloads. The multidisciplinary team assigned to review the numerous payload interface design documents is assessing the Space Shuttle middeck, the SPACEHAB Inc. locker, as well as the MultiPurpose Logistics Module (MPLM) and EXpedite the PRocessing of Experiments to Space Station (EXPRESS) rack design requirements for the payloads. They are comparing the multiple carriers and platform requirements and developing a matrix which illustrates the individual requirements, and where possible, the envelope that encompasses all of the possibilities. The matrix will be expanded to form an overall envelope that the payload developers will have the option to utilize when designing their payload's hardware. This will optimize the flexibility for payload hardware and ancillary items to be manifested on multiple carriers and platforms with minimal impact to the payload developer.
NASA Technical Reports Server (NTRS)
Heard, Walter L., Jr.; Lake, Mark S.; Bush, Harold G.; Jensen, J. Kermit; Phelps, James E.; Wallsom, Richard E.
1992-01-01
This report presents results of tests performed in neutral buoyancy by two pressure-suited test subjects to simulate Extravehicular Activity (EVA) tasks associated with the on-orbit construction and repair of a precision reflector spacecraft. Two complete neutral buoyancy assemblies of the test article (tetrahedral truss with three attached reflector panels) were performed. Truss joint hardware, two different panel attachment hardware concepts, and a panel replacement tool were evaluated. The test subjects found the operation and size of the truss joint hardware to be acceptable. Both panel attachment concepts were found to be EVA compatible, although one concept was judged by the test subjects to be considerably easier to operate. The average time to install a panel from a position within arm's reach of the test subjects was 1 min 14 sec. The panel replacement tool was used successfully to demonstrate the removal and replacement of a damaged reflector panel in 10 min 25 sec.
Space hardware designs, volume 1
NASA Technical Reports Server (NTRS)
Meyer, Rudolf X.; Cribbs, Richard; Honda, Mark; Ma, Christina; Robson, Christopher
1994-01-01
The design of a solar sail space vehicle with a novel sail deployment mechanism is described. The sail is triangular in shape and is deployed and stabilized by three miniature spacecraft, one at each corner of the triangle. A concept demonstrator for a spherical microrover for the exploration of a planetary surface is described. Lastly, laboratory experiments have been conducted to study the migration of thin oil films on metal surfaces in the presence of a thermal gradient.
Liberating Virtual Machines from Physical Boundaries through Execution Knowledge
2015-12-01
… trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings … hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and … experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for …
Top-down and bottom-up definitions of human failure events in human reliability analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids
2014-10-01
In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down, defined as a subset of the PRA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
Microcomputer data acquisition and control.
East, T D
1986-01-01
In medicine and biology there are many tasks that involve routine, well defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming, and sadly most computer sales persons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real-time data acquisition and control system: Analysis of the research problem: definition of the problem; description of data and sampling requirements; cost/benefit analysis. Choice of microcomputer hardware and software: choice of microprocessor and bus structure; choice of operating system; choice of layered software. Digital data acquisition: parallel data transmission; serial data transmission; hardware and software available. Analog data acquisition: description of amplitude and frequency characteristics of the input signals; sampling theorem; specification of the analog-to-digital converter; hardware and software available; interface to the microcomputer. Microcomputer control: analog output; digital output; closed-loop control. Microcomputer data acquisition and control in the 21st century: what is in the future? High-speed digital medical equipment networks; medical decision making and artificial intelligence.
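As a worked example of the sampling-theorem step in the analog data acquisition checklist, the helper below picks an ADC rate from the highest frequency present in the signal; the 5x margin is an illustrative rule of thumb, not a figure from the chapter.

```python
def choose_sample_rate(f_max_hz, margin=5.0):
    """The sampling theorem requires a rate above 2 * f_max to avoid
    aliasing; practical systems sample several times faster than that
    to relax the anti-aliasing filter requirements."""
    return 2.0 * margin * f_max_hz

# A signal with 100 Hz bandwidth would be sampled at 1000 Hz with margin=5.
```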
Database for propagation models
NASA Astrophysics Data System (ADS)
Kantak, Anil V.
1991-07-01
A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be facilitated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
Software requirements flow-down and preliminary software design for the G-CLEF spectrograph
NASA Astrophysics Data System (ADS)
Evans, Ian N.; Budynkiewicz, Jamie A.; DePonte Evans, Janet; Miller, Joseph B.; Onyuksel, Cem; Paxson, Charles; Plummer, David A.
2016-08-01
The Giant Magellan Telescope (GMT)-Consortium Large Earth Finder (G-CLEF) is a fiber-fed, precision radial velocity (PRV) optical echelle spectrograph that will be the first-light instrument on the GMT. The G-CLEF instrument device control subsystem (IDCS) provides software control of the instrument hardware, including the active feedback loops that are required to meet the G-CLEF PRV stability requirements. The IDCS is also tasked with providing operational support packages that include data reduction pipelines and proposal preparation tools. A formal but ultimately pragmatic approach is being used to establish a complete and correct set of requirements for both the G-CLEF device control and operational support packages. The device control packages must integrate tightly with the state-machine-driven software and controls reference architecture designed by the GMT Organization. A model-based systems engineering methodology is being used to develop a preliminary design that meets these requirements. Through this process we have identified some lessons that have general applicability to the development of software for ground-based instrumentation. For example, tasking an individual with overall responsibility for science/software/hardware integration is a key step to ensuring effective integration between these elements. An operational concept document that includes detailed routine and non-routine operational sequences should be prepared in parallel with the hardware design process to tie together these elements and identify any gaps. Appropriate time-phasing of the hardware and software design phases is important, but revisions to driving requirements that impact software requirements and preliminary design are inevitable. Such revisions must be carefully managed to ensure efficient use of resources.
Automated personnel data base system specifications, Task V. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartley, H.J.; Bocast, A.K.; Deppner, F.O.
1978-11-01
The full title of this study is 'Development of Qualification Requirements, Training Programs, Career Plans, and Methodologies for Effective Management and Training of Inspection and Enforcement Personnel.' Task V required the development of an automated personnel data base system for NRC/IE. This system is identified as the NRC/IE Personnel, Assignment, Qualifications, and Training System (PAQTS). This Task V report provides the documentation for PAQTS including the Functional Requirements Document (FRD), the Data Requirements Document (DRD), the Hardware and Software Capabilities Assessment, and the Detailed Implementation Schedule. Specific recommendations to facilitate implementation of PAQTS are also included.
Oxygen Generation System Laptop Bus Controller Flight Software
NASA Technical Reports Server (NTRS)
Rowe, Chad; Panter, Donna
2009-01-01
The Oxygen Generation System Laptop Bus Controller Flight Software was developed to allow the International Space Station (ISS) program to activate specific components of the Oxygen Generation System (OGS) to perform a checkout of key hardware operation in a microgravity environment, as well as to perform preventative maintenance operations of system valves during a long period of what would otherwise be hardware dormancy. The software provides direct connectivity to the OGS Firmware Controller with pre-programmed tasks operated by on-orbit astronauts to exercise OGS valves and motors. The software is used to manipulate the pump, separator, and valves to alleviate the concerns of hardware problems due to long-term inactivity and to allow for operational verification of microgravity-sensitive components early enough so that, if problems are found, they can be addressed before the hardware is required for operation on-orbit. The decision was made to use existing on-orbit IBM ThinkPad A31p laptops and MIL-STD-1553B interface cards as the hardware configuration. The software at the time of this reporting was developed and tested for use under the Windows 2000 Professional operating system to ensure compatibility with the existing on-orbit computer systems.
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA (Field Programmable Gate Array) device which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the ability to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision-based quality control.
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
Open control/display system for a telerobotics work station
NASA Technical Reports Server (NTRS)
Keslowitz, Saul
1987-01-01
A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system is completely open to accept new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard discussed has evolved through many iterations of the tasks, and is based on feedback from NASA and Air Force personnel who performed those tasks in the LASS.
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-09-01
Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single-machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with a goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a "software as a service" manner will reduce the total cost of ownership, provide an easy-to-use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.
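A single-node caricature of the batched analysis path (histogram and min/max over data larger than memory) is given below; in the actual framework each chunk would be reduced on a separate GPU or CPU node and the partial results merged, and all names here are illustrative.

```python
import numpy as np

def chunked_stats(chunks, bins=64, lo=0.0, hi=1.0):
    """Stream a large dataset chunk by chunk, maintaining a global
    min/max and a fixed-range histogram; the per-chunk reductions are
    independent, which is what makes the distributed version possible."""
    gmin, gmax = np.inf, -np.inf
    hist = np.zeros(bins, dtype=np.int64)
    for chunk in chunks:
        gmin = min(gmin, float(chunk.min()))
        gmax = max(gmax, float(chunk.max()))
        counts, _ = np.histogram(chunk, bins=bins, range=(lo, hi))
        hist += counts
    return gmin, gmax, hist
```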
Intravascular migration of a broken cerclage wire into the left heart.
Leonardi, Francesco; Rivera, Fabrizio
2014-10-01
This article describes a patient in whom a broken cerclage wire migrated from the left hip into the left ventricle. A 71-year-old woman was admitted to the authors' hospital for preoperative examination before femoral hernia repair. Chest radiograph showed a metallic wire in the left ventricle. Twenty-four years earlier, she had a revision arthroplasty. During revision surgery, fragments of the osteotomy were fixed to the femur with multiple cerclage wires. During the past 5 years, radiographic follow-up showed progressive multiple ruptures of cerclage wires. The cerclage wiring was not removed because the patient had no related clinical symptoms. Radiograph of the left hip showed a well-fixed cemented acetabular ring and an uncemented femoral stem with a healed trochanteric osteotomy. All cerclage wires were broken into multiple parts, and it was very difficult to determine which part had migrated into the heart. Thoracic computed tomography scan showed wire that had migrated into the anterior left ventricular myocardial wall at the atrioventricular level. The patient had no clinical symptoms. Electrocardiogram showed a normal sinus rhythm and right bundle branch block. Because of the high risk of surgical left ventriculotomy associated with searching for wire that had migrated into the myocardial wall, patient monitoring was planned. Definitive management of this complication constitutes a dilemma. Although this complication is highly unusual, the possibility of intracardiac migration of broken wire should be considered when deciding on prophylactic surgical removal of hardware after fracture or osteotomy healing.
1977-10-01
These modules make up a multi-task priority real-time operating system in which each of the functions of the Supervisor is performed by one or more tasks. The Initialization module performs the initialization of the Supervisor software and hardware, including the Input Buffer, the FIFO, and the Track Correlator. This module is used both at initial program load time and upon receipt of a SC Initialization Command.
1987-10-01
… treated in interaction with each other and the hardware and software design. The authors point out some of the inadequacies in HP technologies and … life cycle costs, recognition performance on secondary tasks, effort/efficiency, number of wins (gaming tasks), number of instructors needed, amount of … The student interacts with this material in real time via a terminal and display system. The computer performs many functions, such as diagnosing student …
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
An improved real time superresolution FPGA system
NASA Astrophysics Data System (ADS)
Lakshmi Narasimha, Pramod; Mudigoudar, Basavaraj; Yue, Zhanfeng; Topiwala, Pankaj
2009-05-01
In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and low quality due to motion, transmission bottlenecks, etc. Postprocessing can enhance it. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient based technique is used to register the frames at the sub-pixel level. Once we get the high resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and migrated to hardware, to achieve 320x240 -> 1280x960, about 30 fps, a stunning superresolution by 16X in total pixels. Various input parameters, such as size of input image, enlarging factor and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives us a fine balance between the number of bits and performance. The proposed system is robust and highly efficient. We have shown the performance improvement of the hardware superresolution over the software version (C code).
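The back-projection stage described above can be sketched for a single frame; the multi-frame sub-pixel registration that precedes it in the paper is omitted, and the zoom-based forward model, step size, and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def iterative_back_projection(low_res, factor=4, iters=10, step=0.5):
    """Upsample, then repeatedly push the residual between the observed
    low-res frame and a simulated low-res version of the estimate back
    onto the high-resolution grid, sharpening the result each pass."""
    low_res = np.asarray(low_res, float)
    high = zoom(low_res, factor, order=1)            # initial HR estimate
    for _ in range(iters):
        simulated = zoom(high, 1.0 / factor, order=1)
        residual = low_res - simulated
        high += step * zoom(residual, factor, order=1)
    return high
```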
EVA manipulation and assembly of space structure columns
NASA Technical Reports Server (NTRS)
Loughead, T. E.; Pruett, E. C.
1980-01-01
Assembly techniques and hardware configurations used in assembly of the basic tetrahedral cell by A7LB pressure-suited subjects in a neutral buoyancy simulator were studied. Eleven subjects participated in assembly procedures which investigated two types of structural members and two configurations of attachment hardware. The assembly was accomplished through extravehicular activity (EVA) only, EVA with a simulated manned maneuvering unit (MMU), and EVA with a simulated MMU and a simulated remote manipulator system (RMS). Assembly times as low as 10.20 minutes per tetrahedron were achieved. Task element data, as well as assembly procedures, are included.
Spares Management : Optimizing Hardware Usage for the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Gulbrandsen, K. A.
1999-01-01
The complexity of the Space Shuttle Main Engine (SSME), combined with mounting requirements to reduce operations costs have increased demands for accurate tracking, maintenance, and projections of SSME assets. The SSME Logistics Team is developing an integrated asset management process. This PC-based tool provides a user-friendly asset database for daily decision making, plus a variable-input hardware usage simulation with complex logic yielding output that addresses essential asset management issues. Cycle times on critical tasks are significantly reduced. Associated costs have decreased as asset data quality and decision-making capability has increased.
Robotics control using isolated word recognition of voice input
NASA Technical Reports Server (NTRS)
Weiner, J. M.
1977-01-01
A speech input/output system is presented that can be used to communicate with a task oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility is comprised of a hardware feature extractor and a microprocessor implemented isolated word or phrase recognition system. The recognizer offers a medium sized (100 commands), syntactically constrained vocabulary, and exhibits close to real time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
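The abstract does not name the matching method; as a hedged sketch of one classic approach to isolated-word recognition from that era, dynamic time warping against stored templates might look as follows (feature extraction is assumed to have already produced frame-by-coefficient arrays):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (frames x coefficients). A representative template-matching method;
    the recognizer's actual algorithm is not specified in the abstract."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(features, templates):
    """Pick the vocabulary word whose stored template is closest."""
    return min(templates, key=lambda w: dtw_distance(features, templates[w]))
```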
Automatisms in EMIR instrument to improve operation, safety and maintenance
NASA Astrophysics Data System (ADS)
Fernández Izquierdo, Patricia; Núñez Cagigal, Miguel; Barreto Rodríguez, Roberto; Martínez Rey, Noelia; Santana Tschudi, Samuel; Barreto Cabrera, Maria; Patrón Recio, Jesús; Garzón López, Francisco
2014-08-01
EMIR is the NIR imager and multiobject spectrograph being built as a common user instrument for the 10-m class GTC. Big cryogenic instruments demand a reliable design and a specific hardware and software to increase its safety and productivity. EMIR vacuum, cooling and heating systems are monitored and partially controlled by a Programmable Logic Controller (PLC) in industrial format with a touch screen. The PLC aids the instrument operator in the maintenance tasks recovering autonomously vacuum if required or proposing preventive maintenance actions. The PLC and its associated hardware improve EMIR safety having immediate reactions against eventual failure modes in the instrument or in external supplies, including hardware failures during the heating procedure or failure in the PLC itself. EMIR PLC provides detailed information periodically about status and alarms of vacuum and cooling components or external supplies.
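The PLC logic itself is not published in this abstract; a minimal Python sketch of the kind of supervision loop described (autonomous vacuum recovery, protective reaction to heating faults) could look like this, with thresholds and callback names invented for illustration:

```python
import time

# Illustrative thresholds; EMIR's real setpoints are not given in the abstract.
PRESSURE_MAX = 1e-4   # mbar: recover vacuum above this
TEMP_RATE_MAX = 2.0   # K/min: stop heating if the gradient is exceeded

def supervise(read_pressure, read_temp, start_pump, stop_heaters, alarm,
              cycles=3600, period_s=1.0):
    """Poll sensors and react autonomously, PLC-style."""
    last_temp, last_t = read_temp(), time.monotonic()
    for _ in range(cycles):
        p, t, now = read_pressure(), read_temp(), time.monotonic()
        if p > PRESSURE_MAX:
            start_pump()                    # autonomous vacuum recovery
            alarm("vacuum degraded: %.2e mbar" % p)
        rate = abs(t - last_temp) / max((now - last_t) / 60.0, 1e-9)
        if rate > TEMP_RATE_MAX:
            stop_heaters()                  # immediate reaction to a heating fault
            alarm("temperature gradient out of range")
        last_temp, last_t = t, now
        time.sleep(period_s)
```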
Development of simulation computer complex specification
NASA Technical Reports Server (NTRS)
1973-01-01
The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks, which included: (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.
Maintenance Decision Support System: Pilot Study and Cost-Benefit Analysis (Phase 2.5)
DOT National Transportation Integrated Search
2014-07-01
This project focused on several tasks: development of in-vehicle hardware that permits implementation of an MDSS, development of software to collect and process road and weather data, a cost-benefit study, and pilot-scale implementation. Two Automati...
Maintenance Decision Support System : Pilot Study and Cost-Benefit Analysis (Phase 2)
DOT National Transportation Integrated Search
2014-07-01
This project focused on several tasks: development of in-vehicle hardware that permits implementation of an MDSS, development of software to collect and process road and weather data, a cost-benefit study, and pilot-scale implementation. Two Automati...
James, Conrad D.; Aimone, James B.; Miner, Nadine E.; ...
2017-01-04
Biological neural networks continue to inspire new developments in algorithms and microelectronic hardware for solving challenging data processing and classification problems. Here we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience, such as hierarchical structure, temporal integration, and robustness to error, have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. Additionally, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis on, and benefits of, distributed generation in smaller-scale networks has prompted much attention and research in this field, which has in turn stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and testing against real hardware is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit is interfaced with a software-simulated environment to verify proper functionality. This thesis takes the technique one step further, using hardware-in-the-loop to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system under test. The purpose is to demonstrate a new method of testing in which a virtually simulated generation system supplies a scaled distributed power system in hardware. The work uses the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's All-Electric-Ship proposed power system, built to perform various tests on controls and stability under the expected non-linear load environment of Navy weaponry; it can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. Here it is used to perform and validate the newly developed method of generator system emulation. The dynamics of a high-speed permanent magnet generator directly coupled to a microturbine are simulated on an FPGA in real time. The calculated output stator voltage then serves as the reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is connected to the rest of the test bed, which can take on a variety of distributed system topologies for many testing scenarios. The distributed power system under test in hardware can thus incorporate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are substantial, enabling much more detailed system studies without the drawbacks of physical generator units: improved safety, reduced costs, and the ability to scale while preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and necessary steps, demonstrates real results with verification of numerical values in real time, and shows that the approach is attainable and can be a highly useful tool in the simulation and verification of distributed power systems.
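The thesis's FPGA model is not given in the abstract; as a sketch of the kind of per-timestep computation such a real-time emulation iterates, here is an explicit-Euler step of a permanent-magnet machine dq-frame current model (motor sign convention), whose states would feed the computation of the inverter's voltage reference. All parameter values are placeholders, not those of the actual hardware.

```python
def pmsg_step(i_dq, v_dq, omega_e, dt,
              Ld=1e-4, Lq=1e-4, Rs=0.02, lam_m=0.05):
    """One Euler step of the dq current dynamics:
       did/dt = (vd - Rs*id + we*Lq*iq) / Ld
       diq/dt = (vq - Rs*iq - we*(Ld*id + lam_m)) / Lq
    i_dq, v_dq : (d, q) currents [A] and terminal voltages [V]
    omega_e    : electrical speed [rad/s]; lam_m: magnet flux linkage [Wb]
    """
    id_, iq = i_dq
    vd, vq = v_dq
    did = (vd - Rs * id_ + omega_e * Lq * iq) / Ld
    diq = (vq - Rs * iq - omega_e * (Ld * id_ + lam_m)) / Lq
    return (id_ + dt * did, iq + dt * diq)

# Example: one 10-microsecond step at 2 krad/s electrical speed.
print(pmsg_step((0.0, 0.0), (1.0, 5.0), 2000.0, 1e-5))
```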
Office Automation in Student Affairs.
ERIC Educational Resources Information Center
Johnson, Sharon L.; Hamrick, Florence A.
1987-01-01
Offers recommendations to assist in introducing or expanding computer assistance in student affairs. Describes need for automation and considers areas of choosing hardware and software, funding and competitive bidding, installation and training, and system management. Cites greater efficiency in handling tasks and data and increased levels of…
Microcomputers and Workstations in Libraries: Trends and Opportunities.
ERIC Educational Resources Information Center
Welsch, Erwin K.
1990-01-01
Summarizes opinions of scholars in various disciplines on workstation history, definition, and functions. Networks and configurations for library workstations, including hardware and software recommendations, are described. The impact of workstations on the workplace, resulting in task, process, and institutional transformation, is also considered.…
14 CFR § 1232.103 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... animal subjects. (a) Activity includes research, testing of hardware for animal use, flight experimentation, and any other tasks involving the use of animal subjects. (b) Animal is any live vertebrate....103 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CARE AND USE OF ANIMALS IN THE...
Optical Property Measurements on the Stardust Sample Return Capsule
NASA Technical Reports Server (NTRS)
Finckenor, Miria
2007-01-01
The Advanced Materials for Exploration (AME) task Materials Analysis of Returned Hardware from Stardust received funding to perform non-destructive analyses of the non-primary science hardware components of the Stardust sample return capsule. These components were (a) the blunt body reentry heatshield, encased in Phenolic Impregnated Carbon Ablator (PICA); (b) the backshell of Super Lightweight Ablator 561 (SLA-561) material handpacked into phenolic Flexcore and coated with CV-1100 silicone; (c) the rope seal used in between the heatshield and backshell; (d) the internal multi-layer insulation (MLI) blankets; and (e) parts of the Kevlar straps left attached to the backshell. These components were analyzed to determine the materials' durability in the space environment. The goals of the task were (a) to determine how the various materials from which the components were built weathered the extreme temperatures and harsh space environment during the capsule's nearly 7-year voyage to and from its rendezvous with Comet Wild 2 and (b) to provide lessons-learned data for designers of future missions.
Design and Implementation of a Modern Automatic Deformation Monitoring System
NASA Astrophysics Data System (ADS)
Engel, Philipp; Schweimler, Björn
2016-03-01
The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of supervised objects over a long period. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It also discusses how the development effort for networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
…form of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9 processor… (as we do for Level-A and -B tasks), but they did not consider MC systems. Altmeyer et al. [1] considered uniprocessor scheduling on a system with a… framework. We randomly generated task sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9.
Operational experience and design recommendations for teleoperated flight hardware
NASA Technical Reports Server (NTRS)
Burgess, T. W.; Kuban, D. P.; Hankins, W. W.; Mixon, R. W.
1988-01-01
Teleoperation (remote manipulation) will someday supplement and reduce astronaut extravehicular activity in space for such tasks as satellite servicing and repair and space station construction and servicing. NASA is investigating this technology, and teleoperation of two space-related tasks has been demonstrated at the Oak Ridge National Laboratory. The teleoperator experiments are discussed and their results summarized. The related equipment design recommendations are presented, along with a general discussion of equipment design for teleoperation.
Infrastructure for deployment of power systems
NASA Technical Reports Server (NTRS)
Sprouse, Kenneth M.
1991-01-01
A preliminary effort in characterizing the types of stationary lunar power systems which may be considered for emplacement on the lunar surface, from the proposed initial 100-kW unit in 2003 to later units ranging in power from 25 to 825 kW, is presented. Associated with these power systems is their related infrastructure hardware, including: (1) electrical cable, wiring, switchgear, and converters; (2) deployable radiator panels; (3) deployable photovoltaic (PV) panels; (4) heat transfer fluid piping and connection joints; (5) power system instrumentation and control equipment; and (6) interface hardware between lunar surface construction/maintenance equipment and the power system. This report (1) presents estimates of the masses and volumes associated with these power systems and their related infrastructure hardware; (2) provides a task breakdown description for emplacing this equipment; (3) gives estimated heat, forces, torques, and alignment tolerances for equipment assembly; and (4) provides other important equipment/machinery requirements where applicable. Packaging options for this equipment are discussed along with necessary site preparation requirements. Design and analysis issues associated with the final emplacement of this power system hardware are also described.
VIDANA: Data Management System for Nano Satellites
NASA Astrophysics Data System (ADS)
Montenegro, Sergio; Walter, Thomas; Dilger, Erik
2013-08-01
A VIDANA data management system is a network of software and hardware components: a software network, a hardware network, and a smooth connection between the two. Our strategy is based on our innovative middleware, a reliable interconnection network (software and hardware) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. Our middleware connects not only software, but also devices and software together: software and hardware communicate with each other without having to distinguish which functions are implemented in software and which in hardware. Components may be turned on and off at any time, and the whole system autonomously adapts to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified hardware/software communication protocols. For many of these aspects we "learn from nature", where astonishing reference implementations can be found.
NASA Technical Reports Server (NTRS)
Schumacher, W.; Geiser, G.
1978-01-01
The basic concepts of Petri nets are reviewed, as well as their application as the fundamental model of technical systems with concurrent discrete events, such as hardware systems and software models of computers. The use of Petri nets is proposed for modeling the human operator dealing with concurrent discrete tasks. Their properties useful in modeling the human operator are discussed and practical examples are given. By means of an experimental investigation of binary concurrent tasks which are presented in a serial manner, the representation of human behavior by Petri nets is demonstrated.
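To make the formalism concrete, here is a small hypothetical example in Python: a place/transition net in which two concurrent tasks compete for a single "operator" token, so only one can be serviced at a time — in the spirit of the concurrent tasks studied in the paper, though not taken from it.

```python
import random

class PetriNet:
    """Minimal place/transition net; a marking is a dict place -> tokens."""
    def __init__(self, transitions):
        # transitions: name -> (list of input places, list of output places)
        self.transitions = transitions

    def enabled(self, marking):
        return [t for t, (ins, _) in self.transitions.items()
                if all(marking.get(p, 0) > 0 for p in ins)]

    def fire(self, marking, t):
        ins, outs = self.transitions[t]
        for p in ins:
            marking[p] -= 1
        for p in outs:
            marking[p] = marking.get(p, 0) + 1

# Two concurrent tasks sharing one operator resource (hypothetical example).
net = PetriNet({
    "start_A": (["A_ready", "operator"], ["A_busy"]),
    "done_A":  (["A_busy"], ["A_done", "operator"]),
    "start_B": (["B_ready", "operator"], ["B_busy"]),
    "done_B":  (["B_busy"], ["B_done", "operator"]),
})
marking = {"A_ready": 1, "B_ready": 1, "operator": 1}
while (ts := net.enabled(marking)):
    net.fire(marking, random.choice(ts))   # operator handles one task at a time
print(marking)
```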
M.U.S.T. 2007 Summer Research Project at NASA's KSC MILA Facility
NASA Technical Reports Server (NTRS)
PintoRey, Christian R.
2007-01-01
The summer research activity at Kennedy Space Center (KSC) aims to introduce the student to the basic principles in their field of study. While at KSC, a specific research project awaits each student. As an Aeronautical Engineering student, my assigned project is to assist the cognizant engineer, Mr. Troy Hamilton, in the six engineering phases for replacing the Ponce De Leon (PDL) 4.3M Antenna Control Unit (ACU). Although the project mainly requires the attention of two engineers and two students, it also involves the participation of many colleagues at various points during the course of the engineering change (EC). Since the PDL 4.3M ACU engineering change makes both hardware and software changes, it calls upon the expertise of a Hardware Engineer as well as a Software Engineer. As students, Mr. Jeremy Bresette and I have worked side by side with the engineers, gaining invaluable experience. We work in two teams, the hardware team and the software team. On certain tasks we assist the engineers, while on others we assume their roles. By diligently working in this fashion, we are learning how to communicate effectively as professionals, despite the fact that we are studying different engineering fields. This project has been a great fit for my field of study, as it has greatly improved my awareness of the many critical tasks involved in carrying out an engineering project.
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millar, A. P.; Baranova, T.; Behrmann, G.
For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.
Transparency in Distributed File Systems
1989-01-01
…areas of naming, replication, consistency control, file and directory placement, and file and directory migration in a way that provides full network transparency…
From a Bird's Eye View: An Interdisciplinary Approach to Migration
ERIC Educational Resources Information Center
Benson, Juliann
2007-01-01
Inspiring students to learn about birds can be a daunting task--students see birds just about every day and often don't think twice about them. The activity described here is designed to excite students to "become" birds. Students are asked to create a model and tell the life story of a bird by mapping its migration pattern. (Contains 6 figures, 6…
Enabling a New Planning and Scheduling Paradigm
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth
2004-01-01
The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "task models," from the scientists and technologists for the tasks that they want to be done. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next a cadre with knowledge of the vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, another cadre further modifies the models to be compatible with the scheduling engine. This last cadre also submits the models to the scheduling engine or builds the timeline manually to accommodate requirements that are expressed in notes. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components. (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphical methods to show hierarchies of task constraints and networks of temporal relationships; an illustrative encoding is sketched below. (2) A new scheduling algorithm automatically schedules the models without the intervention of a scheduling expert. The algorithm is tuned for the constraint hierarchies and the complex temporal relationships provided by the modeling schema. It has an extensive search algorithm which can exploit timing flexibilities and constraint and relationship options. (3) A web-based architecture allows multiple remote users to simultaneously model science and technology requirements and other users to model vehicle and hardware characteristics. The architecture allows the users to submit scheduling requests directly to the scheduling engine and immediately see the results. These three components are integrated so that science and technology experts with no knowledge of the vehicle or hardware subsystems and no knowledge of the internal workings of the scheduling engine have the ability to build and submit scheduling requests and see the results. The immediate feedback will hone the users' modeling skills and ultimately enable them to produce the desired timeline. This paper summarizes the three components of the enabling technology and describes how this technology would make a new paradigm possible.
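As a purely illustrative sketch (the paper's schema is graphical and far richer), a science-side task model carrying resource constraints and temporal relationships could be encoded along these lines; every field and task name here is invented:

```python
from dataclasses import dataclass, field

@dataclass
class TaskModel:
    """Hypothetical schema: a task with resource constraints and temporal
    relations to other tasks, expressible without free-text notes."""
    name: str
    duration_min: float                              # minutes
    duration_max: float
    resources: dict = field(default_factory=dict)    # e.g. {"power_W": 150}
    relations: list = field(default_factory=list)    # (kind, other_task, offset_min)

glovebox_run = TaskModel(
    name="protein_crystal_run",
    duration_min=90, duration_max=120,
    resources={"power_W": 150, "crew": 1},
    relations=[("starts_after_end_of", "glovebox_setup", 10),
               ("not_during", "thruster_firing", 0)],
)
```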
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radtke, M.A.
This paper will chronicle the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late-1970s vintage Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980s, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 90s, WPSC chose to investigate an in-place migration to a network of computers able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system, called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability), has exceeded expectations in the areas of cost, performance, flexibility, and reliability.
A Fast Technology Infusion Model for Aerospace Organizations
NASA Technical Reports Server (NTRS)
Shapiro, Andrew A.; Schone, Harald; Brinza, David E.; Garrett, Henry B.; Feather, Martin S.
2006-01-01
A multi-year Fast Technology Infusion initiative proposes a model for aerospace organizations to improve the cost-effectiveness by which they mature new, in-house developed software and hardware technologies for space mission use. The first year task under the umbrella of this initiative will provide the framework to demonstrate and document the fast infusion process. The viability of this approach will be demonstrated on two technologies developed in prior years with internal Jet Propulsion Laboratory (JPL) funding. One hardware technology and one software technology were selected for maturation within one calendar year or less. The overall objective is to achieve cost and time savings in the qualification of technologies. At the end of the recommended three-year effort, we will have demonstrated for six or more in-house developed technologies a clear path to insertion using a documented process that permits adaptation to a broad range of hardware and software projects.
NASA Astrophysics Data System (ADS)
Prezioso, M.; Merrikh-Bayat, F.; Chakrabarti, B.; Strukov, D.
2016-02-01
Artificial neural networks have been receiving increasing attention due to their superior performance in many information processing tasks. Typically, scaling up the size of the network results in better performance and richer functionality. However, large neural networks are challenging to implement in software, and customized hardware is generally required for their practical implementation. In this work, we will discuss our group's recent efforts on the development of such custom hardware circuits, based on hybrid CMOS/memristor circuits, in particular of the CMOL variety. We will start by reviewing the basics of memristive devices and of CMOL circuits. We will then discuss our recent progress towards demonstration of hybrid circuits, focusing on the experimental and theoretical results for artificial neural networks based on crossbar-integrated metal oxide memristors. We will conclude the presentation with a discussion of the remaining challenges and the most pressing research needs.
Implementing real-time robotic systems using CHIMERA II
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1990-01-01
A description is given of the CHIMERA II programming environment and operating system, which was developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, and thus the development of applications tends to be a time-consuming task. The CHIMERA II environment is designed to reduce the development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations which are based on one or more VME-backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel which supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.
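As a toy sketch of choosing between the two scheduling policies the kernel is said to support, with deadline scheduling rendered here as earliest-deadline-first (an assumption) and all task fields invented for the example:

```python
def next_task(ready, policy="deadline"):
    """Pick the next ready task under one of two policies;
    field names here are illustrative, not CHIMERA II's API."""
    if policy == "deadline":                        # earliest deadline first
        return min(ready, key=lambda t: t["deadline"])
    return max(ready, key=lambda t: t["priority"])  # highest priority first

ready = [{"name": "servo",  "priority": 10, "deadline": 0.002},
         {"name": "vision", "priority": 5,  "deadline": 0.033}]
print(next_task(ready)["name"])               # servo (tighter deadline)
print(next_task(ready, "priority")["name"])   # servo (higher priority)
```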
DOT National Transportation Integrated Search
1995-05-01
KEYWORDS : ADVANCED VEHICLE CONTROL & SAFETY SYSTEMS OR AVCSS, COLLISION WARNING/AVOIDANCE SYSTEMS, CRASH REDUCTION, INTELLIGENT VEHICLE INITIATIVE OR IVI : RESULTS FROM THE TESTING OF ELEVEN COLLISION AVOIDANCE SYSTEMS (CAS) FOR LANE CHANGE, ...
Software Auditing: A New Task for U.K. Universities.
ERIC Educational Resources Information Center
Fletcher, Mark
1997-01-01
Based on a pilot project at Exeter University (Devon, England) a software audit, comparing number of copies of software installed with number of license agreements, is described. Discussion includes auditing budgets, workstation questionnaires, the scanner program which detects the hardware configuration and staff training, analysis and…
A Proposal for Modeling Real Hardware, Weather and Marine Conditions for Underwater Sensor Networks
Climent, Salvador; Capella, Juan Vicente; Blanc, Sara; Perles, Angel; Serrano, Juan José
2013-01-01
Network simulators are useful for researching protocol performance, appraising new hardware capabilities and evaluating real application scenarios. However, these tasks can only be achieved when using accurate models and real parameters that enable the extraction of trustworthy results and conclusions. This paper presents an underwater wireless sensor network ecosystem for the ns-3 simulator. This ecosystem is composed of a new energy-harvesting model and a low-cost, low-power underwater wake-up modem model that, alongside existing models, enables the performance of accurate simulations by providing real weather and marine conditions from the location where the real application is to be deployed. PMID:23748171
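The paper's models are ns-3 components and are not reproduced here; as a generic sketch of what an energy-harvesting model must do at each simulation step (integrate harvested minus consumed power and clamp to the storage capacity), with all numbers illustrative:

```python
def step_energy(e_joules, dt_s, p_consumed_w, p_harvest_w, capacity_j):
    """Advance a node's stored energy by one simulation step.
    A generic harvest-minus-load integration, not the actual ns-3 model."""
    e = e_joules + (p_harvest_w - p_consumed_w) * dt_s
    return min(max(e, 0.0), capacity_j)

# Example: a wake-up modem idling at 10 uW while harvesting 5 mW.
e = 50.0                      # joules stored
for _ in range(3600):         # one simulated hour at 1 s resolution
    e = step_energy(e, 1.0, 10e-6, 5e-3, capacity_j=100.0)
print(e)
```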
How to create successful Open Hardware projects — About White Rabbits and open fields
NASA Astrophysics Data System (ADS)
van der Bij, E.; Arruat, M.; Cattin, M.; Daniluk, G.; Gonzalez Cobas, J. D.; Gousiou, E.; Lewis, J.; Lipinski, M. M.; Serrano, J.; Stana, T.; Voumard, N.; Wlostowski, T.
2013-12-01
CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.
NASA Technical Reports Server (NTRS)
Dischinger, H. Charles, Jr.; Stambolian, Damon B.; Miller, Darcy H.
2008-01-01
The National Aeronautics and Space Administration has long applied standards-derived human engineering requirements to the development of hardware and software for use by astronauts while in flight. The most important source of these requirements has been NASA-STD-3000. While there have been several ground systems human engineering requirements documents, none has been applicable to the flight system as handled at NASA's launch facility at Kennedy Space Center. At the time of the development of previous human launch systems, there were other considerations that were deemed more important than developing worksites for ground crews; e.g., hardware development schedule and vehicle performance. However, experience with these systems has shown that failure to design for ground tasks has resulted in launch schedule delays, ground operations that are more costly than they might be, and threats to flight safety. As the Agency begins the development of new systems to return humans to the moon, the new Constellation Program is addressing this issue with a new set of human engineering requirements. Among these requirements is a subset that will apply to the design of the flight components and that is intended to assure ground crew success in vehicle assembly and maintenance tasks. These requirements address worksite design for usability and for ground crew safety.
Control of intelligent robots in space
NASA Technical Reports Server (NTRS)
Freund, E.; Buehler, CH.
1989-01-01
In view of space activities like the International Space Station, the Man-Tended Free-Flyer (MTFF) and free-flying platforms, the development of intelligent robotic systems is gaining increasing importance. The range of applications to be performed by robotic systems in space includes, e.g., the execution of experiments in space laboratories, the servicing and maintenance of satellites and flying platforms, the support of automatic production processes, and the assembly of large network structures. Some of these tasks will require the development of bi-armed or multiple robotic systems including functional redundancy. For the development of robotic systems able to perform this variety of tasks, a hierarchically structured, modular concept of automation is required. This concept is characterized by high flexibility as well as by automatic specialization to the particular sequence of tasks to be performed. On the other hand, it has to be designed such that the human operator can influence or guide the system on different levels of control, supervision, and decision. This leads to requirements for the hardware and software concept which permit a range of application of the robotic systems from telemanipulation to autonomous operation. The realization of this goal requires strong efforts in the development of new methods, software and hardware concepts, and their integration into an automation concept.
Devices and circuits for nanoelectronic implementation of artificial neural networks
NASA Astrophysics Data System (ADS)
Turel, Ozgur
Biological neural networks perform complicated information processing tasks at speeds better than conventional computers based on conventional algorithms. This has inspired researchers to look into the way these networks function, and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, either software or hardware, do not provide either the speed or the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation that it provides, may be used in developing more efficient artificial neural networks. This work consists of two major contributions in this direction. First is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.
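The core operation such crossbar architectures accelerate is an analog vector-by-matrix multiply; an idealized sketch (ignoring device nonlinearity, wire resistance, and noise, with all values illustrative) is:

```python
import numpy as np

def crossbar_vmm(v_in, g_pos, g_neg):
    """Vector-by-matrix multiply as a resistive crossbar computes it:
    output currents are sums of input voltages weighted by conductances
    (Kirchhoff's current law). Differential pairs (g_pos - g_neg)
    encode signed synaptic weights."""
    return v_in @ (g_pos - g_neg)

rng = np.random.default_rng(0)
v = rng.uniform(-0.1, 0.1, 64)              # input voltages (V)
gp = rng.uniform(1e-6, 1e-4, (64, 16))      # conductances (S), illustrative
gn = rng.uniform(1e-6, 1e-4, (64, 16))
i_out = crossbar_vmm(v, gp, gn)             # output currents (A)
```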
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J
2014-01-01
Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods.
NASA Astrophysics Data System (ADS)
Wang, Zi; Pakzad, Shamim; Cheng, Liang
2012-04-01
In recent years, the wireless sensor network (WSN), as a powerful tool, has been widely applied to structural health monitoring (SHM) due to its low cost of deployment. Several commercial WSN hardware platforms have been developed and used for structural monitoring applications [1,2]. A typical design of a node includes a sensor board and a mote connected to it. Sensing units, analog filters and analog-to-digital converters (ADCs) are integrated on the sensor board, and the mote consists of a microcontroller and a wireless transceiver. Generally, there is a set of sensor boards compatible with the same model of mote, and the selection of the sensor board depends on the specific application. A WSN system based on this node design lacks the capability of interrupting its scheduled task to start a higher-priority task; this shortcoming is rooted in the hardware architecture of the node. The proposed sandwich-node architecture is designed to remedy this shortcoming and enable task preemption. A sandwich node is composed of a sensor board and two motes. The first mote is dedicated to managing the sensor board and processing acquired data. The second mote controls the first mote via commands. A prototype has been implemented using Imote2 and verified by an emulation in which one mote is triggered by a remote base station and then preempts the running task at the other mote to handle an emergency event.
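A minimal emulation of the sandwich idea, with the second mote reduced to an event that preempts the first mote's scheduled loop (function names invented for illustration, not the Imote2 API):

```python
import threading, time

preempt = threading.Event()

def acquire_sample():
    time.sleep(0.1)                 # stand-in for scheduled data acquisition

def handle_emergency():
    print("high-priority event handled")

def sensing_mote(steps=20):
    """First mote: runs the scheduled task but yields to emergencies,
    which is what the single-mote design cannot do."""
    for _ in range(steps):
        if preempt.is_set():
            handle_emergency()
            preempt.clear()
        acquire_sample()

threading.Thread(target=sensing_mote).start()
time.sleep(0.5)
preempt.set()                       # the second mote's command, emulated here
```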
ERIC Educational Resources Information Center
Polzella, Donald J.; And Others
Modern aircrew training devices (ATDs) are equipped with sophisticated hardware and software capabilities, known as advanced instructional features (AIFs), that permit a simulator instructor to prepare briefings, manage training, vary task difficulty/fidelity, monitor performance, and provide feedback for flight simulation training missions. The…
Crash test and evaluation of temporary wood sign support system for large guide signs.
DOT National Transportation Integrated Search
2016-07-01
The objective of this research task was to evaluate the impact performance of a temporary wood sign support : system for large guide signs. It was desired to use existing TxDOT sign hardware in the design to the extent possible. : The full-scale cras...
Low-Latency Embedded Vision Processor (LLEVS)
2016-03-01
3.2.3 Task 3: Projected Performance Analysis of FPGA-based Vision Processor … 3.2.3.1 Algorithms Latency Analysis … Field Programmable Gate Array Custom Hardware for Real-Time Multiresolution Analysis … conduct data analysis for performance projections. The data acquired through measurements, simulation and estimation provide the requisite platform for…
14 CFR 1232.103 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and apply to the conduct of all NASA activities related to the care and use of animal subjects. (a) Activity includes research, testing of hardware for animal use, flight experimentation, and any other tasks... Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CARE AND USE OF ANIMALS IN THE CONDUCT OF...
14 CFR 1232.103 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and apply to the conduct of all NASA activities related to the care and use of animal subjects. (a) Activity includes research, testing of hardware for animal use, flight experimentation, and any other tasks... Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CARE AND USE OF ANIMALS IN THE CONDUCT OF...
14 CFR 1232.103 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... and apply to the conduct of all NASA activities related to the care and use of animal subjects. (a) Activity includes research, testing of hardware for animal use, flight experimentation, and any other tasks... Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CARE AND USE OF ANIMALS IN THE CONDUCT OF...
14 CFR 1232.103 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and apply to the conduct of all NASA activities related to the care and use of animal subjects. (a) Activity includes research, testing of hardware for animal use, flight experimentation, and any other tasks... Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CARE AND USE OF ANIMALS IN THE CONDUCT OF...
NASA Technical Reports Server (NTRS)
Martin, F. H.
1972-01-01
An overview of the executive system design task is presented. The flight software executive system, software verification, phase B baseline avionics system review, higher order languages and compilers, and computer hardware features are also discussed.
In-line task 57: Component evaluation. [of circuit breakers, panel switches, etc. for space shuttle
NASA Technical Reports Server (NTRS)
Boykin, J. C.
1974-01-01
Design analysis tests were performed on selected power switching components to determine the possible applicability of off-the-shelf hardware to the Space Shuttle. Various characteristics of these devices were also evaluated to determine the most desirable properties for Space Shuttle use.
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.
2002-01-01
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or number of pixels, or to change the number of clusters. These changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation, with the added advantage that the parameterized design compiles approximately three times faster than the original.
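For reference, the algorithm being parameterized is ordinary k-means over pixel spectra; a compact software baseline, using a Manhattan distance as a hardware-friendly choice (though not necessarily the paper's metric), is:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Plain k-means over multispectral pixels (n_pixels x n_channels).
    The hardware version fixes channel counts and bit widths at design
    time; here every size is an ordinary runtime parameter."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Manhattan (city-block) distance from every pixel to every center.
        d = np.abs(pixels[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```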
Modeling and design of a cone-beam CT head scanner using task-based imaging performance optimization
NASA Astrophysics Data System (ADS)
Xu, J.; Sisniega, A.; Zbijewski, W.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2016-04-01
Detection of acute intracranial hemorrhage (ICH) is important for diagnosis and treatment of traumatic brain injury, stroke, postoperative bleeding, and other head and neck injuries. This paper details the design and development of a cone-beam CT (CBCT) system developed specifically for the detection of low-contrast ICH in a form suitable for application at the point of care. Recognizing such a low-contrast imaging task to be a major challenge in CBCT, the system design began with a rigorous analysis of task-based detectability including critical aspects of system geometry, hardware configuration, and artifact correction. The imaging performance model described the three-dimensional (3D) noise-equivalent quanta using a cascaded systems model that included the effects of scatter, scatter correction, hardware considerations of complementary metal-oxide semiconductor (CMOS) and flat-panel detectors (FPDs), and digitization bit depth. The performance was analyzed with respect to a low-contrast (40-80 HU), medium-frequency task representing acute ICH detection. The task-based detectability index was computed using a non-prewhitening observer model. The optimization was performed with respect to four major design considerations: (1) system geometry (including source-to-detector distance (SDD) and source-to-axis distance (SAD)); (2) factors related to the x-ray source (including focal spot size, kVp, dose, and tube power); (3) scatter correction and selection of an antiscatter grid; and (4) x-ray detector configuration (including pixel size, additive electronics noise, field of view (FOV), and frame rate, including both CMOS and a-Si:H FPDs). Optimal design choices were also considered with respect to practical constraints and available hardware components. The model was verified in comparison to measurements on a CBCT imaging bench as a function of the numerous design parameters mentioned above. An extended geometry (SAD = 750 mm, SDD = 1100 mm) was found to be advantageous in terms of patient dose (20 mGy) and scatter reduction, while a more isocentric configuration (SAD = 550 mm, SDD = 1000 mm) was found to give a more compact and mechanically favorable configuration with minor tradeoff in detectability. An x-ray source with a 0.6 mm focal spot size provided the best compromise between spatial resolution requirements and x-ray tube power. Use of a modest anti-scatter grid (8:1 GR) at a 20 mGy dose provided slight improvement (~5-10%) in the detectability index, but the benefit was lost at reduced dose. The potential advantages of CMOS detectors over FPDs were quantified, showing that both detectors provided sufficient spatial resolution for ICH detection, while the former provided a potentially superior low-dose performance, and the latter provided the requisite FOV for volumetric imaging in a centered-detector geometry. Task-based imaging performance modeling provides an important starting point for CBCT system design, especially for the challenging task of ICH detection, which is somewhat beyond the capabilities of existing CBCT platforms. The model identifies important tradeoffs in system geometry and hardware configuration, and it supports the development of a dedicated CBCT system for point-of-care application. A prototype suitable for clinical studies is in development based on this analysis.
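For reference, the non-prewhitening detectability index mentioned above is conventionally written as follows (the standard cascaded-systems form; the paper's exact normalization may differ):

```latex
% d' for the non-prewhitening (NPW) observer, over spatial frequencies f:
% MTF = system modulation transfer function, NPS = noise-power spectrum,
% W_task = task function of the low-contrast ICH detection task.
d'^2_{\mathrm{NPW}} =
  \frac{\left[\iiint \mathrm{MTF}^2(\mathbf{f})\,
        W_{\mathrm{task}}^2(\mathbf{f})\, d\mathbf{f}\right]^2}
       {\iiint \mathrm{MTF}^2(\mathbf{f})\,
        W_{\mathrm{task}}^2(\mathbf{f})\,
        \mathrm{NPS}(\mathbf{f})\, d\mathbf{f}}
```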
Diamond, Alan; Nowotny, Thomas; Schmuker, Michael
2016-01-01
Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950
Kalia, Anoop; Khatri, Kavin; Singh, Jagdeep; Bansal, Kapil; Sagy, Mohammed
2016-01-01
Introduction: Migration of the cerclage wires used in tension band wiring of patella fractures into the posterior soft tissue envelope surrounding the knee joint has rarely been reported. Case Presentation: A 60-year-old woman presented to us with pain over the medial aspect of the right knee joint. She had undergone open reduction and internal fixation for a patellar fracture sustained 4 years earlier, with subsequent Kirschner wire (K-wire) removal about 2 years before presentation. X-rays of the knee joint showed that the cerclage wire used in the tension band construct, which had been left in place, had broken into multiple pieces lying in the soft tissue envelope surrounding the knee joint, with one piece migrated to the popliteal fossa. On examination the patient had no distal neurovascular deficit. Her pain was due to osteoarthritic changes on the medial side of the knee joint rather than the broken wire pieces. The patient was advised to undergo total knee replacement along with removal of the broken wires, but she refused any surgery and is kept on regular follow-up. Conclusion: This case report summarizes a rare complication resulting from failure of hardware used to fix patella fractures, and it highlights potential unheralded complications from broken wires, along with the importance of early recognition and removal of broken hardware by surgeons. PMID:28116277
Virtualization for the LHCb Online system
NASA Astrophysics Data System (ADS)
Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko
2011-12-01
Virtualization has long been advertised by the IT industry as a way to cut down cost, optimise resource usage and manage the complexity of large data-centers. The great number and huge heterogeneity of hardware, both industrial and custom-made, have up to now led to reluctance in adopting virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and manageability of the whole system. We have evaluated the available hypervisors / virtualization solutions and found that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, which allows us to migrate services on physical machines smoothly to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for virtualization with a cheaper iSCSI solution; this will allow us to move all servers and related services to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plugin cards.
Automated personnel data base system specifications, Task V. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartley, H.J.; Bocast, A.K.; Deppner, F.O.
1978-09-01
This document is the General Research Corporation report on Task V of a study for the Office of Inspection and Enforcement of the Nuclear Regulatory Commission (NRC/IE). The full title of this study is ''Development of Qualification Requirements, Training Programs, Career Plans, and Methodologies for Effective Management and Training of Inspection and Enforcement Personnel.'' Task V required the development of an automated personnel data base system for NRC/IE. This system is identified as the NRC/IE Personnel, Assignment, Qualifications, and Training System (PAQTS). This Task V report provides the documentation for PAQTS including the Functional Requirements Document (FRD), the Data Requirements Document (DRD), the Hardware and Software Capabilities Assessment, and the Detailed Implementation Schedule. Specific recommendations to facilitate implementation of PAQTS are also included.
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy-and-masking technique and the imprecise computation model. Since tasks in hard real-time systems have stringent timing constraints, redundancy and masking are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
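The abstract does not give the scheduling algorithm itself, so as an illustration only, here is a sketch of one simple ingredient such a scheme could build on: replicating each periodic task and checking rate-monotonic schedulability of the replicas partitioned first-fit across processors, using the Liu–Layland utilization bound. The imprecise-computation aspect is not modeled, and a real fault-tolerant scheme would also force replicas onto distinct processors, which this sketch omits:

```python
def rm_schedulable(tasks, replicas=2, processors=4):
    """Crude feasibility check: replicate each periodic task,
    partition the replicas across processors first-fit in
    rate-monotonic (shortest-period-first) order, and test each
    processor against the Liu-Layland utilization bound.
    tasks: list of (computation_time, period) tuples."""
    bound = lambda n: n * (2 ** (1 / n) - 1)
    loads = [[] for _ in range(processors)]
    jobs = [(c, p) for (c, p) in tasks for _ in range(replicas)]
    for c, p in sorted(jobs, key=lambda j: j[1]):
        for proc in loads:
            u = sum(ci / pi for ci, pi in proc) + c / p
            if u <= bound(len(proc) + 1):
                proc.append((c, p))
                break
        else:
            return False          # some replica fits on no processor
    return True

print(rm_schedulable([(1, 4), (2, 10), (1, 20)], replicas=2, processors=3))
```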
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2015-02-01
In recent years video traffic has become the dominant application on the Internet, with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily reducing hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Use cases for these applications include public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Towards optimal performance, the design or optimisation of each of these applications should be context-aware and task-oriented, with the characteristics of the video stream (frame rate, spatial resolution, bandwidth, etc.) chosen to match the use-case requirements. For example, in the security domain, a task-oriented consideration may be that higher-resolution video is required to identify an intruder than to simply detect his presence, whilst in the same case contextual factors, such as the requirement to transmit over a resource-limited wireless link, may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting video streams during transmission to ensure that the task can be successfully completed. We first define two principal classes of tasks: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of task-oriented attributes (minimum spatial resolution, minimum frame rate, etc.). For example, in the detection class, profiles for intruder detection will require different temporal characteristics (frame rate) from those used for detection of high-motion objects such as vehicles or aircraft. We also define a set of contextual attributes associated with each instance of a running application, including resource constraints imposed by the transmission system employed and the hardware platforms used as source and destination of the video stream. Empirical results are presented and analysed to demonstrate the advantages of the proposed schemes.
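A minimal sketch of the profile-matching idea described above, with entirely illustrative profile names and attribute values (the paper's actual profiles and thresholds are not given in the abstract):

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    name: str
    min_width: int       # minimum spatial resolution (pixels)
    min_fps: float       # minimum frame rate
    min_kbps: int        # bandwidth needed at that operating point

# Illustrative values only.
PROFILES = [
    TaskProfile("intruder detection",  320,  5.0,  300),
    TaskProfile("vehicle detection",   320, 25.0,  900),
    TaskProfile("face recognition",   1280, 10.0, 2500),
]

def best_profile(task_class, available_kbps):
    """Pick the named task profile if the link can carry it; otherwise
    fall back to the most capable profile the link can still support."""
    wanted = next(p for p in PROFILES if p.name == task_class)
    if wanted.min_kbps <= available_kbps:
        return wanted
    feasible = [p for p in PROFILES if p.min_kbps <= available_kbps]
    return max(feasible, key=lambda p: p.min_kbps) if feasible else None

print(best_profile("face recognition", available_kbps=1000))
```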
WISP information display system user's manual
NASA Technical Reports Server (NTRS)
Alley, P. L.; Smith, G. R.
1978-01-01
The wind shears program (WISP) supports the collection of data on magnetic tape for permanent storage or analysis. The document structure provides: (1) the hardware and software configuration required to execute the WISP system and the start-up procedure from a power-down condition; (2) the data collection task, calculations performed on the incoming data, and a description of the magnetic tape format; (3) the data display task and examples of displays obtained from execution of the real-time simulation program; and (4) the raw data dump task and examples of operator actions required to obtain the desired format. The procedures outlined herein allow continuous data collection at the expense of real-time visual displays.
NASA Astrophysics Data System (ADS)
Ginosar, Ran; Aviely, Peleg; Liran, Tuvia; Alon, Dov; Dobkin, Reuven; Goldberg, Michael
2013-08-01
RC64, a novel 64-core many-core signal processing chip, targets DSP performance of 12.8 GIPS, 100 GOPS and 12.8 single precision GFLOPS while dissipating only 3 Watts. RC64 employs advanced DSP cores, a multi-bank shared memory and a hardware scheduler, supports DDR2 memory and communicates over five proprietary 6.4 Gbps channels. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 200 MHz ASIC on Tower 130nm CMOS technology, assembled in a hermetically sealed ceramic QFP package and qualified to the highest space standards.
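The "sequential fine-grain tasks plus a separate task map" programming model can be illustrated in software. The sketch below is a hypothetical dependency-driven dispatcher, not RC64's hardware scheduler: a task becomes eligible once every predecessor named in the task map has completed.

```python
from collections import deque

def run_task_map(tasks, deps):
    """Dispatch fine-grain tasks in dependency order: a task becomes
    ready once all of its predecessors in the task map have completed.
    tasks: {name: callable}; deps: {name: set of prerequisite names}."""
    pending = {t: set(deps.get(t, ())) for t in tasks}
    ready = deque(t for t, d in pending.items() if not d)
    done = []
    while ready:
        t = ready.popleft()
        tasks[t]()                       # run the task body
        done.append(t)
        for u, d in pending.items():     # release dependents
            if t in d:
                d.remove(t)
                if not d and u not in done and u not in ready:
                    ready.append(u)
    return done

order = run_task_map(
    {"fft": lambda: None, "filter": lambda: None, "decode": lambda: None},
    {"filter": {"fft"}, "decode": {"filter"}},
)
print(order)   # ['fft', 'filter', 'decode']
```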
Magnetosensitive neurons mediate geomagnetic orientation in Caenorhabditis elegans
Vidal-Gadea, Andrés; Ward, Kristi; Beron, Celia; Ghorashian, Navid; Gokce, Sertan; Russell, Joshua; Truong, Nicholas; Parikh, Adhishri; Gadea, Otilia; Ben-Yakar, Adela; Pierce-Shimomura, Jonathan
2015-01-01
Many organisms spanning from bacteria to mammals orient to the earth's magnetic field. For a few animals, central neurons responsive to earth-strength magnetic fields have been identified; however, magnetosensory neurons have yet to be identified in any animal. We show that the nematode Caenorhabditis elegans orients to the earth's magnetic field during vertical burrowing migrations. Well-fed worms migrated up, while starved worms migrated down. Populations isolated from around the world migrated at angles to the magnetic vector that would optimize vertical translation in their native soil, with northern- and southern-hemisphere worms displaying opposite migratory preferences. Magnetic orientation and vertical migrations required the TAX-4 cyclic nucleotide-gated ion channel in the AFD sensory neuron pair. Calcium imaging showed that these neurons respond to magnetic fields even without synaptic input. C. elegans may have adapted magnetic orientation to simplify their vertical burrowing migration by reducing the orientation task from three dimensions to one. DOI: http://dx.doi.org/10.7554/eLife.07493.001 PMID:26083711
Development of a preprototype times wastewater recovery subsystem, addendum
NASA Technical Reports Server (NTRS)
Dehner, G. F.
1984-01-01
Six tasks are described reflecting subsystem hardware and software modifications and test evaluation of a TIMES wastewater recovery subsystem. The overall results are illustrated in a figure which shows the water production rate, the specific energy corrected to 26.5 VDC, and the product water conductivity at various points in the testing. Four tasks are described reflecting studies performed to develop a preliminary design concept for a next generation TIMES. The overall results of the study are the completion of major design analyses and preliminary configuration layout drawings.
System For Research On Multiple-Arm Robots
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Hayati, Samad; Tso, Kam S.; Hayward, Vincent
1991-01-01
Kali system of computer programs and equipment provides environment for research on distributed programming and distributed control of coordinated-multiple-arm robots. Suitable for telerobotics research involving sensing and execution of low level tasks. Software and configuration of hardware designed flexible so system modified easily to test various concepts in control and programming of robots, including multiple-arm control, redundant-arm control, shared control, traded control, force control, force/position hybrid control, design and integration of sensors, teleoperation, task-space description and control, methods of adaptive control, control of flexible arms, and human factors.
PTS performance by flight- and control-group macaques
NASA Technical Reports Server (NTRS)
Washburn, D. A.; Rumbaugh, D. M.; Richardson, W. K.; Gulledge, J. P.; Shlyk, G. G.; Vasilieva, O. N.
2000-01-01
A total of 25 young monkeys (Macaca mulatta) were trained with the Psychomotor Test System, a package of software tasks and computer hardware developed for spaceflight research with nonhuman primates. Two flight monkeys and two control monkeys were selected from this pool and performed a psychomotor task before and after the Bion 11 flight or a ground-control period. Monkeys from both groups showed significant disruption in performance after the 14-day flight or simulation (plus one anesthetized day of biopsies and other tests), and this disruption appeared to be magnified for the flight animal.
On the applicability of STDP-based learning mechanisms to spiking neuron network models
NASA Astrophysics Data System (ADS)
Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.
2016-11-01
Ways of creating a practically effective method for spiking neuron network learning that would be appropriate for implementation in neuromorphic hardware and at the same time based on biologically plausible plasticity rules, namely on STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability by different STDP rules is evaluated. The usability of alternative combined learning schemes, involving artificial and spiking neuron models, is demonstrated on the iris benchmark task and on the practical task of gender recognition.
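For reference, the classic pair-based STDP rule discussed here has a compact closed form: each pre/post spike pair contributes a weight change that decays exponentially with the pairing interval, potentiating when the presynaptic spike leads and depressing when it lags. A minimal sketch, with parameter values that are illustrative rather than the paper's:

```python
import numpy as np

def stdp_weight_change(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Total weight change under the pair-based STDP rule (times in ms):
    potentiation when the presynaptic spike precedes the postsynaptic
    one, depression otherwise."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw

# A causally correlated pre/post pairing should potentiate on balance:
pre = [10.0, 30.0, 50.0]
post = [12.0, 33.0, 51.0]
print(stdp_weight_change(pre, post))
```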
Liquid Nitrogen Removal of Critical Aerospace Materials
NASA Technical Reports Server (NTRS)
Noah, Donald E.; Merrick, Jason; Hayes, Paul W.
2005-01-01
Identification of innovative solutions to unique materials problems is an everyday quest for members of the aerospace community. Finding a technique that will minimize costs, maximize throughput, and generate quality results is always the target. United Space Alliance Materials Engineers recently conducted such a search in their drive to return the Space Shuttle fleet to operational status. The removal of high performance thermal coatings from solid rocket motors represents a formidable task during post-flight disassembly of reusable expended hardware. The removal of these coatings from unfired motors increases the complexity and safety requirements while reducing the available facilities and approved processes. A temporary solution to this problem was identified, tested and approved during the Solid Rocket Booster (SRB) return-to-flight activities. Utilization of ultra-high-pressure liquid nitrogen (LN2) to strip the protective coating from assembled space shuttle hardware marked the first such use of the technology in the aerospace industry. This process provides a configurable stream of LN2 at pressures of up to 55,000 psig. The performance of a one-time certification for the removal of thermal ablatives from SRB hardware involved extensive testing to ensure adequate material removal without causing undesirable damage to the residual materials or aluminum substrates. Testing to establish appropriate process parameters such as flow, temperature and pressure of the liquid nitrogen stream provided an initial benchmark for process testing. Equipped with these initial parameters, engineers were then able to establish more detailed test criteria that set the process limits. Quantifying the potential for aluminum hardware damage represented the greatest hurdle for satisfying engineers as to the safety of this process. Extensive testing for aluminum erosion, surface profiling, and substrate weight loss was performed. This successful project clearly demonstrated that the liquid nitrogen jet possesses unique strengths that align remarkably well with the unusual challenges that space hardware and missile manufacturers face on a regular basis. Performance of this task within the confines of a critical manufacturing facility marks a milestone in advanced processing.
The Art of Space Flight Exercise Hardware: Design and Implementation
NASA Technical Reports Server (NTRS)
Beyene, Nahom M.
2004-01-01
The design of space flight exercise hardware depends on experience with crew health maintenance in a microgravity environment, history in development of flight-quality exercise hardware, and a foundation for certifying proper project management and design methodology. Developed over the past 40 years, the expertise in designing exercise countermeasures hardware at the Johnson Space Center stems from these three aspects of design. The medical community has steadily pursued an understanding of physiological changes in humans in a weightless environment and methods of counteracting negative effects on the cardiovascular and musculoskeletal systems. The effects of weightlessness extend to the pulmonary and neurovestibular systems as well, with conditions ranging from motion sickness to loss of bone density. Results have shown losses in water weight and muscle mass in antigravity muscle groups. With the support of university-based research groups and partner space agencies, NASA has identified exercise as the primary countermeasure for long-duration space flight. The history of exercise hardware began during the Apollo Era and leads directly to the present hardware on the International Space Station. Under the classifications of aerobic and resistive exercise, there is a clear line of development from the early devices to the countermeasures hardware used today. In support of all engineering projects, the engineering directorate has created a structured framework for project management. Engineers have identified standards and "best practices" to promote efficient and elegant design of space exercise hardware. The quality of space exercise hardware depends on how well hardware requirements are justified by exercise performance guidelines and crew health indicators. When considering the microgravity environment of the device, designers must consider the performance of the hardware separately from the combined human-in-hardware system. Astronauts are the caretakers of the hardware while it is deployed and conduct all sanitization, calibration, and maintenance for the devices. Thus, hardware designs must account for these issues with a goal of minimizing the crew time on orbit required to complete these tasks. In the future, humans will venture to Mars, and exercise countermeasures will play a critical role in allowing us to continue in our spirit of exploration. NASA will benefit from further experimentation on Earth, through the International Space Station, and with advanced biomechanical models to quantify how each device counteracts specific symptoms of weightlessness. With the continued support of international space agencies and the academic research community, we will usher in the next frontier in human space exploration.
Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.
Smyth, Rachael E; Oram Cardy, Janis; Purcell, David
2017-06-01
This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer-grade digital camera. A visual inspection time task was recorded using short high-speed video clips, and the timing as reported by the task's program was compared to the timing as recorded in the video clips. Discrepancies in these two timing reports were investigated further and, based on the display refresh rate, a decision was made as to whether the discrepancy was large enough to affect the results as reported by the task. In this particular study, the errors in timing were not large enough to affect the reported results. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing of any software program on any operating system and display.
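The core of such a timing test is a comparison between the duration the task software reports and the duration recovered by counting camera frames, judged against the display refresh interval. A small sketch under assumed camera and display rates; the function and its threshold are illustrative, not the study's exact procedure:

```python
def verify_timing(reported_ms, frames_visible, camera_fps=240.0,
                  display_hz=60.0):
    """Compare a stimulus duration reported by the task software with
    the duration inferred from counting frames in a high-speed video.
    Flag the trial only if the discrepancy exceeds one display refresh,
    the smallest step by which the monitor can change the image."""
    measured_ms = frames_visible * 1000.0 / camera_fps
    refresh_ms = 1000.0 / display_hz
    error_ms = measured_ms - reported_ms
    return error_ms, abs(error_ms) > refresh_ms

err, flagged = verify_timing(reported_ms=50.0, frames_visible=13)
print(f"error = {err:.1f} ms, exceeds one refresh: {flagged}")
```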
REVEAL: Software Documentation and Platform Migration
NASA Technical Reports Server (NTRS)
Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.
2008-01-01
The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
The Astronaut-Athlete: Optimizing Human Performance in Space.
Hackney, Kyle J; Scott, Jessica M; Hanson, Andrea M; English, Kirk L; Downs, Meghan E; Ploutz-Snyder, Lori L
2015-12-01
It is well known that long-duration spaceflight results in deconditioning of neuromuscular and cardiovascular systems, leading to a decline in physical fitness. On reloading in gravitational environments, reduced fitness (e.g., aerobic capacity, muscular strength, and endurance) could impair human performance, mission success, and crew safety. The level of fitness necessary for the performance of routine and off-nominal terrestrial mission tasks remains an unanswered and pressing question for scientists and flight physicians. To mitigate fitness loss during spaceflight, resistance and aerobic exercise are the most effective countermeasures available to astronauts. Currently, 2.5 h/d, 6-7 d/wk is allotted in crew schedules for exercise to be performed on highly specialized hardware on the International Space Station (ISS). Exercise hardware provides up to 273 kg of loading capability for resistance exercise, treadmill speeds between 0.44 and 5.5 m/s, and cycle workloads from 0 to 350 W. Compared to ISS missions, future missions beyond low earth orbit will likely be accomplished with less vehicle volume and power allocated for exercise hardware. Concomitant factors, such as diet and age, will also affect the physiologic responses to exercise training (e.g., anabolic resistance) in the space environment. Research into the potential optimization of exercise countermeasures through use of dietary supplementation and pharmaceuticals may assist in reducing physiological deconditioning during long-duration spaceflight and has the potential to enhance performance of occupationally related astronaut tasks (e.g., extravehicular activity, habitat construction, equipment repairs, planetary exploration, and emergency response).
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
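The two algorithmic ideas named above — removing most connections and representing the survivors as binary integers — can be sketched in a few lines. This is an illustration of magnitude pruning plus sign binarization, not the deep adaptive network's actual training procedure:

```python
import numpy as np

def sparsify_and_binarize(weights, keep_fraction=0.05):
    """Keep only the largest-magnitude connections and represent the
    survivors by sign alone, illustrating the 'remove most connections,
    store the rest as binary integers' idea from the abstract."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    binary = np.where(weights >= 0, 1, -1).astype(np.int8)
    return binary * mask, mask.mean()

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
sparse_w, density = sparsify_and_binarize(w, keep_fraction=0.001)
print(f"density after pruning: {density:.4%}")
```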
Software and Hardware Utilization in Computer Medicine Education.
ERIC Educational Resources Information Center
Pitts, Gerald N.; Bateman, Barry L.
Computers are currently being used to perform medical tasks such as: (1) taking medical histories; (2) patient care and health-unit care management; (3) clinical and laboratory work; (4) physiological signal monitoring; and (5) multiphasic screening. In a survey of over 200 institutions, over 339 computer language applications were found, many of…
A Survey on the Use of Microcomputers in Special Libraries.
ERIC Educational Resources Information Center
Krieger, Tillie
1986-01-01
Describes a survey on the use of microcomputers in special libraries. The discussion of the findings includes types of hardware and software in use; applications in public services, technical processes, and administrative tasks; data back-up techniques; training received; evaluation of software; and future plans for microcomputer applications. (1…
Data Acquisition System(DAS) Sustaining Engineering
NASA Technical Reports Server (NTRS)
1998-01-01
This paper presents general information describing the Data Acquisition System contract, a summary of objectives, tasks performed and completed. The hardware deliverables which are comprised of: 1) Two ground DAS units; 2) Two flight DAS units; 3) Logistic spares; and 4) Shipping containers are described. Also included are the data requirements and scope of the contract.
Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems
ERIC Educational Resources Information Center
Gifford, Christopher M.
2009-01-01
This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely under studied, and represents an area where further research is needed to…
NASA Astrophysics Data System (ADS)
Engel, P.; Schweimler, B.
2016-04-01
The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of the supervised objects over long periods. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the Neubrandenburg University of Applied Sciences (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It also discusses how the development effort for networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Haelterman, Marc; Massar, Serge
2017-05-01
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
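For readers unfamiliar with output feedback in reservoir computing: in free-running mode the trained readout value replaces the external input at the next time step, which is what enables autonomous time-series generation. A minimal echo-state-network sketch of that loop; the readout here is left untrained for brevity, whereas in practice it would first be fitted (e.g., by ridge regression) on a teacher signal:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                    # reservoir size
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
W_fb = rng.uniform(-1, 1, size=N)          # output-feedback weights
W_out = rng.normal(size=N) * 0.01          # readout (untrained placeholder)

def free_run(steps, x=None, y=0.0):
    """Autonomous generation: the readout value is fed back into the
    reservoir instead of an external input, as in the output-feedback
    mode described in the abstract."""
    x = np.zeros(N) if x is None else x
    outputs = []
    for _ in range(steps):
        x = np.tanh(W @ x + W_fb * y)      # reservoir update with feedback
        y = W_out @ x                      # linear readout
        outputs.append(y)
    return outputs

print(free_run(5))
```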
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
Digital Autonomous Terminal Access Communication (DATAC) system
NASA Technical Reports Server (NTRS)
Novacki, Stanley M., III
1987-01-01
In order to accommodate the increasing number of computerized subsystems aboard today's more fuel-efficient aircraft, the Boeing Co. has developed the DATAC (Digital Autonomous Terminal Access Control) bus to minimize the need for point-to-point wiring to interconnect these various systems, thereby reducing total aircraft weight and maintaining an economical flight configuration. The DATAC bus is essentially a local area network providing interconnections for any of the flight management and control systems aboard the aircraft. The task of developing a Bus Monitor Unit was broken down into four subtasks: (1) providing a hardware interface between the DATAC bus and the Z8000-based microcomputer system to be used as the bus monitor; (2) establishing a communication link between the Z8000 system and a CP/M-based computer system; (3) generating data reduction and display software to output data to the console device; and (4) developing a DATAC Terminal Simulator to facilitate testing of the hardware and software which transfer data between the DATAC bus and the operator's console in a near-real-time environment. These tasks are briefly discussed.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
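Performability, as defined above, weights the performance level of each system state by the probability of being in that state under the Markov reliability model. A small illustrative example with a hypothetical three-state model; the rates and reward levels are invented for the sketch and are not AIPS values:

```python
import numpy as np
from scipy.linalg import expm   # matrix exponential for the Markov model

# Hypothetical 3-state Markov reliability model: both channels up,
# one channel up (degraded), system failed. Rates are illustrative.
lam = 1e-4                      # per-hour failure rate of one channel
Q = np.array([[-2 * lam, 2 * lam,  0.0],
              [     0.0,    -lam,  lam],
              [     0.0,     0.0,  0.0]])    # generator matrix
reward = np.array([1.0, 0.6, 0.0])           # performance level per state

def performability(t):
    """Expected performance at time t: transient state probabilities
    from the Markov model weighted by each state's performance level."""
    p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)
    return p @ reward

print(performability(1000.0))
```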
Data management system advanced development
NASA Technical Reports Server (NTRS)
Douglas, Katherine; Humphries, Terry
1990-01-01
The Data Management System (DMS) Advanced Development task provides for the development of concepts, new tools, DMS services, and for the testing of the Space Station DMS hardware and software. It also provides for the development of techniques capable of determining the effects of system changes/enhancements, additions of new technology, and/or hardware and software growth on system performance. This paper will address the built-in characteristics which will support network monitoring requirements in the design of the evolving DMS network implementation, functional and performance requirements for a real-time, multiprogramming, multiprocessor operating system, and the possible use of advanced development techniques such as expert systems and artificial intelligence tools in the DMS design.
NASA Technical Reports Server (NTRS)
Hartley, Garen
2018-01-01
NASA's vision for humans pursuing deep space flight involves the collection of science in low earth orbit aboard the International Space Station (ISS). As a service to the science community, Johnson Space Center (JSC) has developed hardware and processes to preserve collected science on the ISS and transfer it safely back to the Principal Investigators. This hardware includes an array of freezers, refrigerators, and incubators. The Cold Stowage team is part of the International Space Station (ISS) program. JSC manages the operation, support and integration tasks provided by Jacobs Technology and the University of Alabama Birmingham (UAB). Cold Stowage provides controlled environments to meet temperature requirements during ascent, on-orbit operations and return, in relation to International Space Station Payload Science.
Shielded battery syndrome: a new hardware complication of deep brain stimulation.
Chelvarajah, Ramesh; Lumsden, Daniel; Kaminska, Margaret; Samuel, Michael; Hulse, Natasha; Selway, Richard P; Lin, Jean-Pierre; Ashkan, Keyoumars
2012-01-01
Deep brain stimulation hardware is constantly advancing. The last few years have seen the introduction of rechargeable cell technology into the implanted pulse generator design, allowing for longer battery life and fewer replacement operations. The Medtronic® system requires an additional pocket adaptor when revising a non-rechargeable battery such as their Kinetra® to their rechargeable Activa® RC. This additional hardware item can, if it migrates superficially, become an impediment to the recharging of the battery and negate the intended technological advance. To report the emergence of the 'shielded battery syndrome', which has not been previously described. We reviewed our deep brain stimulation database to identify cases of recharging difficulties reported by patients with Activa RC implanted pulse generators. Two cases of shielded battery syndrome were identified. The first required surgery to reposition the adaptor to the deep aspect of the subcutaneous pocket. In the second case, it was possible to perform external manual manipulation to restore the adaptor to its original position deep to the battery. We describe strategies to minimise the occurrence of the shielded battery syndrome and advise vigilance in all patients who experience difficulty with recharging after replacement surgery of this type for the implanted pulse generator. Copyright © 2012 S. Karger AG, Basel.
Hong, Keum-Shik; Khan, Muhammad Jawad
2017-01-01
In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910
NASA Technical Reports Server (NTRS)
St.denis, R. W.
1981-01-01
The feasibility of using optical data handling methods to transmit payload checkout and telemetry is discussed. Optical communications are superior to conventional communication systems for the following reasons: high data capacity optical channels; small and lightweight optical cables; and optical signal immunity to electromagnetic interference. Task number one analyzed the ground checkout data requirements that may be expected from the payload community. Task number two selected the optical approach based on the interface requirements, the location of the interface, the amount of time required to reconfigure hardware, and the method of transporting the optical signal. Task number three surveyed and selected optical components for the two payload data links. Task number four made a qualitative comparison of the conventional electrical communication system and the proposed optical communication system.
Ames Research Center SR&T program and earth observations
NASA Technical Reports Server (NTRS)
Poppoff, I. G.
1972-01-01
An overview is presented of the research activities in earth observations at Ames Research Center. Most of the tasks involve the use of research aircraft platforms. The program is also directed toward the use of the Illiac 4 computer for statistical analysis. Most tasks are weighted toward Pacific coast and Pacific basin problems with emphasis on water applications, air applications, animal migration studies, and geophysics.
Liu, Na; Yu, Ruifeng
2018-06-01
This study aimed to determine the touch characteristics during tapping tasks on a membrane touch interface and investigate the effects of posture and gender on touch characteristics variables. One hundred participants tapped digits displayed on a membrane touch interface in sitting and standing positions using all fingers of the dominant hand. Touch characteristics measures included average force, contact area, and dwell time. Across fingers and postures, males exerted larger force and contact area than females, but similar dwell time. Across genders and postures, the thumb exerted the largest force, and the force of the other four fingers showed no significant difference. The contact area of the thumb was the largest, whereas that of the little finger was the smallest; the dwell time of the thumb was the longest, whereas that of the middle finger was the shortest. Relationships among finger size, gender, posture and touch characteristics were proposed. The findings help direct membrane touch interface design for digital and numerical control products from hardware and software perspectives. Practitioner Summary: This study measured force, contact area, and dwell time in tapping tasks on a membrane touch interface and examined the effects of gender and posture on force, contact area, and dwell time. The findings will direct membrane touch interface design for digital and numerical control products from hardware and software perspectives.
A new approach to telemetry data processing. Ph.D. Thesis - Maryland Univ.
NASA Technical Reports Server (NTRS)
Broglio, C. J.
1973-01-01
An approach for a preprocessing system for telemetry data processing was developed. The philosophy of the approach is the development of a preprocessing system to interface with the main processor and relieve it of the burden of stripping information from a telemetry data stream. To accomplish this task, a telemetry preprocessing language was developed. Also, a hardware device for implementing the operation of this language was designed using a cellular logic module concept. In the development of the hardware device and the cellular logic module, a distributed form of control was implemented. This is accomplished by a technique of one-to-one intermodule communications and a set of privileged communication operations. By transferring this control state from module to module, the control function is dispersed through the system. A compiler for translating the preprocessing language statements into an operations table for the hardware device was also developed. Finally, to complete the system design and verify it, a simulator for the cellular logic module was written using the APL/360 system.
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which make it able to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is measured. Based on these timing statistics, we present a new parallel processing strategy that distributes the DOA estimation task across the cores of the real-time signal processing hardware platform. Experimental results demonstrate that the processing capability of the platform meets the constraint of real-time DOA estimation.
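As context for the estimator mentioned above: MUSIC projects candidate steering vectors onto the noise subspace of the sample covariance and peaks where that projection vanishes. The sketch below is the standard complex-valued MUSIC for a uniform linear array; the paper's real-valued variant, which is what reduces the complexity, is not reproduced here:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array with element
    spacing d (in wavelengths). X: snapshots, shape (antennas, samples)."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : M - n_sources]            # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# One source at 20 degrees in noise, 8-element half-wavelength array:
rng = np.random.default_rng(2)
M, T = 8, 200
steer = np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(20)))
X = np.outer(steer, rng.normal(size=T)) + 0.1 * (
    rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
angles = np.arange(-90, 91)
print(angles[np.argmax(music_spectrum(X, 1, angles))])   # ~20
```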
Development of an optoelectronic holographic platform for otolaryngology applications
NASA Astrophysics Data System (ADS)
Harrington, Ellery; Dobrev, Ivo; Bapat, Nikhil; Flores, Jorge Mauricio; Furlong, Cosme; Rosowski, John; Cheng, Jeffery Tao; Scarpino, Chris; Ravicz, Michael
2010-08-01
In this paper, we present advances on our development of an optoelectronic holographic computing platform with the ability to quantitatively measure full-field-of-view nanometer-scale movements of the tympanic membrane (TM). These measurements can facilitate otologists' ability to study and diagnose hearing disorders in humans. The holographic platform consists of a laser delivery system and an otoscope. The control software, called LaserView, is written in Visual C++ and handles communication and synchronization between hardware components. It provides a user-friendly interface to allow viewing of holographic images with several tools to automate holography-related tasks and facilitate hardware communication. The software uses a series of concurrent threads to acquire images, control the hardware, and display quantitative holographic data at video rates and in two modes of operation: optoelectronic holography and lensless digital holography. The holographic platform has been used to perform experiments on several live and post-mortem specimens, and is to be deployed in a medical research environment with future developments leading to its eventual clinical use.
UAS-Systems Integration, Validation, and Diagnostics Simulation Capability
NASA Technical Reports Server (NTRS)
Buttrill, Catherine W.; Verstynen, Harry A.
2014-01-01
As part of the Phase 1 efforts of NASA's UAS-in-the-NAS Project, a task was initiated to explore the merits of developing a system simulation capability for UAS to address airworthiness certification requirements. The core of the capability would be a software representation of an unmanned vehicle, including all of the relevant avionics and flight control system components. Specific system elements could be replaced with hardware representations to provide Hardware-in-the-Loop (HWITL) test and evaluation capability. The UAS Systems Integration and Validation Laboratory (UAS-SIVL) was created to provide a UAS systems integration, validation, and diagnostics hardware-in-the-loop simulation capability. This paper discusses how SIVL provides a robust and flexible simulation framework that permits the study of failure modes, effects, propagation paths, criticality, and mitigation strategies to help develop safety, reliability, and design data that can assist with the development of certification standards, means of compliance, and design best practices for civil UAS.
Tuple spaces in hardware for accelerated implicit routing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Zachary Kent; Tripp, Justin
2010-12-01
Organizing and optimizing data objects on networks with support for data migration and failing nodes is a complicated problem to handle as systems grow. The goal of this work is to demonstrate that high levels of speedup can be achieved by moving responsibility for finding, fetching, and staging data into an FPGA-based network card. We present a system for implicit routing of data via FPGA-based network cards. In this system, data structures are requested by name, and the network of FPGAs finds the data within the network and relays the structure to the requester. This is achieved through successive examination of hardware hash tables implemented in the FPGA. By avoiding software stacks between nodes, the data is quickly fetched entirely through FPGA-FPGA interaction. The performance of this system is orders of magnitude faster than software implementations due to the improved speed of the hash tables and lowered latency between the network nodes.
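A software analogue clarifies the lookup scheme: data objects are stored by name in per-node hash tables, and a request is resolved by examining the nodes' tables in succession, with no intervening software stack on the real hardware. The node class and placement policy below are hypothetical, not the paper's design:

```python
import hashlib

class Node:
    """One FPGA-like network node holding a local table of named data."""
    def __init__(self):
        self.table = {}

def home_node(key, nodes):
    # Deterministic placement: hash the name to pick a home node.
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def put(key, value, nodes):
    home_node(key, nodes).table[key] = value

def get(key, nodes):
    """Implicit routing: the requester only names the data; the nodes'
    hash tables are examined in succession until one holds the entry,
    mimicking the successive table lookups described in the abstract."""
    for node in nodes:
        if key in node.table:
            return node.table[key]
    return None

nodes = [Node() for _ in range(4)]
put("matrix/42", b"\x00" * 16, nodes)
print(get("matrix/42", nodes))
```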
NASA Technical Reports Server (NTRS)
1987-01-01
The objectives consisted of three major tasks. The first was to establish the definition of Space Station and Orbital Maneuvering Vehicle (OMV) user requirements and interfaces and to evaluate system requirements of a water tanker to be used at the station. The second was to conduct trade studies of system requirements, hardware/software, and operations to evaluate the effect of automatic operation at the station or remote from the station in consonance with the OMV. The last was to evaluate automatic refueling concepts and the impact to the Orbital Spacecraft Consumable Resupply System (OSCRS) concept/design of using expendable launch vehicles (ELVs) to place the tank into orbit. Progress in each area is discussed.
RC64, a Rad-Hard Many-Core High- Performance DSP for Space Applications
NASA Astrophysics Data System (ADS)
Ginosar, Ran; Aviely, Peleg; Gellis, Hagay; Liran, Tuvia; Israeli, Tsvika; Nesher, Roy; Lange, Fredy; Dobkin, Reuven; Meirov, Henri; Reznik, Dror
2015-09-01
RC64, a novel rad-hard 64-core signal processing chip targets DSP performance of 75 GMACs (16bit), 150 GOPS and 38 single precision GFLOPS while dissipating less than 10 Watts. RC64 integrates advanced DSP cores with a multi-bank shared memory and a hardware scheduler, also supporting DDR2/3 memory and twelve 3.125 Gbps full duplex high speed serial links using SpaceFibre and other protocols. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 300 MHz integrated circuit on a 65nm CMOS technology, assembled in hermetically sealed ceramic CCGA624 package and qualified to the highest space standards.
RC64, a Rad-Hard Many-Core High-Performance DSP for Space Applications
NASA Astrophysics Data System (ADS)
Ginosar, Ran; Aviely, Peleg; Liran, Tuvia; Alon, Dov; Mandler, Alberto; Lange, Fredy; Dobkin, Reuven; Goldberg, Miki
2014-08-01
RC64, a novel rad-hard 64-core signal processing chip targets DSP performance of 75 GMACs (16bit), 150 GOPS and 20 single precision GFLOPS while dissipating less than 10 Watts. RC64 integrates advanced DSP cores with a multi-bank shared memory and a hardware scheduler, also supporting DDR2/3 memory and twelve 2.5 Gbps full duplex high speed serial links using SpaceFibre and other protocols. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 300 MHz integrated circuit on a 65nm CMOS technology, assembled in hermetically sealed ceramic CCGA624 package and qualified to the highest space standards.
Return migration: changing roles of men and women.
Sakka, D; Dikaiou, M; Kiosseoglou, G
1999-01-01
This article addresses changes in gender roles among returning migrant families. It focuses on Greek returnees from the Federal Republic of Germany and explores changes in task-sharing behavior and gender role attitudes resulting from changes in the sociocultural environment. A group of return migrants was compared with a group of non-migrants, both living in villages in the District of Drama, Greece. Both groups were interviewed to investigate the extent to which each spouse shared household tasks, as well as their attitudes towards sharing and gender roles in the family. The t-test for independent samples was used to determine mean differences between the two groups. In addition to demographic variables, variables concerning the "time lived abroad" and the "number of years in Greece" after return were entered into a series of regression analyses. Findings showed that migrants' task sharing and gender role attitudes were influenced differently by the migration-repatriation experience and subsequent cultural alternation. Results also suggest that migrant couples either take on new patterns of behavior or maintain traditional ones only when these were congruent with the financial aims of the family or could be integrated into living conditions in Greece upon return. Furthermore, migrants seem to adopt a more "traditional" attitude than non-migrants toward the participation of women in family decision making. From the study, it is suggested that gender role change is an on-going process influenced by the migration-repatriation experience, as well as by the factors which accompany movement between the two countries.
ERIC Educational Resources Information Center
Reed, Penny; Bowser, Gayl
This guide defines assistive technology as specialized hardware and software equipment used by students with disabilities to increase their ability to participate in tasks of learning and daily living and function as independently as possible. Types of assistive technology are listed, and information resources about assistive technology are noted.…
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.
1991-01-01
Work was completed on all aspects of the following tasks: order of magnitude estimates; thermo-capillary convection - two-dimensional (fixed planar surface); thermo-capillary convection - three-dimensional and axisymmetric; liquid bridge/floating zone sensitivity; transport in closed containers; interaction: design and development stages; interaction: testing flight hardware; and reporting. Results are included in the Appendices.
Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disney, Adam; Reynolds, John
2015-01-01
Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.
ERIC Educational Resources Information Center
Peterson, Dale
1984-01-01
Discusses the works of Darcy Gerbarg, Ruth Leavitt, David Em, Duane Palyka, and Harold Cohen, visual artists who work with computers to create art works by relying on standard hardware/software tools, using custom tools created for nonartistic tasks, manipulating images at the programing level, and programing creativity into computers themselves.…
Faster than a Speeding Bullet (or, How To Keep up with the Internet).
ERIC Educational Resources Information Center
Fingerman, Susan
1999-01-01
Discusses how librarians can keep up with Internet developments. Advice includes: step back and think; recognize that it is a hopeless task; stay focused; and "stand on the shoulders of others." World Wide Web sites that provide access to information on specific subjects, hardware/software, Internet issues, and search engines are cited,…
ERIC Educational Resources Information Center
Vallee, Jacques; And Others
To explore the feasibility and usefulness of group communication via computer, a system called FORUM was constructed and used in research and management tasks using ARPANET, an international computer network. Working software and data regarding the dynamics of groups using network communication were developed, and a prototype hardware system for…
NASA Technical Reports Server (NTRS)
Nashman, Marilyn; Chaconas, Karen J.
1988-01-01
The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined, with particular attention to the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.
NASA Technical Reports Server (NTRS)
1973-01-01
The manufacturing tasks for the program included the fabrication and assembly of an epoxy fiberglass purge bag to encapsulate an insulated cryogenic propellant tank. Purge, repressurization, and venting hardware were procured and installed on the purge bag assembly in preparation for performance testing. The fabrication and installation of the superfloc multilayer insulation (MLI) on the cryogenic tank was accomplished as part of a continuing program. A summary of the results of the MLI fabrication task is included to describe the complete fabrication requirements for a reusable cryogenic propellant space storage system.
Development of magnetic resonance technology for noninvasive boron quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradshaw, K.M.
1990-11-01
Boron magnetic resonance imaging (MRI) and spectroscopy (MRS) were developed in support of the noninvasive boron quantification task of the Idaho National Engineering Laboratory (INEL) Power Burst Facility/Boron Neutron Capture Therapy (PBF/BNCT) program. The hardware and software described in this report are modifications specific to a GE Signa™ MRI system, release 3.X, and are necessary for boron magnetic resonance operation. The technology developed in this task has been applied to obtaining animal pharmacokinetic data of boron compounds (drug time response) and the in-vivo localization of boron in animal tissue noninvasively. 9 refs., 21 figs.
Recent Electric Propulsion Development Activities for NASA Science Missions
NASA Technical Reports Server (NTRS)
Pencil, Eric J.
2009-01-01
The primary source of electric propulsion development throughout NASA is managed by the In-Space Propulsion Technology Project at the NASA Glenn Research Center for the Science Mission Directorate. The objective of the Electric Propulsion project area is to develop near-term electric propulsion technology to enhance or enable science missions while minimizing risk and cost to the end user. Major hardware tasks include developing NASA's Evolutionary Xenon Thruster (NEXT), developing a long-life High Voltage Hall Accelerator (HIVHAC), developing an advanced feed system, and developing cross-platform components. The objective of the NEXT task is to advance next-generation ion propulsion technology readiness. The baseline NEXT system consists of a high-performance, 7-kW ion thruster; a high-efficiency, 7-kW power processor unit (PPU); a highly flexible advanced xenon propellant management system (PMS); a lightweight engine gimbal; and key elements of a digital control interface unit (DCIU) including software algorithms. This design approach was selected to provide future NASA science missions with the greatest value in mission performance benefit at a low total development cost. The objective of the HIVHAC task is to advance the Hall thruster technology readiness for science mission applications. The task seeks to increase specific impulse, throttle-ability, and lifetime to make Hall propulsion systems applicable to deep space science missions. The primary application focus for the resulting Hall propulsion system would be cost-capped missions, such as competitively selected, Discovery-class missions. The objective of the advanced xenon feed system task is to demonstrate novel manufacturing techniques that will significantly reduce mass, volume, and footprint size of xenon feed systems over conventional feed systems. This task has focused on the development of a flow control module, which consists of a three-channel flow system based on a piezo-electrically actuated valve concept, as well as a pressure control module, which will regulate pressure from the propellant tank. Cross-platform component standardization and simplification are being investigated through the Standard Architecture task to reduce first-user costs for implementing electric propulsion systems. Progress on current hardware development, recent test activities, and future plans are discussed.
Chabaud, Mélanie; Heuzé, Mélina L.; Bretou, Marine; Vargas, Pablo; Maiuri, Paolo; Solanes, Paola; Maurin, Mathieu; Terriac, Emmanuel; Le Berre, Maël; Lankar, Danielle; Piolot, Tristan; Adelstein, Robert S.; Zhang, Yingfan; Sixt, Michael; Jacobelli, Jordan; Bénichou, Olivier; Voituriez, Raphaël; Piel, Matthieu; Lennon-Duménil, Ana-Maria
2015-01-01
The immune response relies on the migration of leukocytes and on their ability to stop in precise anatomical locations to fulfil their task. How leukocyte migration and function are coordinated is unknown. Here we show that in immature dendritic cells, which patrol their environment by engulfing extracellular material, cell migration and antigen capture are antagonistic. This antagonism results from transient enrichment of myosin IIA at the cell front, which disrupts the back-to-front gradient of the motor protein, slowing down locomotion but promoting antigen capture. We further highlight that myosin IIA enrichment at the cell front requires the MHC class II-associated invariant chain (Ii). Thus, by controlling myosin IIA localization, Ii imposes on dendritic cells an intermittent antigen capture behaviour that might facilitate environment patrolling. We propose that the requirement for myosin II in both cell migration and specific cell functions may provide a general mechanism for their coordination in time and space. PMID:26109323
Information management system study results. Volume 1: IMS study results
NASA Technical Reports Server (NTRS)
1971-01-01
The information management system (IMS) special emphasis task was performed as an adjunct to the modular space station study, with the objective of providing extended depth of analysis and design in selected key areas of the information management system. Specific objectives included: (1) in-depth studies of IMS requirements and design approaches; (2) design and fabricate breadboard hardware for demonstration and verification of design concepts; (3) provide a technological base to identify potential design problems and influence long range planning; (4) develop hardware and techniques to permit long duration, low cost, manned space operations; (5) support SR&T areas where techniques or equipment are considered inadequate; and (6) permit an overall understanding of the IMS as an integrated component of the space station.
Development of a 32-bit UNIX-based ELAS workstation
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Pearson, Ronnie W.; Cheng, Thomas D.
1987-01-01
A mini/microcomputer UNIX-based image analysis workstation has been designed and is being implemented to use the Earth Resources Laboratory Applications Software (ELAS). The hardware system includes a MASSCOMP 5600 computer, which is a 32-bit UNIX-based system (compatible with the AT&T System V and Berkeley 4.2 BSD operating systems), a floating point accelerator, a 474-megabyte fixed disk, a tri-density magnetic tape drive, and an 1152 by 910 by 12-plane color graphics/image interface. The software conversion includes reconfiguring the ELAS driver Master Task, then recompiling and testing the converted application modules. This hardware and software configuration is a self-sufficient image analysis workstation which can be used as a stand-alone system or networked with other compatible workstations.
NASA Astrophysics Data System (ADS)
Sawin, Charles F.; Hayes, Judith; Francisco, David R.; House, Nancy
2007-02-01
Countermeasures are necessary to offset or minimize the deleterious changes in human physiology resulting from long duration space flight. Exposure to microgravity alters musculoskeletal, neurosensory, and cardiovascular systems, with resulting deconditioning that may compromise crew health and performance. Maintaining health and fitness at acceptable levels is critical for preserving the performance capabilities required to accomplish specific mission tasks (e.g., extravehicular activity) and to optimize performance after landing. To enable the goals of the exploration program, NASA is developing a new suite of exercise hardware, such as the improved loading device, the SchRED. This presentation will update the status of current countermeasures, correlate hardware advances with improvements in exercise countermeasures, and discuss future activities for safe and productive exploration missions.
Systems Maintenance Automated Repair Tasks (SMART)
NASA Technical Reports Server (NTRS)
2008-01-01
SMART is an interactive decision analysis and refinement software system that uses evaluation criteria for discrepant conditions to automatically provide and populate a document/procedure with predefined steps necessary to repair a discrepancy safely, effectively, and efficiently. SMART can store the tacit (corporate) knowledge merging the hardware specification requirements with the actual "how to" repair methods, sequences, and required equipment, all within a user-friendly interface. Besides helping organizations retain repair knowledge in streamlined procedures and sequences, SMART can also help them in saving processing time and expense, increasing productivity, improving quality, and adhering more closely to safety and other guidelines. Though SMART was developed for Space Shuttle applications, its interface is easily adaptable to any hardware that can be broken down by component, subcomponent, discrepancy, and repair.
Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system
NASA Astrophysics Data System (ADS)
Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.
2018-03-01
Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are executed on conventional von Neumann processor architectures or GPUs. This is suboptimal. Neuromorphic processors that replicate the structure of the brain are better-suited to train and execute deep learning models for image segmentation by relying on massively-parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate, for the first time, spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1,0,1} using the Energy Efficient Deep Neuromorphic (EEDN) network training algorithm. Given the 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allows us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network and using <0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
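The EEDN algorithm itself is described in the cited reference; as a rough, hedged illustration of the weight constraint mentioned above, ternary-constrained training is commonly done by quantizing weights to {-1, 0, +1} on the forward pass while applying gradient updates to full-precision shadow weights. All names and values below are illustrative.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    # Project real-valued weights onto {-1, 0, +1}; small weights become 0.
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

rng = np.random.default_rng(0)
w_real = rng.normal(0.0, 0.1, size=(4, 3))   # full-precision shadow weights
x = rng.normal(size=(1, 4))                  # one input example

w_q = ternarize(w_real)                      # constrained weights used on-chip
y = x @ w_q                                  # forward pass under the constraint
grad = rng.normal(size=w_real.shape)         # stand-in for a real backprop gradient
w_real -= 0.01 * grad                        # update shadow weights, re-quantize next step
```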
Tribus, Clifford B; Garvey, Kathleen E
2003-05-15
A case report describes unilateral complete laminar erosion of the caudal thoracic spine and late-presenting infection in a patient 10 years after anteroposterior reconstruction for scoliosis. The objective is to present an unusual but significant complication that may occur after implantation of spinal instrumentation. The reported patient presented with a deep infection and persistent back pain 10 years after successful anteroposterior reconstruction for adult idiopathic scoliosis. Delayed-onset infections after implantation of spinal instrumentation are infrequent, yet when present, often require hardware removal. The case of a 51-year-old woman who underwent irrigation and debridement for a late-presenting infection and removal of posterior hardware 10 years after her index procedure is presented. Intraoperatively, it was noted that full-thickness laminar erosion was present from T4 to T12. The patient was taken to the operating room for wound irrigation, debridement, and hardware removal. It was discovered that a Cotrel-Dubousset rod placed on the convexity of the curve had completely eroded through the lamina of T7-T12. Infectious material was found along the entire length of both the convex and concave Cotrel-Dubousset rods. Intraoperative cultures grew Staphylococcus epidermidis and Propionibacterium acnes. Intravenous and oral antibiotics were administered, resulting in resolution of the infection and preoperative pain. The exact role of late-presenting infection with regard to the laminar erosion and rod migration seen in this case remains to be elucidated. However, the authors believe the primary cause of bony erosion was mechanical in origin. Regardless, most spine surgeons will treat many patients who have had posterior spinal implants and will perform hardware removal on a significant number of these patients during their careers. A full-thickness laminar erosion exposes the spinal cord to traumatic injury during hardware removal and debridement. This case is presented as a cautionary note to help surgeons become cognizant of a potentially devastating complication.
PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems
Stefanini, Fabio; Neftci, Emre O.; Sheik, Sadique; Indiveri, Giacomo
2014-01-01
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS. PMID:25232314
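The real interface is documented in the repository linked above; purely as an illustration of the design principle described (a network model kept separate from the hardware description), a toy front-end might look like the following. These names are hypothetical, not the actual pyNCS API.

```python
class MockBackend:
    """Stand-in for a neuromorphic setup; a real backend would configure chips."""
    def realize(self, model):
        print(f"configuring {len(model['populations'])} populations and "
              f"{len(model['connections'])} connections")

# The model is plain data: no hardware-specific detail leaks into it.
model = {
    "populations": [
        {"name": "input", "size": 128},
        {"name": "decision", "size": 64},
    ],
    "connections": [
        {"src": "input", "dst": "decision", "prob": 0.1, "weight": 1},
    ],
}

MockBackend().realize(model)   # swapping backends leaves the model untouched
```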
Planetary micro-rover operations on Mars using a Bayesian framework for inference and control
NASA Astrophysics Data System (ADS)
Post, Mark A.; Li, Junquan; Quine, Brendan M.
2016-03-01
With the recent progress toward the application of commercially-available hardware to small-scale space missions, it is now becoming feasible for groups of small, efficient robots based on low-power embedded hardware to perform simple tasks on other planets in the place of large-scale, heavy and expensive robots. In this paper, we describe design and programming of the Beaver micro-rover developed for Northern Light, a Canadian initiative to send a small lander and rover to Mars to study the Martian surface and subsurface. For a small, hardware-limited rover to handle an uncertain and mostly unknown environment without constant management by human operators, we use a Bayesian network of discrete random variables as an abstraction of expert knowledge about the rover and its environment, and inference operations for control. A framework for efficient construction and inference into a Bayesian network using only the C language and fixed-point mathematics on embedded hardware has been developed for the Beaver to make intelligent decisions with minimal sensor data. We study the performance of the Beaver as it probabilistically maps a simple outdoor environment with sensor models that include uncertainty. Results indicate that the Beaver and other small and simple robotic platforms can make use of a Bayesian network to make intelligent decisions in uncertain planetary environments.
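The rover's framework is written in C with fixed-point arithmetic, and its actual network is not given in the abstract; the Python sketch below only illustrates the kind of inference involved, namely exact enumeration over a small discrete network, with invented variables and probabilities.

```python
# Tiny invented network: Rough terrain -> wheel Slip -> HighCurrent at the motor.
p_rough = 0.3
p_slip_given_rough = {True: 0.7, False: 0.1}
p_current_given_slip = {True: 0.8, False: 0.2}

def posterior_rough(high_current_observed=True):
    """P(Rough | HighCurrent) by summing out the hidden Slip variable."""
    joint = {}
    for rough in (True, False):
        p_r = p_rough if rough else 1.0 - p_rough
        total = 0.0
        for slip in (True, False):
            p_s = p_slip_given_rough[rough] if slip else 1.0 - p_slip_given_rough[rough]
            p_c = p_current_given_slip[slip]
            if not high_current_observed:
                p_c = 1.0 - p_c
            total += p_s * p_c
        joint[rough] = p_r * total
    return joint[True] / (joint[True] + joint[False])

print(f"P(rough terrain | high motor current) = {posterior_rough():.3f}")
```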
Hippocampal Astrocytes in Migrating and Wintering Semipalmated Sandpiper Calidris pusilla.
Carvalho-Paulo, Dario; de Morais Magalhães, Nara G; de Almeida Miranda, Diego; Diniz, Daniel G; Henrique, Ediely P; Moraes, Isis A M; Pereira, Patrick D C; de Melo, Mauro A D; de Lima, Camila M; de Oliveira, Marcus A; Guerreiro-Diniz, Cristovam; Sherry, David F; Diniz, Cristovam W P
2017-01-01
Seasonal migratory birds return to the same breeding and wintering grounds year after year, and migratory long-distance shorebirds are good examples of this. These tasks require learning and long-term spatial memory abilities that are integrated into a navigational system for repeatedly locating breeding, wintering, and stopover sites. Previous investigations focused on the neurobiological basis of hippocampal plasticity and numerical estimates of hippocampal neurogenesis in birds, but only a few studies investigated potential contributions of glial cells to hippocampal-dependent tasks related to migration. Here we hypothesized that the astrocytes of migrating and wintering birds may exhibit significant morphological and numerical differences connected to the long-distance flight. We used as a model the semipalmated sandpiper Calidris pusilla, which migrates from northern Canada and Alaska to South America. Before the transatlantic non-stop long-distance component of their flight, the birds make a stopover at the Bay of Fundy in Canada. To test our hypothesis, we estimated total numbers and compared the three-dimensional (3-D) morphological features of adult C. pusilla astrocytes captured in the Bay of Fundy (n = 249 cells) with those from birds captured in the coastal region of Bragança, Brazil, during the wintering period (n = 250 cells). The optical fractionator was used to estimate the number of astrocytes, and for 3-D reconstructions we used hierarchical cluster analysis. Both morphological phenotypes showed reduced morphological complexity after the long-distance non-stop flight, but the reduction in complexity was much greater in Type I than in Type II astrocytes. Consistently, we also found a significant reduction in the total number of astrocytes after the transatlantic flight. Taken together, these findings suggest that the long-distance non-stop flight significantly altered the astrocyte population and that morphologically distinct astrocytes may play different physiological roles during migration.
NASA Technical Reports Server (NTRS)
Nuckolls, C.; Frank, Mark
1990-01-01
The overall goal of this study was to develop new concepts and technology for the Comet Rendezvous Asteroid Flyby (CRAF), Cassini, and other future deep space missions which maximally conform to the Functional Specification for the NASA X-Band Transponder (NXT), FM513778 (preliminary, revised July 26, 1988). The study is composed of two tasks. The first task was to investigate a new digital signal processing technique which involves the processing of 1-bit samples and has the potential for significant size, mass, power, and electrical performance improvements over conventional analog approaches. The entire X-band receiver tracking loop was simulated on a digital computer using a high-level programming language. Simulations on this 'software breadboard' showed the technique to be well-behaved and a good approximation to its analog predecessor from threshold to strong signal levels in terms of tracking-loop performance, command signal-to-noise ratio, and ranging signal-to-noise ratio. The successful completion of this task paves the way for building a hardware breadboard, the recommended next step in confirming that this approach is ready for incorporation into flight hardware. The second task in this study was to investigate another technique which provides considerable simplification in the synthesis of the receiver first LO over conventional phase-locked multiplier schemes and, in this approach, provides down-conversion for an S-band emergency receive mode without the need for an additional LO. The objective of this study was to develop methodology and models to predict the conversion loss, input RF bandwidth, and output RF bandwidth of a series GaAs FET sampling mixer and to breadboard and test a circuit design suitable for the X- and S-band down-conversion applications.
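As a hedged sketch of the 1-bit idea (illustrative parameters, not the NXT loop design): both the received carrier and the local oscillator are reduced to sign bits, the phase detector is a one-bit multiply, and a first-order loop still pulls the local phase onto the carrier.

```python
import math

fs = 1.0e6            # sample rate (Hz)
f0 = 123_457.0        # carrier frequency (Hz), incommensurate with fs
true_phase = 0.7      # unknown carrier phase to recover (rad)
k_loop = 5e-4         # first-order loop gain

phase_est = 0.0
for i in range(100_000):
    theta = 2.0 * math.pi * f0 * i / fs
    rx_bit = 1.0 if math.sin(theta + true_phase) >= 0.0 else -1.0   # 1-bit ADC
    nco_bit = 1.0 if math.cos(theta + phase_est) >= 0.0 else -1.0   # 1-bit NCO
    err = rx_bit * nco_bit        # +/-1 detector; averages to ~(2/pi)*(phase error)
    phase_est += k_loop * err     # loop filter steers the local phase

print(f"recovered phase: {phase_est:.3f} rad (true: {true_phase})")
```

At lock the detector output averages to zero, so the estimate settles near the true phase with a small jitter band set by the loop gain.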
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herndon, J.N.
1992-12-31
The field of remote technology is continuing to evolve to support man's efforts to perform tasks in hostile environments. The technology which we recognize today as remote technology has evolved over the last 45 years to support human operations in hostile environments such as nuclear fission and fusion, space, underwater, hazardous chemical, and hazardous manufacturing. The four major categories of approach to remote technology have been (1) protective clothing and equipment for direct human entry, (2) extended reach tools using distance for safety, (3) telemanipulators with barriers for safety, and (4) teleoperators incorporating mobility with distance and/or barriers for safety. The government and commercial nuclear industry have driven the development of the majority of the actual teleoperator hardware available today. This hardware has been developed largely due to the unsatisfactory performance of the protective-clothing approach in many hostile applications. Manipulation systems which have been developed include crane/impact wrench systems, unilateral power manipulators, mechanical master/slaves, and servomanipulators. Viewing systems have included periscopes, shield windows, and television systems. Experience over the past 45 years indicates that maintenance system flexibility is essential to typical repair tasks because they are usually not repetitive, structured, or planned. Fully remote design (manipulation, task provisions, remote tooling, and facility synergy) is essential to work task efficiency. Work for space applications has been primarily research oriented with relatively few successful space applications, although the shuttle's remote manipulator system has been quite successful. In the last decade, underwater applications have moved forward significantly, with the offshore oil industry and military applications providing the primary impetus.
A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.
2012-04-01
Cloud computing is becoming established worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We substituted the demanding user-side hardware and software requirements with remote access to high-performance grid-computing facilities. As a result, data processing can be done quasi in real time, controlled ubiquitously via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective to allow the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time such as manual picking in prestack volumes or velocity spectra. Due to its high level of automation, CRS stacking can benefit largely from the hardware parallelism provided by the cloud deployment. The resulting output, post-stack section, coherence, and NMO-velocity panels are used to generate a smooth migration-velocity model. Residual static corrections are calculated as a by-product of the stack and can be applied iteratively. As a final step, a time-migrated subsurface image is obtained by a parallelized Kirchhoff time migration scheme. Processing can be done step-by-step or using a graphical workflow editor that can launch a series of pipelined tasks. The status of the submitted jobs is monitored by a dedicated service. All results are stored in project directories, where they can be downloaded or viewed directly in the browser. Currently, the portal has access to three research clusters having a total of 70 nodes with 4 cores each. They are shared with four other cloud-computing applications bundled within the GRIDA3 project. To demonstrate the functionality of our "seismic cloud lab", we will present results obtained for three different types of data, all taken from hydrogeophysical studies: (1) a seismic reflection data set, made of compressional waves from explosive sources, recorded in Muravera, Sardinia; (2) a shear-wave data set from Sardinia; (3) a multi-offset Ground-Penetrating-Radar data set from Larreule, France. The presented work was funded by the government of the Autonomous Region of Sardinia and by the Italian Ministry of Research and Education.
NASA Technical Reports Server (NTRS)
Evertt, Shonn F.; Collins, Michael; Hahn, William
2008-01-01
The International Space Station (ISS) Configuration Analysis Modeling and Mass Properties (CAMMP) Team is presenting a demo of certain CAMMP capabilities at a Booz Allen Hamilton conference in San Antonio. The team will be showing pictures of low-fidelity, simplified ISS models, but no dimensions or technical data. The presentation will include a brief description of the contract and task, a description and picture of the Topology, a description of Generic Ground Rules and Constraints (GGR&C), and a description of Stage Analysis with constraints applied, and will wrap up with a description of other tasks such as Special Studies, Cable Routing, etc. The models include conceptual Crew Exploration Vehicle (CEV) and Lunar Lander images and animations created for promotional purposes, which are based entirely on public-domain conceptual images from public NASA web sites and publicly available magazine articles and are not based on any actual designs, measurements, or 3D models. The Mars rover and lander are completely conceptual and are not based on any NASA designs or data. The demonstration includes high-fidelity Computer Aided Design (CAD) models of ISS provided by the ISS 3D CAD Team, which will be used in a visual display to demonstrate the capabilities of the Teamcenter Visualization software. The demonstration will include 3D views of the CAD models, including random measurements that will be taken to demonstrate the measurement tool. A 3D PDF file will be demonstrated of the Blue Book fidelity assembly-complete model with no vehicles attached. The 3D zoom and rotation will be displayed as well as random measurements from the measurement tool. The External Configuration Analysis and Tracking Tool (ExCATT) Microsoft Access database will be demonstrated to show its capabilities to organize and track hardware on ISS. The data included will be part numbers, serial numbers, and historical, current, and future locations of external hardware components on station. It includes dates of all external ISS events and flights and the associated hardware changes for each event. The hardware location information does not always reveal the exact location of the hardware, only the general location. In some cases the location is a module or carrier; in other cases it is a WIF socket, handrail, or attach point. Only small portions of the data will be displayed for demonstration purposes.
NASA Technical Reports Server (NTRS)
Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don
2015-01-01
During EVA (Extravehicular Activity) No. 23 aboard the ISS (International Space Station) on 07/16/2013, water entered the EMU (Extravehicular Mobility Unit) helmet, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 L of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based TT and E (Test, Tear-down and Evaluation) of the affected EMU hardware components led to the determination that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator function, which resulted in EMU cooling water spilling into the ventilation loop, around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the ALCLR (Airlock Cooling Loop Recovery) Ion Filter Beds, which led to various levels of contaminants being introduced into the Filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.
2015-05-01
LLC and DRAM banks. For each µB task and isolation configuration, we ran experiments with all 256 possible LLC area sizes (given by 1 to 16 ways and 1 to 16 DRAM banks).
Wake Sensor Evaluation Program and Results of JFK-1 Wake Vortex Sensor Intercomparisons
NASA Technical Reports Server (NTRS)
Barker, Ben C., Jr.; Burnham, David C.; Rudis, Robert P.
1997-01-01
The overall approach should be to: (1) Seek simplest, sufficiently robust, integrated ground based sensor systems (wakes and weather) for AVOSS; (2) Expand all sensor performance cross-comparisons and data mergings in on-going field deployments; and (3) Achieve maximal cost effectiveness through hardware/info sharing. An effective team is in place to accomplish the above tasks.
Safety and Quality Training Simulator
NASA Technical Reports Server (NTRS)
Scobby, Pete T.
2009-01-01
A portable system of electromechanical and electronic hardware and documentation has been developed as an automated means of instructing technicians in matters of safety and quality. The system enables elimination of most of the administrative tasks associated with traditional training. Customized, performance-based, hands-on training with integral testing is substituted for the traditional instructional approach of passive attendance in class followed by written examination.
A Force-Controllable Macro-Micro Manipulator and its Application to Medical Robotics
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Uecker, Darrin R.; Wang, Yulun
1993-01-01
This paper describes an 8-degrees-of-freedom macro-micro robot. This robot is capable of performing tasks that require accurate force control, such as polishing, finishing, grinding, deburring, and cleaning. The design of the macro-micro mechanism, the control algorithms, and the hardware/software implementation of the algorithms are described in this paper.
Robust performance of multiple tasks by a mobile robot
NASA Technical Reports Server (NTRS)
Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.
1989-01-01
While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.
Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography
NASA Astrophysics Data System (ADS)
He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang
2018-03-01
Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science, and it requires measurements with subatomic spatial and attosecond temporal resolutions. The time-resolved photoelectron holography in strong-field tunneling ionization holds the promise to access this realm. However, it has hitherto remained a challenging task. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel retrieval approach. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H2+, the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolutions. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging attosecond chemistry.
Reconstruction of audio waveforms from spike trains of artificial cochlea models
Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii
2015-01-01
Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is particularly challenging for spikes from mixed-signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility, and further tested in a word recognition task. Under low signal-to-noise ratio (SNR) conditions (SNR < –5 dB), the reconstructed audio gives better classification performance in this word recognition task than the original input at the same SNR. PMID:26528113
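The decoding filters in the paper follow the stimulus-reconstruction approach cited above; the sketch below shows only the core idea, a regularized linear map from lagged spike counts back to the waveform, with a synthetic spike generator standing in for the AEREAR2 hardware.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_ch, n_lag = 5000, 8, 10

stim = rng.normal(size=T)                       # the waveform to reconstruct
# Fake cochlea: each channel spikes when its filtered drive is high.
drive = np.stack([np.convolve(stim, rng.normal(size=5), mode="same")
                  for _ in range(n_ch)])
spikes = (drive > drive.std()).astype(float)    # binary spike bins, shape (n_ch, T)

# Design matrix of lagged spike counts: X[t] holds spikes from t, t-1, ..., t-9.
X = np.zeros((T, n_ch * n_lag))
for lag in range(n_lag):
    X[lag:, lag * n_ch:(lag + 1) * n_ch] = spikes[:, :T - lag].T

lam = 10.0                                      # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ stim)
recon = X @ w
print(f"reconstruction correlation: {np.corrcoef(stim, recon)[0, 1]:.2f}")
```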
An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons
Li, Jing; Katori, Yuichi; Kohno, Takashi
2012-01-01
This paper presents a digital silicon neuronal network which simulates the nervous system of living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when a stored pattern is successfully retrieved. PMID:23269911
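The retrieval behaviour described (recovering a stored pattern from a partial or corrupted input) can be illustrated with a classic Hopfield-style associative memory. This dense, non-spiking sketch is an analogy for the principle only, not the DSSN spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_patterns = 256, 5                           # 256 neurons, as in the network above

patterns = rng.choice([-1, 1], size=(n_patterns, N))
W = (patterns.T @ patterns) / N                  # Hebbian storage of all patterns
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1                                  # corrupt 25% of the bits

state = cue
for _ in range(10):                              # synchronous updates until settled
    state = np.sign(W @ state)
    state[state == 0] = 1

print(f"overlap with stored pattern: {(state @ patterns[0]) / N:.2f}")  # ~1.0 on success
```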
Minnig, Shawn; Bragg, Robert M; Tiwana, Hardeep S; Solem, Wes T; Hovander, William S; Vik, Eva-Mari S; Hamilton, Madeline; Legg, Samuel R W; Shuttleworth, Dominic D; Coffey, Sydney R; Cantle, Jeffrey P; Carroll, Jeffrey B
2018-02-02
Apathy is one of the most prevalent and progressive psychiatric symptoms in Huntington's disease (HD) patients. However, preclinical work in HD mouse models tends to focus on molecular and motor, rather than affective, phenotypes. Measuring behavior in mice often produces noisy data and requires large cohorts to detect phenotypic rescue with appropriate power. The operant equipment necessary for measuring affective phenotypes is typically expensive, proprietary to commercial entities, and bulky, which can render adequately sized mouse cohorts cost-prohibitive. Thus, we describe here a home-built, open-source alternative to commercial hardware that is reliable, scalable, and reproducible. Using off-the-shelf hardware, we adapted and built several of the rodent operant buckets (ROBucket) to test HttQ111/+ mice for attention deficits in fixed ratio (FR) and progressive ratio (PR) tasks. We find that, despite normal performance in reward attainment in the FR task, HttQ111/+ mice exhibit reduced PR performance at 9-11 months of age, suggesting motivational deficits. We replicated this in two independent cohorts, demonstrating the reliability and utility of both the apathetic phenotype and these ROBuckets for preclinical HD studies.
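For readers unfamiliar with the operant terminology, the session logic of the two schedules can be sketched as below; the escalation step and trial counts are invented, not the paper's parameters.

```python
def fixed_ratio_rewarded(presses, ratio=5):
    """FR schedule: reward every `ratio`-th press (FR5 rewards presses 5, 10, ...)."""
    return presses % ratio == 0

class ProgressiveRatio:
    """PR schedule: each reward raises the response requirement; the last
    requirement completed before the animal quits is the 'breakpoint'."""
    def __init__(self, step=2):
        self.requirement, self.step = 1, step
        self.presses_since_reward, self.breakpoint = 0, 0

    def press(self):
        self.presses_since_reward += 1
        if self.presses_since_reward >= self.requirement:
            self.breakpoint = self.requirement
            self.requirement += self.step        # escalate: 1, 3, 5, 7, ...
            self.presses_since_reward = 0
            return True                          # deliver reward
        return False

pr = ProgressiveRatio()
rewards = sum(pr.press() for _ in range(50))     # simulate 50 presses
print(f"rewards: {rewards}, breakpoint: {pr.breakpoint}")
```

A low breakpoint relative to controls, with FR performance intact, is the signature of a motivational rather than a motor deficit.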
VIEW-Station software and its graphical user interface
NASA Astrophysics Data System (ADS)
Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki
1992-04-01
VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be simply expressed, using a functional notation. VIEW-Station provides a hardware-independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters, called µV-Sugar, VS-Shell, and VPL, are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUI for new applications on the VIEW-Station system. A set of widgets is available for construction of task-oriented GUI. A GUI builder called VIEW-Kid is developed for WYSIWYG interactive interface design.
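V-Sugar is its own language, so the snippet below is only a Python analogy to the functional notation described: image operations written as composable functions, so a pipeline is itself just a function.

```python
import numpy as np

def threshold(level):
    """Return an image -> binary-image function."""
    return lambda img: (img > level).astype(np.uint8)

def box_blur(img):
    """3x3 mean filter with edge padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def compose(*fs):
    def pipeline(img):
        for f in fs:
            img = f(img)
        return img
    return pipeline

binarize = compose(box_blur, threshold(0.5))      # blur, then threshold
result = binarize(np.random.default_rng(0).random((64, 64)))
```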
Modified tension band wiring of medial malleolar ankle fractures.
Georgiadis, G M; White, D B
1995-02-01
Twenty-two displaced medial malleolar ankle fractures that were treated surgically using the modified tension band method of Cleak and Dawson were retrospectively reviewed at an average follow-up of 25 months. The technique involves the use of a screw to anchor a figure-of-eight wire. There were no malreductions and all fractures healed. Problems with the technique included technical errors with hardware placement, medial ankle pain, and asymptomatic wire migration. Despite this, modified tension band wiring remains an acceptable method for fixation of selected displaced medial malleolar fractures. It is especially suited for small fracture fragments and osteoporotic bone.
Creating Simple Windchill Admin Tools Using Info*Engine
NASA Technical Reports Server (NTRS)
Jones, Corey; Kapatos, Dennis; Skradski, Cory
2012-01-01
Being a Windchill administrator often requires performing simple yet repetitive tasks on large sets of objects. These can include renaming, deleting, checking in, undoing checkout, and much more. This is especially true during a migration. Fortunately, PTC has provided a simple way to dynamically interact with Windchill using Info*Engine. This presentation will describe how to create simple Info*Engine tasks capable of saving Windchill 10.0 administrators hours of tedious work. It will also show how these tasks can be combined and displayed on a simple JSP page that acts as a "Windchill Administrator Dashboard/Toolbox". The attendee will learn some valuable tasks Info*Engine is capable of performing, gain a basic understanding of how to perform and implement Info*Engine tasks, and learn what is involved in creating a JSP page that displays Info*Engine tasks.
System Administrator for LCS Development Sets
NASA Technical Reports Server (NTRS)
Garcia, Aaron
2013-01-01
The Spaceport Command and Control System Project is creating a Checkout and Control System that will eventually launch the next generation of vehicles from Kennedy Space Center. KSC has a large set of Development and Operational equipment already deployed in several facilities, including the Launch Control Center, which requires support. The System Administrator will complete tasks across multiple platforms (Linux/Windows), many of them virtual. The Hardware Branch of the Control and Data Systems Division at the Kennedy Space Center uses system administrators for a variety of tasks. The position of system administrator comes with many responsibilities: maintaining computer systems, repairing or setting up hardware, installing software, and creating backups and recovering drive images are a sample of the jobs one must complete. Other duties may include working with clients in person or over the phone and resolving their computer system needs. Training is a major part of learning how an organization functions and operates, and NASA is no exception. Training on how to better protect the NASA computer infrastructure will be a topic to learn, followed by NASA work policies. Attending meetings and discussing progress will be expected. A system administrator will have an account with root access. Root access gives a user full access to a computer system and/or network. System admins can remove critical system files and recover files using a tape backup. Problem solving will be an important skill to develop in order to complete the many tasks.
NASA Technical Reports Server (NTRS)
Greening, Gage J.
2016-01-01
The Project Management and Engineering Branch (SF4) supports the Human Health and Performance Directorate (HH&P) and is responsible for developing and supporting human systems hardware for the International Space Station (ISS). When a principal investigator's (PI) medical research project on the ISS is accepted, SF4 develops the necessary hardware and software to transport to the ISS. The two projects I primarily worked on were the centrifuge and ultrasound projects. Centrifuge: One concern with spacecraft such as the ISS is electromagnetic interference (EMI) from onboard equipment, typically from radio waves (frequencies of 3 kHz to 300 GHz), which can negatively affect nearby circuitry. Standard commercial centrifuges produce EMI above safety limits, so my task was to help reduce EMI production from this equipment. Two centrifuges were tested: one unmodified as a control and one modified. To reduce EMI below safety limits, one centrifuge was modified to become a Faraday shield, in which significant electrical contact was made between all regions of the centrifuge housing. This included removing non-conductive paint, applying conductive fabric to the lid and foam sealer, adding a 10,000 µF decoupling capacitor across the power supply, and adding copper adhesive-mount gaskets to the housing interior. EMI testing of both centrifuges was performed in the EMI/EMC Control Test and Measurement Facility. EMI for both centrifuges was below safety limits for frequencies between 10 MHz and 15 GHz (pass); however, between 14 kHz and 10 MHz, EMI for the unmodified centrifuge exceeded safety limits (fail) as expected. Alternatively, for the modified centrifuge with the Faraday shield, EMI was below the safety limit of 55 dBµV/m for electromagnetic frequencies between 14 kHz and 10 MHz. This result indicates our modifications were successful. The successful EMI test allowed us to communicate to the vendor what modifications they needed to make to their commercial unit to meet our specifications and to understand what needs to be done in the lab to the new centrifuge. Our modifications will provide a standard for readying centrifuges for future missions. Once the new modified centrifuge arrives from the vendor, it will need to undergo EMI testing again for validation. The centrifuge is also in the process of compatibility testing with a custom stowage drawer, which is an ongoing project in SF4. Both of these items will be payloads on future missions to the ISS for various research purposes. Ultrasound: ISS currently has an onboard ultrasound (Ultrasound 2 system) for research and medical purposes. Every piece of medical flight hardware has an equivalent ground unit so instrumentation can be routinely evaluated and transported to the ISS if necessary. The ground-unit ultrasound equipment must be evaluated every six months using a task performance sheet (TPS). A TPS is a document, written by the appropriate scientists and engineers, which describes how to run equipment and is written in such a way that astronauts with unspecialized training can follow the tasks. I was responsible for performing six TPSs on a combination of three ultrasounds and two video power converters (VPCs). Performing a TPS involves checking out and computationally documenting each piece of equipment removed from storage locations, setting up hardware and software, performing tasks to verify functionality, returning equipment, and logging items back into the computerized system.
My work revealed that all ground-unit ultrasounds were functioning properly. Because the units functioned properly, a discrepancy report (DR) did not have to be opened. The TPS was then passed along to Quality Engineering (QE) for review and ultimately given to Quality Assurance (QA). Other projects: In addition to my main projects, I participated in other tasks, including troubleshooting an EEG headband, volunteering for an ultrasound training research study, and conformal-coating printed circuit boards. My internship at SF4 has helped me understand how space systems hardware development for the ISS fits into NASA's mission and vision.
Eleven quick tips for architecting biomedical informatics workflows with cloud computing.
Cole, Brian S; Moore, Jason H
2018-03-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
Generic algorithms for high performance scalable geocomputing
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general-purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general-purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute-intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References: Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K.
and Bierkens, M.F.P., 2010, A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502, Link. Best Paper Award 2010: Software and Decision Support. Van Beek, L. P. H., Y. Wada, and M. F. P. Bierkens. 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research. 47. Verstegen, J. A., D. Karssenberg, F. van der Hilst, and A. P. C. Faaij. 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53:121-136.
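As an illustration of the pattern described above (a minimal sketch, not Fern's actual API), the following Python code separates the distribution logic for a focal operation, here a 3x3 mean filter farmed out over CPU cores with halo rows, from the model-specific operation itself; all function and variable names are invented for the example.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def focal_mean(block):
        # Model-specific logic: a 3x3 mean filter on a block that already
        # carries a one-cell halo of neighboring rows/columns.
        rows, cols = block.shape
        out = np.zeros((rows - 2, cols - 2))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                out += block[1 + di : rows - 1 + di, 1 + dj : cols - 1 + dj]
        return out / 9.0

    def parallel_focal_mean(grid, workers=4):
        # Distribution logic, kept separate from the operation itself:
        # pad, split into row bands with halos, farm out, reassemble.
        padded = np.pad(grid, 1, mode="edge")
        bands = np.array_split(np.arange(grid.shape[0]), workers)
        blocks = [padded[b[0] : b[-1] + 3, :] for b in bands]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return np.vstack(list(pool.map(focal_mean, blocks)))

    if __name__ == "__main__":
        dem = np.random.rand(1000, 1000)
        smoothed = parallel_focal_mean(dem)
        print(smoothed.shape)

A library like Fern performs this kind of splitting and reassembly internally, so the model code only ever sees the equivalent of focal_mean.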
Metascalable molecular dynamics simulation of nano-mechano-chemistry
NASA Astrophysics Data System (ADS)
Shimojo, F.; Kalia, R. K.; Nakano, A.; Nomura, K.; Vashishta, P.
2008-07-01
We have developed a metascalable (or 'design once, scale on new architectures') parallel application-development framework for first-principles based simulations of nano-mechano-chemical processes on emerging petaflops architectures based on spatiotemporal data locality principles. The framework consists of (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms, (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these scalable algorithms onto hardware. The EDC-STEP-HCD framework exposes and expresses maximal concurrency and data locality, thereby achieving parallel efficiency as high as 0.99 for 1.59-billion-atom reactive force field molecular dynamics (MD) and 17.7-million-atom (1.56 trillion electronic degrees of freedom) quantum mechanical (QM) MD in the framework of the density functional theory (DFT) on adaptive multigrids, in addition to 201-billion-atom nonreactive MD, on 196 608 IBM BlueGene/L processors. We have also used the framework for automated execution of adaptive hybrid DFT/MD simulation on a grid of six supercomputers in the US and Japan, in which the number of processors changed dynamically on demand and tasks were migrated according to unexpected faults. The paper presents the application of the framework to the study of nanoenergetic materials: (1) combustion of an Al/Fe2O3 thermite and (2) shock initiation and reactive nanojets at a void in an energetic crystal.
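The embedded divide-and-conquer idea rests on spatial locality: binning atoms into cells no smaller than the interaction cutoff makes short-range neighbor search O(N) rather than O(N^2). A minimal Python sketch of that textbook cell-list scheme (illustrating the principle, not the EDC framework's actual code) follows.

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def neighbor_pairs(positions, box, cutoff):
        # Bin atoms into cells of edge >= cutoff, then search only the
        # 27 neighboring cells of each atom's cell.
        ncell = np.maximum((box / cutoff).astype(int), 1)
        cell_size = box / ncell
        cells = defaultdict(list)
        for i, r in enumerate(positions):
            cells[tuple((r / cell_size).astype(int) % ncell)].append(i)
        pairs = set()
        for c, members in cells.items():
            for off in product((-1, 0, 1), repeat=3):
                nb = tuple((np.array(c) + off) % ncell)
                for a in members:
                    for b in cells.get(nb, ()):
                        if a < b:
                            d = positions[a] - positions[b]
                            d -= box * np.round(d / box)   # periodic wrap
                            if d @ d < cutoff * cutoff:
                                pairs.add((a, b))
        return pairs

    positions = np.random.rand(2000, 3) * 20.0
    print(len(neighbor_pairs(positions, np.array([20.0] * 3), cutoff=2.5)))

Because each cell interacts only with its neighbors, cells map naturally onto the hierarchical decomposition over processors that the HCD framework describes.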
An Enabling Technology for New Planning and Scheduling Paradigms
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth
2004-01-01
The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "task models," from the scientists and technologists for the tasks that are to be scheduled. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next, a cadre with knowledge of the vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, the models are modified to be compatible with the scheduling engine. Then the models are submitted to the scheduling engine for automatic scheduling or, when requirements are expressed in notes, the timeline is built manually. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components: (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphical methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models without the intervention of a scheduling expert. The algorithm is tuned for the constraint hierarchies and the complex temporal relationships provided by the modeling schema. It has an extensive search algorithm that can exploit timing flexibilities and constraint and relationship options. (3) An innovative architecture allows multiple remote users to simultaneously model science and technology requirements and other users to model vehicle and hardware characteristics. The architecture allows the remote users to submit scheduling requests directly to the scheduling engine and immediately see the results. These three components are integrated so that science and technology experts with no knowledge of the vehicle or hardware subsystems and no knowledge of the internal workings of the scheduling engine can build and submit scheduling requests and see the results. The immediate feedback will hone the users' modeling skills and ultimately enable them to produce the desired timeline. This paper summarizes the three components of the enabling technology and describes how this technology would make a new paradigm possible.
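To make the flavor of such a schema concrete, here is a hypothetical miniature in Python: tasks carry resource constraints and temporal relationships, and a deliberately naive placement routine honors the latter. All names and structures are invented for illustration; the real schema is graphical and far richer.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        duration: int                                 # minutes
        needs: dict = field(default_factory=dict)     # resource -> amount
        after: list = field(default_factory=list)     # (predecessor, min gap)

    crew_photo = Task("crew_photo", 15, needs={"crew": 1, "camera": 1})
    downlink = Task("downlink", 30, needs={"ku_band": 1},
                    after=[(crew_photo, 10)])

    def earliest_start(task, schedule):
        # Greedy placement honoring only the temporal relations; a real
        # engine would also search over resource availability and the
        # constraint hierarchy's options.
        return max([schedule[t.name] + t.duration + gap
                    for t, gap in task.after], default=0)

    schedule = {}
    for t in (crew_photo, downlink):
        schedule[t.name] = earliest_start(t, schedule)
    print(schedule)   # {'crew_photo': 0, 'downlink': 25}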
NASA Technical Reports Server (NTRS)
1985-01-01
Task 2 in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make design/programmatic decisions. This volume identifies the preferred options in the programmatic category and characterizes these options with respect to performance attributes, constraints, costs, and risks. The programmatic category includes methods used to administrate/manage the development, operation and maintenance of the SSDS. The specific areas discussed include standardization/commonality; systems management; and systems development, including hardware procurement, software development and system integration, test and verification.
Software support environment design knowledge capture
NASA Technical Reports Server (NTRS)
Dollman, Tom
1990-01-01
The objective of this task is to assess the potential for using the software support environment (SSE) workstations and associated software for design knowledge capture (DKC) tasks. This assessment will include the identification of required capabilities for DKC and hardware/software modifications needed to support DKC. Several approaches to achieving this objective are discussed and interim results are provided: (1) research into the problem of knowledge engineering in a traditional computer-aided software engineering (CASE) environment, like the SSE; (2) research into the problem of applying SSE CASE tools to develop knowledge based systems; and (3) direct utilization of SSE workstations to support a DKC activity.
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
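The approach can be sketched in a few lines of Python: pre-map the vertices of a polar tessellation through a radial distortion model, then let the texture hardware interpolate between vertices per pixel. The one-parameter model r_d = r_u(1 + k1*r_u^2) is assumed here purely for illustration.

    import numpy as np

    def polar_mesh(k1, n_rings=16, n_spokes=32, r_max=1.0):
        # Vertices on concentric rings; each undistorted vertex is paired
        # with its distorted texture coordinate under the assumed model.
        verts, tex = [], []
        for r in np.linspace(0.0, r_max, n_rings + 1):
            rd = r * (1.0 + k1 * r * r)
            for a in np.linspace(0.0, 2 * np.pi, n_spokes, endpoint=False):
                verts.append((r * np.cos(a), r * np.sin(a)))
                tex.append((rd * np.cos(a), rd * np.sin(a)))
        return np.array(verts), np.array(tex)

    verts, tex = polar_mesh(k1=-0.18)
    # Hand verts/tex to the graphics API as a triangle mesh; the GPU's
    # texture interpolation then performs the per-pixel correction.

Placing vertices along rings and spokes matches the radial symmetry of lens distortion, which is why the polar mesh is more accurate than a grid mesh of the same vertex budget.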
Mars Science Laboratory Flight Software Boot Robustness Testing Project Report
NASA Technical Reports Server (NTRS)
Roth, Brian
2011-01-01
On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not readily support long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher-fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.
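In outline, such a harness reduces to a loop over starting configurations; the stub class below stands in for the real simulation interface, which is not described in the abstract.

    import itertools, random

    class SimStub:
        # Hypothetical stand-in for the flight-software simulation wrapper.
        def load_config(self, cfg):
            self.cfg = cfg
        def run_until(self, event, timeout_s):
            # Placeholder for an actual simulated boot run.
            return random.random() > 0.05

    results = []
    for battery, bank in itertools.product(["full", "half", "low"],
                                           ["bank_a", "bank_b"]):
        sim = SimStub()
        sim.load_config({"battery": battery, "boot_bank": bank})
        results.append((battery, bank, sim.run_until("boot_complete", 300)))

    for battery, bank, ok in results:
        print(f"{battery:>5} / {bank}: {'PASS' if ok else 'FAIL'}")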
NASA Technical Reports Server (NTRS)
Mckee, James W.
1988-01-01
This final report describes the accomplishments of the General Purpose Intelligent Sensor Interface task of the Applications of Artificial Intelligence to Space Station grant for the period from October 1, 1987 through September 30, 1988. Portions of the First Biannual Report that have not been revised are not repeated here but only referenced. The goal is to develop an intelligent sensor system that will simplify the design and development of expert systems using sensors of physical phenomena as a source of data. This research will concentrate on the integration of image processing sensors and voice processing sensors with a computer designed for expert system development. The result of this research will be the design and documentation of a system in which the user will not need to be an expert in such areas as image processing algorithms, local area networks, image processor hardware selection or interfacing, television camera selection, voice recognition hardware selection, or analog signal processing. The user will be able to access data from video or voice sensors through standard LISP statements without any need to know about the sensor hardware or software.
Fastener Capture Plate Technology to Contain On-Orbit Debris
NASA Technical Reports Server (NTRS)
Eisenhower, Kevin
2010-01-01
The Fastener Capture Plate technology was developed to solve the problem of capturing loose hardware and small fasteners, items that were not originally intended to be disengaged in microgravity, thus preventing them from becoming space debris. This technology was incorporated into astronaut tools designed and successfully used on NASA's Hubble Space Telescope Servicing Mission #4. The technology's ultimate benefit is that it allows a very time-efficient method for disengaging fasteners and removing hardware while minimizing the chances of losing parts or generating debris. The technology aims to simplify the manual labor required of the operator. It does so by optimizing visibility and access to the work site and minimizing the operator's need to be concerned with debris while performing the operations. It has a range of unique features that were developed to minimize task time, as well as maximize the ease and confidence of the astronaut operator. This paper describes the technology and the astronaut tools developed specifically for a complicated on-orbit repair, and it includes photographs of the hardware being used in outer space.
The dynamical analysis of modified two-compartment neuron model and FPGA implementation
NASA Astrophysics Data System (ADS)
Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao
2017-10-01
The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels, and more detailed morphologies, and the implementation of biological neural networks is a task of huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement a reduced two-compartment model which retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, and it can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
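The core trick, replacing a smooth nonlinearity with a few linear segments so that evaluation needs only compares, adds, and multiplies (operations that map cheaply onto FPGA fabric), can be illustrated in Python. The breakpoint count and the sigmoid target are assumptions of this sketch, not the paper's actual equations.

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    # Precompute breakpoints once; np.interp then evaluates the
    # piecewise-linear approximation (inputs beyond the range are
    # clamped to the endpoint values).
    breakpoints = np.linspace(-8.0, 8.0, 9)   # 8 linear segments
    values = sigmoid(breakpoints)

    def sigmoid_pwl(v):
        return np.interp(v, breakpoints, values)

    v = np.linspace(-10.0, 10.0, 1001)
    print("max abs error:", np.max(np.abs(sigmoid_pwl(v) - sigmoid(v))))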
Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup
2009-01-01
For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
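As a hedged illustration of the algorithmic side (not the authors' circuit), a one-level Haar DWT followed by thresholding of small detail coefficients captures the essence of DWT-based neural-signal compression:

    import numpy as np

    def haar_level1(x):
        # One level of the Haar DWT: pairwise averages (approximation)
        # and pairwise differences (detail); x must have even length.
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return a, d

    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 1.0, 1024)
    a, d = haar_level1(signal)

    # Crude compression: discard small detail coefficients.
    d[np.abs(d) < 0.5] = 0.0
    kept = np.count_nonzero(d) + a.size
    print(f"coefficients kept: {kept}/{signal.size}")

In the preferential-design view, the approximation path would be the "critical" component designed conservatively, while the heavily thresholded detail path tolerates aggressive voltage scaling.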
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.
Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre
2017-06-01
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
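A software analogue of the time-multiplexing idea, one shared update routine serving 64 stored neuron states per tick, might look like the following sketch; a leaky integrate-and-fire update is assumed purely for illustration, as the NEF core's actual neuron model differs.

    import numpy as np

    N = 64                        # virtual neurons per physical neuron
    v = np.zeros(N)               # membrane states, kept in (block) RAM
    tau, v_th = 20.0, 1.0

    def physical_neuron(v_i, i_in):
        # The single shared update circuit: one integrate-and-fire step.
        v_i += (i_in - v_i) / tau
        spike = v_i >= v_th
        return (0.0 if spike else v_i), spike

    def tick(inputs):
        # Time multiplexing: the one update circuit is applied to each
        # of the 64 stored states in turn within a single tick.
        spikes = []
        for i in range(N):
            v[i], s = physical_neuron(v[i], inputs[i])
            spikes.append(s)
        return spikes

    for _ in range(100):
        out = tick(np.random.rand(N) * 0.2)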
Towards Evolving Electronic Circuits for Autonomous Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris
2000-01-01
The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
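A minimal genetic-algorithm skeleton of the kind described can be sketched in Python; for brevity it evolves only two device values to meet a target RC low-pass cutoff, whereas the paper's system also evolves circuit size and topology.

    import math, random

    TARGET_HZ = 1000.0                      # desired RC low-pass cutoff

    def fitness(genome):
        r, c = genome                       # resistance (ohm), capacitance (F)
        cutoff = 1.0 / (2.0 * math.pi * r * c)
        return -abs(cutoff - TARGET_HZ)     # higher is better

    def mutate(genome, scale=0.1):
        return tuple(g * (1.0 + random.uniform(-scale, scale)) for g in genome)

    pop = [(random.uniform(1e2, 1e5), random.uniform(1e-9, 1e-6))
           for _ in range(50)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                    # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in range(40)]

    best = max(pop, key=fitness)
    print(best, "cutoff:", 1.0 / (2.0 * math.pi * best[0] * best[1]))

In a coevolutionary variant, the fitness cases themselves would form a second population whose difficulty adapts alongside circuit proficiency, which is the alignment effect the paper investigates.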
NASA Technical Reports Server (NTRS)
Haddad, Michael E.
2008-01-01
On-Orbit Constraints Testing (OOCT) refers to mating flight hardware together on the ground before it is mated on-orbit. The concept seems simple, but it can be difficult to perform such operations on the ground when the flight hardware is designed to be mated on-orbit in the zero-g and/or vacuum environment of space. Also, some of the items are manufactured years apart, which raises the question of how mating tasks are performed when one piece is already on-orbit before its mating piece has been built. Both the Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) OOCTs performed at Kennedy Space Center will be presented in this paper. Details include how OOCTs should mimic on-orbit operational scenarios, a series of photographs taken during OOCTs performed on International Space Station (ISS) flight elements, and lessons learned as a result of the OOCTs. The paper concludes with possible applications to Moon and Mars surface operations planned for the Constellation Program.
An experiment in vision based autonomous grasping within a reduced gravity environment
NASA Technical Reports Server (NTRS)
Grimm, K. A.; Erickson, J. D.; Anderson, G.; Chien, C. H.; Hewgill, L.; Littlefield, M.; Norsworthy, R.
1992-01-01
The National Aeronautics and Space Administration's Reduced Gravity Program (RGP) offers opportunities for experimentation in gravities of less than one-g. The Extravehicular Activity Helper/Retriever (EVAHR) robot project of the Automation and Robotics Division at the Lyndon B. Johnson Space Center in Houston, Texas, is undertaking a task that will culminate in a series of tests in simulated zero-g using this facility. A subset of the final robot hardware consisting of a three-dimensional laser mapper, a Robotics Research 807 arm, a Jameson JH-5 hand, and the appropriate interconnect hardware/software will be used. This equipment will be flown on the RGP's KC-135 aircraft. This aircraft will fly a series of parabolas creating the effect of zero-g. During the periods of zero-g, a number of objects will be released in front of the fixed-base robot hardware in both static and dynamic configurations. The system will then inspect the object, determine the object's pose, plan a grasp strategy, and execute the grasp. This must all be accomplished in the approximately 27 seconds of zero-g.
A curriculum for real-time computer and control systems engineering
NASA Technical Reports Server (NTRS)
Halang, Wolfgang A.
1990-01-01
An outline of a syllabus for the education of real-time-systems engineers is given. This comprises the treatment of basic concepts, real-time software engineering, and programming in high-level real-time languages, real-time operating systems with special emphasis on such topics as task scheduling, hardware architectures, and especially distributed automation structures, process interfacing, system reliability and fault-tolerance, and integrated project development support systems. Accompanying course material and laboratory work are outlined, and suggestions for establishing a laboratory with advanced, but low-cost, hardware and software are provided. How the curriculum can be extended into a second semester is discussed, and areas for possible graduate research are listed. The suitable selection of a high-level real-time language and supporting operating system for teaching purposes is considered.
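A worked example that fits naturally into such a syllabus is the Liu and Layland rate-monotonic schedulability bound, U = sum(Ci/Ti) <= n(2^(1/n) - 1); the task set below is invented for illustration.

    # Rate-monotonic bound: n periodic tasks are schedulable under RM
    # priorities if total utilization stays below n*(2^(1/n) - 1).
    tasks = [(1, 4), (2, 8), (1, 20)]       # (compute time, period) in ms

    n = len(tasks)
    u = sum(c / t for c, t in tasks)        # 0.25 + 0.25 + 0.05 = 0.55
    bound = n * (2 ** (1 / n) - 1)          # ~0.780 for n = 3
    print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {u <= bound}")

The bound is sufficient but not necessary, which makes it a useful classroom springboard to exact response-time analysis.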
Performance of Adsorption - Based CO2 Acquisition Hardware for Mars ISRU
NASA Technical Reports Server (NTRS)
Finn, John E.; Mulloth, Lila M.; Borchers, Bruce A.; Luna, Bernadette (Technical Monitor)
2000-01-01
Chemical processing of the dusty, low-pressure Martian atmosphere typically requires conditioning and compression of the gases as first steps. A temperature-swing adsorption process can perform these tasks using nearly solid-state hardware and with relatively low power consumption compared to alternative processes. In addition, the process can separate the atmospheric constituents, producing both pressurized CO2 and a buffer gas mixture of nitrogen and argon. To date we have developed and tested adsorption compressors at scales appropriate for the near-term robotic missions that will lead the way to ISRU-based human exploration missions. In this talk we describe the characteristics, testing, and performance of these devices. We also discuss scale-up issues associated with meeting the processing demands of sample return and human missions.
Modifications to the rapid melt/rapid quench and transparent polymer video furnaces for the KC-135
NASA Technical Reports Server (NTRS)
Smith, Guy A.; Kosten, Sue E.; Workman, Gary L.
1990-01-01
Given here is a summary of tasks performed on two furnace systems, the Transparent Polymer (TPF) and the Rapid Melt/Rapid Quench (RMRQ) furnaces, to be used aboard NASA's KC-135. It was determined that major changes were needed for both furnaces to operate according to the scientific investigators' experiment parameters. Discussed here are what the problems were, what was required to solve the problems, and possible future enhancements. It was determined that the enhancements would be required for the furnaces to perform at their optimal levels. Services provided include hardware and software modifications, Safety DataPackage documentation, ground based testing, transportation to and from Ellington Air Field, operation of hardware during KC-135 flights, and post-flight data processing.
Post-Shuttle EVA Operations on ISS
NASA Technical Reports Server (NTRS)
West, William; Witt, Vincent; Chullen, Cinda
2010-01-01
The expected retirement of the NASA Space Transportation System (also known as the Space Shuttle ) by 2011 will pose a significant challenge to Extra-Vehicular Activities (EVA) on-board the International Space Station (ISS). The EVA hardware currently used to assemble and maintain the ISS was designed assuming that it would be returned to Earth on the Space Shuttle for refurbishment, or if necessary for failure investigation. With the retirement of the Space Shuttle, a new concept of operations was developed to enable EVA hardware (Extra-vehicular Mobility Unit (EMU), Airlock Systems, EVA tools, and associated support hardware and consumables) to perform ISS EVAs until 2015, and possibly beyond to 2020. Shortly after the decision to retire the Space Shuttle was announced, the EVA 2010 Project was jointly initiated by NASA and the One EVA contractor team. The challenges addressed were to extend the operating life and certification of EVA hardware, to secure the capability to launch EVA hardware safely on alternate launch vehicles, to protect for EMU hardware operability on-orbit, and to determine the source of high water purity to support recharge of PLSSs (no longer available via Shuttle). EVA 2010 Project includes the following tasks: the development of a launch fixture that would allow the EMU Portable Life Support System (PLSS) to be launched on-board alternate vehicles; extension of the EMU hardware maintenance interval from 3 years (current certification) to a minimum of 6 years (to extend to 2015); testing of recycled ISS Water Processor Assembly (WPA) water for use in the EMU cooling system in lieu of water resupplied by International Partner (IP) vehicles; development of techniques to remove & replace critical components in the PLSS on-orbit (not routine); extension of on-orbit certification of EVA tools; and development of an EVA hardware logistical plan to support the ISS without the Space Shuttle. Assumptions for the EVA 2010 Project included no more than 8 EVAs per year for ISS EVA operations in the Post-Shuttle environment and limited availability of cargo upmass on IP launch vehicles. From 2010 forward, EVA operations on-board the ISS without the Space Shuttle will be a paradigm shift in safely operating EVA hardware on orbit and the EVA 2010 effort was initiated to accommodate this significant change in EVA evolutionary history. 1
On the Selection of Models for Runtime Prediction of System Resources
NASA Astrophysics Data System (ADS)
Casolari, Sara; Colajanni, Michele
Applications and services delivered through large Internet data centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources, and dynamic migration. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need to evaluate the system's global behavior and change it when the evaluation indicates that the intended goals are not being accomplished or that relevant anomalies are occurring. Decision algorithms have to satisfy constraints at different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on time series coming from samples of monitored system resources, such as disk, CPU, and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability because measurements are affected by noise due to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Second, there are no existing criteria that can help us choose a suitable prediction model and related parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model statically or dynamically. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
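As a concrete, if simplified, instance of the kind of short-term predictor discussed: smoothing noisy CPU-utilization samples with an exponentially weighted moving average before issuing a one-step-ahead prediction. The synthetic trace and the fixed smoothing parameter are assumptions of this sketch.

    import numpy as np

    def ewma(series, alpha=0.3):
        # Exponentially weighted moving average: filters measurement
        # noise before prediction, since raw samples are too erratic.
        out = np.empty_like(series)
        out[0] = series[0]
        for t in range(1, len(series)):
            out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
        return out

    rng = np.random.default_rng(1)
    cpu = np.clip(0.5 + 0.2 * np.sin(np.arange(300) / 20)
                  + rng.normal(0, 0.15, 300), 0, 1)

    smooth = ewma(cpu)
    pred = smooth[:-1]     # one-step-ahead: predict the last smoothed value
    print("MAE last-raw-value:", np.mean(np.abs(cpu[1:] - cpu[:-1])))
    print("MAE EWMA predictor:", np.mean(np.abs(cpu[1:] - pred)))

Choosing alpha statically versus adapting it online is exactly the static-versus-dynamic parameter question the chapter raises.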
ERIC Educational Resources Information Center
Sales, Anthony; Evans, Shirley; Musgrove, Nick; Homfray, Richard
2006-01-01
Potentially, computers can balance some of the effects of visual impairment and provide equality of opportunity (Gerber, 2003). Students' individual needs entail that they and their teachers have access to a range of assistive technologies that may vary according to the task as well as to the learner. A dual output graphics card with a twin…
Methods for design and evaluation of integrated hardware-software systems for concurrent computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) sparse matrix iterative solver in PISCES Fortran; (5) image processing application of PISCES; and (6) a formal model of concurrent computation being developed.
STS-43 crewmembers perform various tasks on OV-104's aft flight deck
1991-08-11
STS043-37-012 (2-11 Aug 1991) --- Three STS-43 astronauts are busy at work onboard the earth-orbiting space shuttle Atlantis. Astronaut Shannon W. Lucid is pictured performing one of several tests on computer hardware with space station applications in mind. Sharing the aft flight deck with Lucid are Michael A. Baker (left), pilot, and John E. Blaha, mission commander.
NASA Technical Reports Server (NTRS)
Perkinson, J. A.
1974-01-01
The application of associative memory processor equipment to conventional host-processor systems is discussed. Efforts were made to demonstrate how such application relieves the task burden of conventional systems and enhances system speed and efficiency. Data cover comparative theoretical performance analysis, demonstration of expanded growth capabilities, and demonstrations of actual hardware in a simulated environment.
Hardening digital systems with distributed functionality: robust networks
NASA Astrophysics Data System (ADS)
Vaskova, Anna; Portela-Garcia, Marta; Garcia-Valderas, Mario; López-Ongil, Celia; Portilla, Jorge; Valverde, Juan; de la Torre, Eduardo; Riesgo, Teresa
2013-05-01
Collaborative hardening and hardware redundancy are nowadays the most interesting solutions in terms of fault tolerance achieved and the low extra cost imposed on the project budget. Thanks to the powerful and cheap digital devices available in the market, extra processing capabilities can be used for redundant tasks, not only in early data processing (sensed data) but also in routing and interfacing.
VLSI synthesis of digital application specific neural networks
NASA Technical Reports Server (NTRS)
Beagles, Grant; Winters, Kel
1991-01-01
Neural networks tend to fall into two general categories: (1) software simulations, or (2) custom hardware that must be trained. The scope of this project is the merger of these two classifications into a system whereby a software model of a network is trained to perform a specific task and the results used to synthesize a standard cell realization of the network using automated tools.
Microcontroller uses in Long-Duration Ballooning
NASA Astrophysics Data System (ADS)
Jones, Joseph
This paper discusses how microcontrollers are being utilized to fulfill the demands of long duration ballooning (LDB) and the advantages of doing so. The Columbia Scientific Balloon Facility (CSBF) offers the service of launching high-altitude balloons (120,000 ft) that provide an over-the-horizon telemetry system and platform for scientific research payloads to collect data. CSBF has utilized microcontrollers to address multiple tasks and functions which were previously performed by more complex systems. A microcontroller system has recently been developed and programmed in-house to replace our previous backup navigation system, which is used on all LDB flights. A similar microcontroller system was developed to be independently launched in Antarctica before the actual scientific payload. This system's function is to transmit its GPS position and a small housekeeping packet so that we can confirm the upper-level float winds are as predicted from satellite-derived models. Microcontrollers have also been used to create test equipment to functionally check out the flight hardware used in our telemetry systems. One test system can be used to quickly determine whether the communication link we provide for the science payloads is functioning properly. Another system was developed to provide us with the ability to easily determine the status of one of our over-the-horizon communication links through a closed-loop system. This test system has given us the capability to provide more field support to science groups than we were able to in years past. Microcontrollers have been adopted for a number of reasons. Using them to fill these needs has given us the ability to quickly design and implement systems which meet flight-critical needs, as well as to perform many of the everyday tasks in LDB. This route has also allowed us to reduce the amount of time required for personnel to perform a number of the tasks required during the initial fabrication and refurbishment of flight hardware systems. The recent use of microcontrollers in the design of both LDB flight hardware and test equipment has demonstrated their adaptability and usefulness in our workplace.
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
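For flavor, a minimal VTK pipeline assembled from Python in the component style described (using today's VTK Python API rather than the 2001-era one the authors worked with) looks like this:

    import vtk

    # Source -> mapper -> actor -> renderer: the reusable pipeline style
    # on which such component libraries are built.
    source = vtk.vtkSphereSource()
    source.SetRadius(25.0)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(source.GetOutputPort())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()

Each stage is swappable, which is what lets the same skeleton serve image viewing, volume rendering, or tracked-instrument display.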
Andersen, Pia; Lindgaard, Anne-Mette; Prgomet, Mirela; Creswick, Nerida; Westbrook, Johanna I
2009-08-04
Selecting the right mix of stationary and mobile computing devices is a significant challenge for system planners and implementers. There is very limited research evidence upon which to base such decisions. We aimed to investigate the relationships between clinician role, clinical task, and selection of a computer hardware device in hospital wards. Twenty-seven nurses and eight doctors were observed for a total of 80 hours as they used a range of computing devices to access a computerized provider order entry system on two wards at a major Sydney teaching hospital. Observers used a checklist to record the clinical tasks completed, devices used, and location of the activities. Field notes were also documented during observations. Semi-structured interviews were conducted after observation sessions. Assessment of the physical attributes of three devices was made: stationary PCs, computers on wheels (COWs), and tablet PCs. Two types of COWs were available on the wards: generic COWs (laptops mounted on trolleys) and ergonomic COWs (an integrated computer and cart device). Heuristic evaluation of the user interfaces was also carried out. The majority (93.1%) of observed nursing tasks were conducted using generic COWs. Most nursing tasks were performed in patients' rooms (57%) or in the corridors (36%), with a small percentage at a patient's bedside (5%). Most nursing tasks related to the preparation and administration of drugs. Doctors on ward rounds conducted 57.3% of observed clinical tasks on generic COWs and 35.9% on tablet PCs. On rounds, 56% of doctors' tasks were performed in the corridors, 29% in patients' rooms, and 3% at the bedside. Doctors not on a ward round conducted 93.6% of tasks using stationary PCs, most often within the doctors' office. Nurses and doctors were observed performing workarounds, such as transcribing medication orders from the computer to paper. The choice of device was related to clinical role, nature of the clinical task, degree of mobility required, including where task completion occurs, and device design. Nurses' work, and clinical tasks performed by doctors during ward rounds, require highly mobile computer devices. Nurses and doctors on ward rounds showed a strong preference for generic COWs over all other devices. Tablet PCs were selected by doctors for only a small proportion of clinical tasks. Even when using mobile devices clinicians completed a very low proportion of observed tasks at the bedside. The design of the devices and ward space configurations place limitations on how and where devices are used and on the mobility of clinical work. In such circumstances, clinicians will initiate workarounds to compensate. In selecting hardware devices, consideration should be given to who will be using the devices, the nature of their work, and the physical layout of the ward.
NASA Astrophysics Data System (ADS)
Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi
2016-08-01
Processing astronomical data to science readiness was and remains a challenge, in particular for multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users, as in a typical university setup. Our approach to this challenge is a flexible framework combining the best of both high-performance (large number of nodes, internal communication) and high-throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool to the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests showing that, today and using existing, commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
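The worker side of such an AMQP Server-Manager-Worker arrangement can be sketched with the pika client; the queue name and message handling below are illustrative, not the ODI pipeline's actual protocol.

    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="reduce_tasks", durable=True)

    def handle(ch, method, properties, body):
        print("processing", body.decode())   # e.g. reduce one exposure here
        ch.basic_ack(delivery_tag=method.delivery_tag)   # report completion

    channel.basic_qos(prefetch_count=1)      # one task per worker at a time
    channel.basic_consume(queue="reduce_tasks", on_message_callback=handle)
    channel.start_consuming()

Because workers only pull from a queue, a manager process can grow or shrink the pool at will; unacknowledged messages are simply redelivered when a worker disappears.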
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Alves, Jeffrey R.; Hutchinson, Sonya L.
1999-01-01
This paper describes the human engineering analysis performed on the Materials Science Research Rack-1 and Quench Module Insert (MSRR-1/QMI) using Transom Jack (Jack) software. The Jack software was used to model a virtual environment consisting of the MSRR-1/QMI hardware configuration and human figures representing the 95th percentile male and 5th percentile female. The purpose of the simulation was to assess the human interfaces in the design for their ability to meet the requirements of the Pressurized Payloads Interface Requirements Document - International Space Program, Revision C (SSP 57000). Jack was used in the evaluation because of its ability to correctly model anthropometric body measurements and the physical behavior of astronauts working in microgravity, which is referred to as the neutral body posture. The Jack model allows evaluation of crewmember interaction with hardware through task simulation, including but not limited to collision avoidance behaviors, hand/eye coordination, reach path planning, and automatic grasping to part contours. Specifically, this virtual simulation depicts the human figures performing the QMI installation and check-out, sample cartridge insertion and removal, and gas bottle drawer removal. These tasks were evaluated in terms of adequate clearance in reach envelopes, adequate accessibility in work envelopes, appropriate line of sight in visual envelopes, and accommodation of the full range of male and female stature for maneuverability. The results of the human engineering analysis virtual simulation indicate that most of the associated requirements of SSP 57000 were met. However, some hardware design considerations and crew procedures modifications are recommended to improve accessibility, provide an adequate work envelope, reduce awkward body posture, and eliminate permanent protrusions.
Planetary Geologic Mapping Python Toolbox: A Suite of Tools to Support Mapping Workflows
NASA Astrophysics Data System (ADS)
Hunter, M. A.; Skinner, J. A.; Hare, T. M.; Fortezzo, C. M.
2017-06-01
The collective focus of the Planetary Geologic Mapping Python Toolbox is to provide researchers with additional means to migrate legacy GIS data, assess the quality of data and analysis results, and simplify common mapping tasks.
Efficient operating system level virtualization techniques for cloud resources
NASA Astrophysics Data System (ADS)
Ansu, R.; Samiksha; Anju, S.; Singh, K. John
2017-11-01
Cloud computing is an advancing technology which provides the services of infrastructure, platform, and software. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so making resources available on demand to satisfy user requirements is essential. The technique by which resources, namely storage, processing power, memory, and network or I/O, are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and para virtualization. In full virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In para virtualization, the guest OS must be modified to run in parallel with other operating systems; for the guest OS to access the hardware, the host OS must provide a virtual machine interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance, and improved security. This paper describes both virtualization techniques and discusses the issues in OS-level virtualization.
Avionics upgrade strategies for the Space Shuttle and derivatives
NASA Astrophysics Data System (ADS)
Swaim, Richard A.; Wingert, William B.
Some approaches aimed at providing a low-cost, low-risk strategy to upgrade the shuttle onboard avionics are described. These approaches allow migration to a shuttle-derived vehicle and provide commonality with Space Station Freedom avionics to the extent practical. Some goals of the Shuttle cockpit upgrade include: offloading of the main computers by distributing avionics display functions, reducing crew workload, reducing maintenance cost, and providing display reconfigurability and context sensitivity. These goals are being met by using a combination of off-the-shelf and newly developed software and hardware. The software will be developed using Ada. Advanced active matrix liquid crystal displays are being used to meet the tight space, weight, and power consumption requirements. Eventually, it is desirable to upgrade the current shuttle data processing system with a system that has more in common with the Space Station data management system. This will involve not only changes in Space Shuttle onboard hardware, but changes in the software. Possible approaches to maximizing the use of the existing software base while taking advantage of new language capabilities are discussed.
Issues Related to Cleaning Complex Geometry Surfaces with ODC-Free Solvents
NASA Technical Reports Server (NTRS)
Bradford, Blake F.; Wurth, Laura A.; Nayate, Pramod D.; McCool, Alex (Technical Monitor)
2001-01-01
Implementing ozone depleting chemicals (ODC)-free solvents into full-scale reusable solid rocket motor cleaning operations has presented problems due to the low vapor pressures of the solvents. Because of slow evaporation, solvent retention is a problem on porous substrates or on surfaces with irregular geometry, such as threaded boltholes, leak check ports, and nozzle backfill joints. The new solvents are being evaluated to replace 1,1,1-trichloroethane, which readily evaporates from these surfaces. Selection of the solvents to be evaluated on full-scale hardware was made based on results of subscale tests performed with flat surface coupons, which did not manifest the problem. Test efforts have been undertaken to address concerns with the slow-evaporating solvents. These concerns include effects on materials due to long-term exposure to solvent, potential migration from bolthole threads to seal surfaces, and effects on bolt loading due to solvent retention in threads. Tests performed to date have verified that retained solvent does not affect materials or hardware performance. Process modifications have also been developed to assist drying, and these can be implemented if additional drying becomes necessary.
NASA Technical Reports Server (NTRS)
Lala, J. H.; Smith, T. B., III
1983-01-01
The software developed for the Fault-Tolerant Multiprocessor (FTMP) is described. The FTMP executive is a timer-interrupt driven dispatcher that schedules iterative tasks which run at 3.125, 12.5, and 25 Hz. Major tasks which run under the executive include system configuration control, flight control, and display. The flight control task includes autopilot and autoland functions for a jet transport aircraft. System Displays include status displays of all hardware elements (processors, memories, I/O ports, buses), failure log displays showing transient and hard faults, and an autopilot display. All software is in a higher order language (AED, an ALGOL derivative). The executive is a fully distributed general purpose executive which automatically balances the load among available processor triads. Provisions for graceful performance degradation under processing overload are an integral part of the scheduling algorithms.
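A rate-group dispatcher of this shape is easy to sketch: a 25 Hz base tick runs the fastest group every iteration, the 12.5 Hz group every second tick, and the 3.125 Hz group every eighth. The Python below illustrates the scheduling pattern only; it is not the AED executive.

    import time

    def flight_control():  pass       # 25 Hz rate group (placeholder)
    def display_update():  pass       # 12.5 Hz rate group (placeholder)
    def config_control():  pass       # 3.125 Hz rate group (placeholder)

    TICK = 1.0 / 25.0                 # base iteration period
    tick = 0
    next_deadline = time.monotonic()
    while tick < 250:                 # run about 10 s for the sketch
        flight_control()
        if tick % 2 == 0:
            display_update()          # 25 / 2 = 12.5 Hz
        if tick % 8 == 0:
            config_control()          # 25 / 8 = 3.125 Hz
        tick += 1
        next_deadline += TICK
        time.sleep(max(0.0, next_deadline - time.monotonic()))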
NASA Astrophysics Data System (ADS)
Da Silva, A.; Sánchez Prieto, S.; Polo, O.; Parra Espada, P.
2013-05-01
Because of the tough robustness requirements in space software development, it is imperative to carry out verification tasks at a very early development stage to ensure that the implemented exception mechanisms work properly. All this should be done long before the real hardware is available. But even if real hardware is available, the verification of software fault tolerance mechanisms can be difficult, since real faulty situations must be systematically and artificially brought about, which can be impossible on real hardware. To solve this problem, the Alcala Space Research Group (SRG) has developed a LEON2 virtual platform (Leon2ViP) with fault injection capabilities. This way it is possible to run the exact same target binary software as runs on the physical system in a more controlled and deterministic environment, allowing stricter requirements verification. Leon2ViP enables unattended and tightly focused fault injection campaigns, not possible otherwise, in order to expose and diagnose flaws in the software implementation early. Furthermore, the use of a virtual hardware-in-the-loop approach makes it possible to carry out preliminary integration tests with the spacecraft emulator or the sensors. The use of Leon2ViP has meant a significant improvement, in both time and cost, in the development and verification processes of the Instrument Control Unit boot software on board Solar Orbiter's Energetic Particle Detector.
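The campaign idea, systematically flipping one bit of simulated state per run and classifying the outcome, can be illustrated as follows; the "software under test" is a toy stand-in, not Leon2ViP's interface.

    def software_under_test(word):
        # Toy stand-in: accept only words carrying a valid tag nibble.
        if (word >> 28) != 0xA:
            raise ValueError("corrupted tag detected")
        return word & 0x0FFFFFFF

    golden = 0xA0012345
    handled, silent = 0, 0
    for bit in range(32):
        faulty = golden ^ (1 << bit)     # inject a single bit flip
        try:
            if software_under_test(faulty) == software_under_test(golden):
                handled += 1             # fault masked: benign
            else:
                silent += 1              # wrong answer, no exception: dangerous
        except ValueError:
            handled += 1                 # fault caught by the check
    print(f"handled or masked: {handled}, silent corruptions: {silent}")

A real campaign would sweep registers, memory, and injection times; the value of the virtual platform is that every such run is deterministic and repeatable.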
Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report
2007-02-05
Created a new SQL Server database for the "PC Configuration" web application; added roles for security, closed 4235, and posted the application to production. Wrote and ran SQL Server scripts to migrate production databases to the new server. Created backup jobs for the new SQL Server databases. Continued the second phase of the TENA demo; extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about
SiMA: A simplified migration assay for analyzing neutrophil migration.
Weckmann, Markus; Becker, Tim; Nissen, Gyde; Pech, Martin; Kopp, Matthias V
2017-07-01
In lung inflammation, neutrophils are the first leukocytes to migrate to an inflammatory site, eliminating pathogens by multiple mechanisms. The term "migration" describes several stages of neutrophil movement to reach the site of inflammation, of which the passage of the interstitium and basal membrane of the airway is necessary to reach the site of bronchial inflammation. Currently, several methods exist (e.g., Boyden chamber, under-agarose assay, or microfluidic systems) to assess neutrophil mobility. However, these methods do not allow for parameterization at the single-cell level; individual neutrophil pathway analysis is still considered challenging. This study sought to develop a simplified yet flexible method to monitor and quantify neutrophil chemotaxis by utilizing commercially available tissue culture hardware, simple video microscopic equipment, and highly standardized tracking. A chemotaxis 3D µ-slide (ibidi) was used with different chemoattractants [interleukin-8 (IL-8), fMLP, and leukotriene B4 (LTB4)] to attract neutrophils in different matrices, such as fibronectin (FN) or human placental matrix (HEM). Migration was recorded for 60 min using phase contrast microscopy with an EVOS FL Cell Imaging System. The images were normalized, and texture-based image segmentation was used to generate neutrophil trajectories. Based on this spatio-temporal information, a comprehensive parameter set is extracted from each time series describing neutrophil motility, including velocity, directness, and chemotaxis. To characterize the latter, a sector analysis was employed, enabling quantification of the neutrophils' response to the chemoattractant. Using this hardware and software framework, we were able to identify typical migration profiles of the chemoattractants IL-8, fMLP, and LTB4, the effect of the matrices FN versus HEM, and the response to different medications: Prednisolone induced a change of direction of migrating neutrophils in FN, but no such effect was observed in human placental matrix. In addition, a comparison of four asthmatic and three non-asthmatic patients gives a first hint of the capability of the SiMA assay in the context of migration-based diagnostics: neutrophils of asthmatic individuals showed an increased proportion of cells migrating toward the vehicle. With SiMA we present a simplified yet flexible platform for cost-effective tracking and quantification of neutrophil migration, based on a simple microscopic video stage, standardized, commercially available µ-fluidic migration chambers, and automated image analysis and track validation software. © 2017 International Society for Advancement of Cytometry.
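The per-cell parameters named above are straightforward to compute from a trajectory; the Python sketch below uses a synthetic track and an invented gradient direction, and is not the SiMA software itself.

    import numpy as np

    def track_parameters(track, dt_s, gradient=np.array([1.0, 0.0])):
        # track: (T, 2) array of x, y positions for one neutrophil.
        steps = np.diff(track, axis=0)
        path_len = np.sum(np.linalg.norm(steps, axis=1))
        displacement = track[-1] - track[0]
        velocity = path_len / (dt_s * len(steps))      # mean speed
        directness = (np.linalg.norm(displacement) / path_len
                      if path_len else 0.0)            # straight-line ratio
        # Sector analysis: does the net movement point up the
        # chemoattractant gradient?
        toward = bool(np.dot(displacement, gradient) > 0)
        return velocity, directness, toward

    rng = np.random.default_rng(2)
    track = np.cumsum(rng.normal([0.6, 0.0], 1.0, (60, 2)), axis=0)
    print(track_parameters(track, dt_s=30.0))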
An Execution Service for Grid Computing
NASA Technical Reports Server (NTRS)
Smith, Warren; Hu, Chaumin
2004-01-01
This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when tasks fail (a frequent occurrence in a large distributed system) or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing to local scheduling systems.
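The central idea, tasks with user-defined starting conditions including tasks that run when other tasks fail, can be sketched in a few lines. This Python toy scheduler is not the IPG implementation; the names and the status model are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    action: callable
    run_after: list = field(default_factory=list)   # run when these succeed
    run_on_fail: list = field(default_factory=list) # run when these fail

def run_job(tasks):
    """Tiny scheduler: fire tasks whose starting conditions are met."""
    status, pending = {}, list(tasks)
    while pending:
        progressed = False
        for t in list(pending):
            if all(status.get(n) == "ok" for n in t.run_after) and \
               all(status.get(n) == "fail" for n in t.run_on_fail):
                try:
                    t.action()
                    status[t.name] = "ok"
                except Exception:
                    status[t.name] = "fail"
                pending.remove(t)
                progressed = True
        if not progressed:
            break  # remaining tasks' conditions can no longer be satisfied
    return status

# Example: cleanup runs only if 'compute' fails.
job = [Task("stage", lambda: None),
       Task("compute", lambda: 1 / 0, run_after=["stage"]),
       Task("cleanup", lambda: print("recovering"), run_on_fail=["compute"])]
print(run_job(job))  # {'stage': 'ok', 'compute': 'fail', 'cleanup': 'ok'}
```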
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
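For a two-class task, the rapidly computable metric in question is the Hotelling observer separability, SNR² = Δμᵀ K⁻¹ Δμ. A minimal numerical sketch, assuming sample-based estimates of class means and covariances (not the authors' code):

```python
import numpy as np

def hotelling_snr(samples_a, samples_b):
    """Hotelling observer SNR for two classes of measured data.

    samples_a, samples_b: (N, M) arrays of M-dimensional measurements
    (e.g., basis-material estimates from many noise realizations).
    """
    mu_a, mu_b = samples_a.mean(0), samples_b.mean(0)
    k = 0.5 * (np.cov(samples_a.T) + np.cov(samples_b.T))  # mean covariance
    dmu = mu_a - mu_b
    return float(np.sqrt(dmu @ np.linalg.solve(np.atleast_2d(k), dmu)))
```

Sweeping a system parameter (e.g., the kVp pair) and plotting this SNR against it would trace out one of the parameter optimization curves the paper describes.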
Apollo 15 time and motion study
NASA Technical Reports Server (NTRS)
Kubis, J. F.; Elrod, J. T.; Rusnak, R.; Barnes, J. E.
1972-01-01
A time and motion study of Apollo 15 lunar surface activity led to examination of four distinct areas of crewmen activity. These areas are: an analysis of lunar mobility, a comparative analysis of tasks performed in 1-g training and lunar EVA, an analysis of the metabolic cost of two activities that are performed in several EVAs, and a fall/near-fall analysis. An analysis of mobility showed that the crewmen used three basic mobility patterns (modified walk, hop, side step) while on the lunar surface. These mobility patterns were utilized as adaptive modes to compensate for the uneven terrain and varied soil conditions that the crewmen encountered. A comparison of the time required to perform tasks at the final 1-g lunar EVA training sessions and the time required to perform the same task on the lunar surface indicates that, in almost all cases, it took significantly more time (on the order of 40%) to perform tasks on the moon. This increased time was observed even after extraneous factors (e.g., hardware difficulties) were factored out.
Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 5: Study analysis report
NASA Technical Reports Server (NTRS)
1989-01-01
The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at the Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be on-board the Freedom Space Station. The further analysis performed on the SCS study as part of Task 2 (Perform Studies and Parametric Analysis) of the SCS study contract is summarized. These analyses were performed to resolve open issues remaining after the completion of Task 1 and the publishing of the SCS study issues report. The results of these studies provide inputs into SCS Task 3 (Develop and Present SCS Requirements) and SCS Task 4 (Develop SCS Conceptual Designs). The purpose of these studies is to resolve the issues into usable requirements given the best available information at the time of the study. A list of all the SCS study issues is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyllenhaal, J.
CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading. For simplicity, it does not use MPI by default but it is expected to be run on the resources a threaded MPI task would use (e.g., a portion of a shared memory compute node). Compiling with -DWITH_MPI allows packing one or more nodes with CLOMP tasks and having CLOMP report OpenMP performance for the slowest MPI task. On current systems, the strong scaling performance results for 4, 8, or 16 threads are of the most interest. Suggested weak scaling inputs are provided for evaluating future systems. Since MPI is often used to place at least one MPI task per coherence or NUMA domain, it is recommended to focus OpenMP runtime measurements on a subset of node hardware where it is most possible to have low OpenMP overheads (e.g., within one coherence domain or NUMA domain).
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Liu, Bo; Liu, Zhiwei; Chiu, In-Shiang; Di, MengFu; Wu, YongRen; Wang, Jer-Chyi; Hou, Tuo-Hung; Lai, Chao-Sung
2018-06-20
Memristors with rich interior dynamics of ion migration are promising for mimicking various biological synaptic functions in neuromorphic hardware systems. A graphene-based memristor shows an extremely low energy consumption of less than a femtojoule per spike, by taking advantage of weak surface van der Waals interaction of graphene. The device also shows an intriguing programmable metaplasticity property in which the synaptic plasticity depends on the history of the stimuli and yet allows rapid reconfiguration via an immediate stimulus. This graphene-based memristor could be a promising building block toward designing highly versatile and extremely energy efficient neuromorphic computing systems.
Shoulder Acromioclavicular and Coracoclavicular Ligament Injuries: Common Problems and Solutions.
Wylie, James D; Johnson, Jeremiah D; DiVenere, Jessica; Mazzocca, Augustus D
2018-04-01
Injuries to the acromioclavicular joint and coracoclavicular ligaments are common. Many of these injuries heal with nonoperative management. However, more severe injuries may lead to continued pain and shoulder dysfunction. In these patients, surgical techniques have been described to reconstruct the function of the coracoclavicular ligaments to provide stable relationship between the clavicle and scapula. These surgeries have been fraught with high complication rates including clavicle and coracoid fractures, infection, loss of reduction and fixation, hardware migration, and osteolysis. This article reviews common acromioclavicular and coracoclavicular repair and reconstruction techniques and associated complications, and provides recommendations for prevention and management. Copyright © 2018 Elsevier Inc. All rights reserved.
A communications model for an ISAS to NASA span link
NASA Technical Reports Server (NTRS)
Green, James L.; Mcguire, Robert E.; Lopez-Swafford, Brian
1987-01-01
The authors propose that an initial computer-to-computer communication link use the public packet switched networks (PPSN) Venus-P in Japan and TELENET in the U.S. When the traffic warrants it, this link would then be upgraded to a dedicated leased line that directly connects into the Space Physics Analysis Network (SPAN). The proposed system of hardware and software will easily support migration to such a dedicated link. It therefore provides a cost-effective approach to the network problem. Once a dedicated line becomes operational, it is suggested that the public network link continue to coexist, providing a backup capability.
Repeat migration and disappointment.
Grant, E K; Vanderkamp, J
1986-01-01
This article investigates the determinants of repeat migration among the 44 regions of Canada, using information from a large micro-database which spans the period 1968 to 1971. The explanation of repeat migration probabilities is a difficult task, and this attempt is only partly successful. Many of the explanatory variables are not significant, and the overall explanatory power of the equations is not high. In the area of personal characteristics, the variables related to age, sex, and marital status are generally significant and with expected signs. The distance variable has a strongly positive effect on onward move probabilities. Variables related to prior migration experience have an important impact that differs between return and onward probabilities. In particular, the occurrence of prior moves has a striking effect on the probability of onward migration. The variable representing disappointment, or relative success of the initial move, plays a significant role in explaining repeat migration probabilities. The disappointment variable represents the ratio of actual versus expected wage income in the year after the initial move, and its effect on both repeat migration probabilities is always negative and almost always highly significant. The repeat probabilities diminish after a year's stay in the destination region, but disappointment in the most recent year still has a bearing on the delayed repeat probabilities. While the quantitative impact of the disappointment variable is not large, it is difficult to draw comparisons since similar estimates are not available elsewhere.
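The study's key regressor, disappointment, defined as the ratio of actual to expected wage income in the year after the initial move, enters a model of repeat-migration probability. A hedged sketch of such a model as a logistic regression on synthetic data follows; the column set, coefficients, and data are invented, and the original study's specification may differ.

```python
import numpy as np

def fit_logit(X, y, iters=500, lr=0.5):
    """Plain gradient-ascent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

rng = np.random.default_rng(0)
n = 500
# Hypothetical columns: intercept, standardized age, prior moves, and
# disappointment = actual/expected wage income after the initial move.
age = rng.uniform(18, 65, n)
prior = rng.integers(0, 4, n).astype(float)
disappointment = rng.uniform(0.5, 1.5, n)
X = np.column_stack([np.ones(n), (age - age.mean()) / age.std(),
                     prior, disappointment])
# Stand-in outcome: repeat moves more likely when disappointed (ratio < 1).
y = (rng.random(n) < 1 / (1 + np.exp(2.5 * (disappointment - 1.0)))).astype(float)
w = fit_logit(X, y)
print("disappointment coefficient:", round(w[3], 2))  # expected: negative
```

A negative fitted coefficient on the disappointment column mirrors the paper's finding that higher actual-to-expected income lowers the repeat-migration probability.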
Some Issues in Programming Multi-Mini-Processors
1975-01-01
Hardware and software are to be combined optimally to perform that specialized task. This in essence is the strategy followed by the BBN group in ... large memory is directly addressable. MIXED SOLUTIONS: The most promising approach appears to involve mixing several of the previous solutions ... mini- or micro-computers. Possibly the problem will be solved by avoiding it. Some new minis are appearing on the market now with large physical
Columbus in the Atlantis payload bay during the STS-122 Mission
2008-02-08
S122-E-006275 (8 Feb. 2008) --- Backdropped against the blackness of space, the European Space Agency's Columbus laboratory and associated ESA hardware sit in the aft portion of Space Shuttle Atlantis' cargo bay on the eve of the shuttle's scheduled docking to the International Space Station. The addition of Columbus to the orbital outpost is one of the primary tasks of the STS-122 mission.
Columbus in the Atlantis payload bay during the STS-122 Mission
2008-02-08
S122-E-006273 (8 Feb. 2008) --- Backdropped against a cloud-covered portion of Earth, the European Space Agency's Columbus laboratory and associated ESA hardware sit in the aft section of Space Shuttle Atlantis' cargo bay on the eve of the shuttle's scheduled docking to the International Space Station. The addition of Columbus to the orbital outpost is one of the primary tasks of the STS-122 mission.
Coal gasification systems engineering and analysis. Appendix H: Work breakdown structure
NASA Technical Reports Server (NTRS)
1980-01-01
A work breakdown structure (WBS) is presented which encompasses the multiple facets (hardware, software, services, and other tasks) of the coal gasification program. The WBS is shown to provide the basis for the following: management and control; cost estimating; budgeting and reporting; scheduling activities; organizational structuring; specification tree generation; weight allocation and control; and procurement and contracting activities. It also serves as a tool for program evaluation.
A unique challenge: Emergency egress and life support equipment at KSC
NASA Technical Reports Server (NTRS)
Waddell, H. M., Jr.
1975-01-01
As a result of the investigation following the January 1967 fire, which took the lives of three astronauts, materials were developed, flight hardware was modified, and test procedures were rewritten in order to establish the framework within which a more effective rescue concept could be developed. Topics discussed include breathing units, improved life support equipment, miniresuscitators, and hazardous tasks during space shuttle launch and landing operations.
N-CET: Network-Centric Exploitation and Tracking
2009-10-01
At the core of N-CET are information management services that decouple data producers and consumers, allowing reconfiguration to suit mission needs ... Shown around the head-node are different pieces of hardware, including the Sony PlayStation 3 (PS3) nodes used for computationally demanding tasks
ERIC Educational Resources Information Center
Karsh, Kathryn G.
This final report describes activities of a federally funded project which developed an educational computer-assisted instructional program for persons with severe disabilities. A preliminary review of the literature identified specific inadequacies of most software for this population, such as: too few examples of a task or concept thus limiting…
UPenn Multi-Robot Unmanned Vehicle System (MAGIC)
2014-05-05
UPenn Multi-Robot Unmanned Vehicle System (MAGIC), AFOSR Final Report. PI ... user interface: the Strategy/Plan operator allows the system to autonomously task the nearest available UGVs to plan and coordinate their movements and ... threats in a dynamic urban environment with minimal human guidance. The custom hardware systems consist of robust and complementary sensors, integrated
The PASM Parallel Processing System: Hardware Design and Intelligent Operating System Concepts
1986-07-01
Mass Storage Performance Information System
NASA Technical Reports Server (NTRS)
Scheuermann, Peter
2000-01-01
The purpose of this task is to develop a data warehouse to enable system administrators and their managers to gather information by querying the data logs of the MDSDS. Currently, detailed logs capture the activity of the MDSDS internal to the different systems. The elements of this task are requirements analysis, data cleansing, database design, database population, hardware/software acquisition, data transformation, query and report generation, and data mining.
Alloy undercooling experiments
NASA Technical Reports Server (NTRS)
Flemings, Merton C.; Matson, Douglas M.
1995-01-01
The research accomplished during 1995 can be organized into three parts. The first task involves analyzing the results of microgravity experiments carried out using TEMPUS hardware during the IML-2 mission on STS-65. The second part was to finalize ground-based experimentation which supported the above flight sample analysis. The final part was to provide technical support for post-flight mission activities specifically aimed at improving TEMPUS performance for potential future missions.
An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller
NASA Astrophysics Data System (ADS)
Yoshida, Toshio
For the mechatronics equipment controller that controls robots and machine tools, high-speed motion control processing is essential. The software system of the controller, like that of other embedded systems, is composed of three software layers on the dedicated hardware: a real-time kernel layer, a middleware layer, and an application software layer. The application layer at the top is composed of many tasks, and the application function of the system is realized by cooperation between these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which customizing task control is possible solely by changing the program code on the task side, without any changes to the program code of the real-time kernel. To speed up the motion control of mechatronics equipment, the overhead caused by the real-time kernel's task control must be reduced, which in turn requires customizing the task control function. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method and applied it to the control of a multi-system automatic lathe. The speed-up of task cooperation processing was confirmed by combined task control processing in the task-side program code using the internal data non-hiding type real-time kernel ZRK.
STS-121: Discovery Spacewalk Overview Briefing
NASA Technical Reports Server (NTRS)
2006-01-01
The briefing began with the introduction of Tomas Gonzalez-Torres (Lead Extravehicular Activity Officer). The spacewalk team included Piers Sellers (EV-1), Mike Fossum (EV-2), and Mark Kelly (coordinator and pilot). Three new EMUs (space suits) were provided with hardware upgrades (warning systems). The first EVA would take place on flight day 5 and would include the exchange of the three EMUs. The first task was the installation of the blade blocker, a device used to prevent severing of cables. The team would also install the Interface Umbilical System (IUS), which is an extension cord for the mobile transporter. The EVA-2 task would be to replace the old Trailing Umbilical System (TUS) with a new one.
Adler, D; Mahler, Y
1980-04-01
A procedure for automatic detection and digital processing of the maximum first derivative of the intraventricular pressure (dp/dtmax), time to dp/dtmax (t-dp/dt), and beat-to-beat intervals has been developed. The procedure integrates simple electronic circuits with a short program using a simple algorithm for the detection of the points of interest. The tasks of differentiating the pressure signal and detecting the onset of contraction were done by electronics, while the tasks of finding the values of dp/dtmax, t-dp/dt, and beat-to-beat intervals, and all computations needed, were done by software. Software/hardware trade-off considerations and the accuracy and reliability of the system are discussed.
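The division of labor described, electronics for differentiation and onset detection, software for the dp/dtmax, t-dp/dt, and interval computations, can be emulated entirely in software. A minimal sketch follows; the threshold value, sampling rate, and function names are assumptions, not from the paper.

```python
import numpy as np

def dpdt_parameters(pressure, fs=1000.0, onset_thresh=200.0):
    """Beat-by-beat dp/dt_max, time to dp/dt_max, and beat-to-beat intervals.

    pressure: intraventricular pressure (mmHg) sampled at fs Hz;
    onset_thresh: dp/dt level (mmHg/s) taken as the onset of contraction
    (done in hardware in the paper, in software here).
    """
    dpdt = np.gradient(pressure) * fs                  # first derivative, mmHg/s
    above = dpdt > onset_thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    beats = []
    for start, end in zip(onsets, np.r_[onsets[1:], len(dpdt)]):
        seg = dpdt[start:end]
        i = int(np.argmax(seg))
        beats.append({"dpdt_max": float(seg[i]),       # peak derivative
                      "t_dpdt": i / fs})               # time from onset, s
    intervals = np.diff(onsets) / fs                   # beat-to-beat, s
    return beats, intervals
```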
Development problem analysis of correlation leak detector’s software
NASA Astrophysics Data System (ADS)
Faerman, V. A.; Avramchuk, V. S.; Marukyan, V. M.
2018-05-01
In this article, the practical application and the structure of correlation leak detector software are studied and the task of its design is analyzed. In the first part of the paper, the expediency of developing correlation leak detectors for improving the operating efficiency of public utility networks is shown. The functional structure of correlation leak detectors is analyzed and the tasks of their software are defined. In the second part of the paper, several steps in the development of the software package (requirements formation, definition of the program structure, and creation of the software concept) are examined in the context of experience gained with a hardware-software prototype of a correlation leak detector.
Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.
2006-06-01
The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
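The Constructive Solid Geometry scheme mentioned here builds bodies from primitives combined with boolean operations. A minimal sketch of the idea using signed distance functions and point-membership tests, which is generic CSG rather than FLUKA's or SimpleGeo's actual representation:

```python
import numpy as np

# Signed distance primitives: negative inside, positive outside.
def sphere(center, r):
    c = np.asarray(center, float)
    return lambda p: np.linalg.norm(p - c, axis=-1) - r

def box(lo, hi):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    def sdf(p):
        q = np.maximum(lo - p, p - hi)
        return np.max(q, axis=-1)  # exact only inside; fine for membership
    return sdf

# CSG booleans expressed on signed distances.
union        = lambda a, b: (lambda p: np.minimum(a(p), b(p)))
intersection = lambda a, b: (lambda p: np.maximum(a(p), b(p)))
difference   = lambda a, b: (lambda p: np.maximum(a(p), -b(p)))

# A box with a spherical cavity, queried for point membership the way a
# particle-transport geometry kernel would.
body = difference(box([0, 0, 0], [10, 10, 10]), sphere([5, 5, 5], 3))
pts = np.array([[5.0, 5.0, 5.0], [1.0, 1.0, 1.0], [20.0, 5.0, 5.0]])
print(body(pts) < 0)   # [False  True False]: only the second point is inside
```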
Virtual reality hardware and graphic display options for brain-machine interfaces
Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.
2009-01-01
Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069
Framework for teleoperated microassembly systems
NASA Astrophysics Data System (ADS)
Reinhart, Gunther; Anton, Oliver; Ehrenstrasser, Michael; Patron, Christian; Petzold, Bernd
2002-02-01
Manual assembly of minute parts is currently done using simple devices such as tweezers or magnifying glasses. The operator therefore requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising method for overcoming the scaling barrier. However, most of today's telepresence systems are based on proprietary and one-of-a-kind solutions. Frameworks which supply the basic functions of a telepresence system, e.g. to establish flexible communication links that depend on bandwidth requirements or to synchronize distributed components, are not currently available. Large amounts of time and money have to be invested in order to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems that is based on CORBA as a common middleware was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework is based on a distributed architectural concept and is realized in C++. External hardware components such as haptic, video or sensor devices are coupled to the system by means of defined software interfaces. In this case, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.
Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system
NASA Astrophysics Data System (ADS)
Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.
2005-04-01
We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data, with higher detectability rates found for multiple-camera, multiple-pinhole, and high-magnification systems, although it was found that mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
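A channelized Hotelling observer of the kind used here reduces each image to a handful of channel outputs and computes a separability SNR from them. A rough sketch with rotationally symmetric Laguerre-Gauss channels follows; the channel width `a`, the normalization, and the function names are assumptions, not the authors' code.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(n_channels, size, a=10.0):
    """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    g = 2 * np.pi * (x**2 + y**2) / a**2
    ch = [np.exp(-g / 2) * eval_laguerre(j, g) for j in range(n_channels)]
    return np.stack([c.ravel() / np.linalg.norm(c) for c in ch])  # (J, size*size)

def cho_snr(imgs_signal, imgs_noise, channels):
    """CHO SNR from sample image stacks of shape (N, H, W)."""
    vs = imgs_signal.reshape(len(imgs_signal), -1) @ channels.T  # channel outputs
    vn = imgs_noise.reshape(len(imgs_noise), -1) @ channels.T
    k = 0.5 * (np.cov(vs.T) + np.cov(vn.T))      # mean channel covariance
    dv = vs.mean(0) - vn.mean(0)
    w = np.linalg.solve(k, dv)                   # CHO template in channel space
    return float(dv @ w / np.sqrt(w @ k @ w))
```

Applied to the 138 projection images (half signal-present, half signal-absent), such an SNR would give one number per pinhole/magnification configuration, which is how the configurations can be ranked.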
Embedded System Implementation on FPGA System With μCLinux OS
NASA Astrophysics Data System (ADS)
Fairuz Muhd Amin, Ahmad; Aris, Ishak; Syamsul Azmir Raja Abdullah, Raja; Kalos Zakiah Sahbudin, Ratna
2011-02-01
Embedded systems are taking on more complicated tasks as the processors involved become more powerful. Embedded systems have been widely used in many areas such as industry, automotive, medical imaging, communications, speech recognition, and computer vision. Today's complexity requirements in hardware and software call for a flexible system that allows further enhancement of any design without adding new hardware; otherwise, any change in the system design would require changing the processor. To overcome this problem, a System on Programmable Chip (SOPC) has been designed and developed using a Field Programmable Gate Array (FPGA). A soft-core processor, the 32-bit RISC NIOS II, was utilized as the microprocessor core in the FPGA system together with the embedded operating system (OS) μClinux. In this paper, an example of a web server is explained and demonstrated.
LDEF systems special investigation group overview
NASA Technical Reports Server (NTRS)
Mason, Jim; Dursch, Harry
1995-01-01
The Systems Special Investigation Group (Systems SIG), formed by the LDEF Project Office to perform post-flight analysis of LDEF systems hardware, was chartered to investigate the effects of the extended LDEF mission on both satellite and experiment systems and to coordinate and integrate all systems related analyses performed during post-flight investigations. The Systems SIG published a summary report in April, 1992 titled 'Analysis of Systems Hardware Flown on LDEF - Results of the Systems Special Investigation Group' that described findings through the end of 1991. The Systems SIG, unfunded in FY 92 and FY93, has been funded in FY 94 to update this report with all new systems related findings. This paper provides a brief summary of the highlights of earlier Systems SIG accomplishments and describes tasks the Systems SIG has been funded to accomplish in FY 94.
Operating systems. [of computers
NASA Technical Reports Server (NTRS)
Denning, P. J.; Brown, R. L.
1984-01-01
A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. A software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
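The semaphore mechanism mentioned for synchronizing primitive processes is easy to illustrate. A minimal producer/consumer sketch in Python (the buffer and counts are invented for illustration):

```python
import threading

buf = []
mutex = threading.Semaphore(1)   # binary semaphore guarding the shared buffer
items = threading.Semaphore(0)   # counting semaphore: items available

def producer():
    for i in range(3):
        with mutex:              # enter critical section
            buf.append(i)
        items.release()          # signal: one more item available

def consumer():
    for _ in range(3):
        items.acquire()          # block until a producer has signalled
        with mutex:
            print("consumed", buf.pop(0))

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```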
The JPL telerobotic Manipulator Control and Mechanization (MCM) subsystem
NASA Technical Reports Server (NTRS)
Hayati, Samad; Lee, Thomas S.; Tso, Kam; Backes, Paul; Kan, Edwin; Lloyd, J.
1989-01-01
The Manipulator Control and Mechanization (MCM) subsystem of the telerobot system provides the real-time control of the robot manipulators in autonomous and teleoperated modes and real time input/output for a variety of sensors and actuators. Substantial hardware and software are included in this subsystem which interfaces in the hierarchy of the telerobot system with the other subsystems. The other subsystems are: run time control, task planning and reasoning, sensing and perception, and operator control subsystem. The architecture of the MCM subsystem, its capabilities, and details of various hardware and software elements are described. Important improvements in the MCM subsystem over the first version are: dual arm coordinated trajectory generation and control, addition of integrated teleoperation, shared control capability, replacement of the ultimate controllers with motor controllers, and substantial increase in real time processing capability.
NASA Technical Reports Server (NTRS)
Pogorzelski, R. J.; Beckon, R. J.
1997-01-01
The virtual spacecraft concept is embodied in a set of subsystems, either in the form of hardware or computational models, which together represent all, or a portion of, a spacecraft. For example, the telecommunications transponder may be a hardware prototype while the propulsion system may exist only as a simulation. As the various subsystems are realized in hardware, the spacecraft becomes progressively less virtual. This concept is enabled by JPL's Mission System Testbed which is a set of networked workstations running a message passing operating system called "TRAMEL" which stands for Task Remote Asynchronous Message Exchange Layer. Each simulation on the workstations, which may in fact be hardware controlled by the workstation, "publishes" its operating parameters on TRAMEL and other simulations requiring those parameters as input may "subscribe" to them. In this manner, the whole simulation operates as a single virtual system. This paper describes a simulation designed to evaluate a communications link between the earth and the Mars Pathfinder Lander module as it descends under a parachute through the Martian atmosphere toward the planet's surface. This link includes a transmitter and a low gain antenna on the spacecraft and a receiving antenna and receiver on the earth as well as a simulation of the dynamics of the spacecraft. The transmitter, the ground station antenna, the receiver and the dynamics are all simulated computationally while the spacecraft antenna is implemented in hardware on a very simple spacecraft mockup. The dynamics simulation is a record of one output of the ensemble of outputs of a Monte Carlo simulation of the descent. Additionally, the antenna/spacecraft mock-up system was simulated using APATCH, a shooting and bouncing ray code developed by Demaco, Inc. The antenna simulation, the antenna hardware, and the link simulation are all physically located in different facilities at JPL separated by several hundred meters and are linked via the local area network (LAN).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warkentin, H; Bubric, K; Giovannetti, H
2016-06-15
Purpose: As a quality improvement measure, we undertook this work to incorporate usability testing into the implementation procedures for new electronic documents and forms used by four affiliated radiation therapy centers. Methods: A human factors specialist provided training in usability testing for a team of medical physicists, radiation therapists, and radiation oncologists from four radiotherapy centers. A usability testing plan was then developed that included controlled scenarios and standardized forms for qualitative and quantitative feedback from participants, including patients. Usability tests were performed by end users using the same hardware and viewing conditions that are found in the clinical environment. A pilot test of a form used during radiotherapy CT simulation was performed in a single department; feedback informed adaptive improvements to the electronic form, hardware requirements, resource accessibility and the usability testing plan. Following refinements to the testing plan, usability testing was performed at three affiliated cancer centers with different vault layouts and hardware. Results: Feedback from the testing resulted in the detection of 6 critical errors (omissions and inability to complete task without assistance), 6 non-critical errors (recoverable), and multiple suggestions for improvement. Usability problems with room layout were detected at one center and problems with hardware were detected at one center. Upon amalgamation and summary of the results, three key recommendations were presented to the document's authors for incorporation into the electronic form. Documented inefficiencies and patient safety concerns related to the room layout and hardware were presented to administration along with a request for funding to purchase upgraded hardware and accessories to allow a more efficient workflow within the simulator vault. Conclusion: By including usability testing as part of the process when introducing any new document or procedure into clinical use, associated risks can be identified and mitigated before patient care and clinical workflow are impacted.
Dynamic VMs placement for energy efficiency by PSO in cloud computing
NASA Astrophysics Data System (ADS)
Dashti, Seyed Ebrahim; Rahmani, Amir Masoud
2016-03-01
Recently, cloud computing has been growing fast and helping to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between physical machine specifications and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, which decrease profits. To guarantee the quality of service of users' tasks and to improve energy efficiency, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines from overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim, under conditions close to a real environment, demonstrated that our method is able to save as much as 14% more energy while significantly reducing the number of migrations and the simulation time compared with previous works.
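As a rough illustration of PSO-based virtual machine reallocation, the following sketch minimizes a simple power model over VM-to-host placements. It is not the authors' modified PSO; the power constants, loads, and the rounding of continuous particle positions to host indices are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_VMS, N_HOSTS, SWARM, ITERS = 12, 4, 20, 100
vm_load = rng.uniform(0.05, 0.25, N_VMS)        # CPU demand per VM

def power(placement):
    """Idle + linear dynamic power for each switched-on host."""
    util = np.zeros(N_HOSTS)
    for vm, host in enumerate(placement):
        util[host] += vm_load[vm]
    if np.any(util > 1.0):                      # overloaded host: infeasible
        return np.inf
    return np.sum((util > 0) * 70 + util * 30)  # watts, illustrative constants

# Discrete PSO: real-valued positions, truncated to host indices on evaluation.
pos = rng.uniform(0, N_HOSTS, (SWARM, N_VMS))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([power(p.astype(int)) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, N_HOSTS - 1e-9)
    f = np.array([power(p.astype(int)) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
print("best placement:", gbest.astype(int), "power:", pbest_f.min())
```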
Hippocampal Astrocytes in Migrating and Wintering Semipalmated Sandpiper Calidris pusilla
Carvalho-Paulo, Dario; de Morais Magalhães, Nara G.; de Almeida Miranda, Diego; Diniz, Daniel G.; Henrique, Ediely P.; Moraes, Isis A. M.; Pereira, Patrick D. C.; de Melo, Mauro A. D.; de Lima, Camila M.; de Oliveira, Marcus A.; Guerreiro-Diniz, Cristovam; Sherry, David F.; Diniz, Cristovam W. P.
2018-01-01
Seasonal migratory birds return to the same breeding and wintering grounds year after year, and migratory long-distance shorebirds are good examples of this. These tasks require learning and long-term spatial memory abilities that are integrated into a navigational system for repeatedly locating breeding, wintering, and stopover sites. Previous investigations focused on the neurobiological basis of hippocampal plasticity and numerical estimates of hippocampal neurogenesis in birds but only a few studies investigated potential contributions of glial cells to hippocampal-dependent tasks related to migration. Here we hypothesized that the astrocytes of migrating and wintering birds may exhibit significant morphological and numerical differences connected to the long-distance flight. We used as a model the semipalmated sandpiper Calidris pusilla, that migrates from northern Canada and Alaska to South America. Before the transatlantic non-stop long-distance component of their flight, the birds make a stopover at the Bay of Fundy in Canada. To test our hypothesis, we estimated total numbers and compared the three-dimensional (3-D) morphological features of adult C. pusilla astrocytes captured in the Bay of Fundy (n = 249 cells) with those from birds captured in the coastal region of Bragança, Brazil, during the wintering period (n = 250 cells). Optical fractionator was used to estimate the number of astrocytes and for 3-D reconstructions we used hierarchical cluster analysis. Both morphological phenotypes showed reduced morphological complexity after the long-distance non-stop flight, but the reduction in complexity was much greater in Type I than in Type II astrocytes. Coherently, we also found a significant reduction in the total number of astrocytes after the transatlantic flight. Taken together these findings suggest that the long-distance non-stop flight altered significantly the astrocytes population and that morphologically distinct astrocytes may play different physiological roles during migration. PMID:29354035
ISS Microgravity Research Payload Training Methodology
NASA Technical Reports Server (NTRS)
Schlagheck, Ronald; Geveden, Rex (Technical Monitor)
2001-01-01
The NASA Microgravity Research Discipline has multiple categories of science payloads that are being planned and are currently under development to operate on various ISS on-orbit increments. The current program includes six subdisciplines: Materials Science, Fluid Physics, Combustion Science, Fundamental Physics, Cellular Biology, and Macromolecular Biotechnology. All of these experiment payloads will require various degrees of crew interaction and science observation. With the current programs planning to build various facility-class science racks, the crew will need to be trained on basic core operations as well as science background. In addition, many disciplines will use the Express Rack and the Microgravity Science Glovebox (MSG) to utilize the accommodations provided by these facilities for smaller and less complex hardware. The Microgravity disciplines will be responsible for a training program designed to maximize the experiment and hardware throughput as well as for being prepared for various contingencies, covering both anomalies and unexpected experiment observations. The crewmembers will need various levels of training, from simple tasks such as powering on and activating equipment, to extensive training on hardware mode change-out, to observing the cell growth of various types of tissue cultures. Sample replacement will be required for furnaces and combustion-type modules. The Fundamental Physics program will need crew EVA support for module change-out of experiments. Training will take place at various research centers and hardware development locations. Onboard training is expected through various methods, video/digital technology, and limited telecommunication interaction. Since hardware will be designed to operate from a few weeks to multiple research increments, flexibility must be planned into the training approach and procedural skills to optimize output as well as equipment maintainability. Early increment lessons learned will be addressed.
Robotic laboratory for distance education
NASA Astrophysics Data System (ADS)
Luciano, Sarah C.; Kost, Alan R.
2016-09-01
This project involves the construction of a remote-controlled laboratory experiment that can be accessed by online students. The project addresses the need to provide a laboratory experience for students who are taking online courses, approximating an in-class experience. The chosen task for the remote user is an optical engineering experiment, specifically aligning a spatial filter. We instrumented the physical laboratory setup in Tucson, AZ at the University of Arizona. The hardware in the spatial filter experiment is augmented by motors and cameras to allow the user to remotely control the hardware. The user interacts with software on their computer, which communicates with a server via an Internet connection to the host computer in the Optics Laboratory at the University of Arizona. Our final overall system is comprised of several subsystems: the optical experiment set-up, which is a spatial filter experiment; the mechanical subsystem, which interfaces the motors with the micrometers to move the optical hardware; the electrical subsystem, which allows for electrical communications from the remote computer to the host computer to the hardware; and the software subsystem, which is the means by which messages are communicated throughout the system. The goal of the project is to convey as much of an in-lab experience as possible by allowing the user to directly manipulate hardware and receive visual feedback in real time. Thus, the remote user is able to learn important concepts from this particular experiment and to connect theory to the physical world by actually seeing the outcome of a procedure. The latter is a learning experience that is often lost with distance learning and one that this project aims to provide.
Performance Prediction Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen
The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input in runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, begin, and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU-core-level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1/L2/L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code that realistically can be done at best for small examples.
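The time_compute()/tasklist interface described above can be caricatured in a few lines. The parameter values and tasklist counts below are invented for illustration and are not actual PPT model values.

```python
# Illustrative core parameters (clock, cache access times); invented numbers.
CORE = {"clock_hz": 2.6e9, "cycles_per_alu": 1.0,
        "access_time_s": {"L1": 1e-9, "L2": 5e-9, "L3": 20e-9, "RAM": 90e-9}}

def time_compute(core, tasklist):
    """Sum predicted runtime over an unordered set of ALU operations and
    memory accesses, in the spirit of PPT's hardware-model API."""
    t = 0.0
    for kind, count in tasklist:
        if kind == "alu":
            t += count * core["cycles_per_alu"] / core["clock_hz"]
        else:                       # memory access resolved to a cache level
            t += count * core["access_time_s"][kind]
    return t

# Application model for one kernel: the loop structure is kept in the model,
# the computation is replaced by a tasklist (counts made up for a stencil).
tasklist = [("alu", 5e6), ("L1", 4e6), ("L2", 2e5), ("RAM", 1e4)]
print(f"predicted kernel time: {time_compute(CORE, tasklist):.6f} s")
```

The real PPT resolves loads and stores to cache levels itself (via reuse distance distributions); here that resolution is assumed to have already happened.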
Mechanical Design of a Performance Test Rig for the Turbine Air-Flow Task (TAFT)
NASA Technical Reports Server (NTRS)
Forbes, John C.; Xenofos, George D.; Farrow, John L.; Tyler, Tom; Williams, Robert; Sargent, Scott; Moharos, Jozsef
2004-01-01
To support development of the Boeing-Rocketdyne RS84 rocket engine, a full-flow, reaction turbine geometry was integrated into the NASA-MSFC turbine air-flow test facility. A mechanical design was generated which minimized the amount of new hardware while incorporating all test and instrumentation requirements. This paper provides details of the mechanical design for this Turbine Air-Flow Task (TAFT) test rig. The mechanical design process utilized for this task included the following basic stages: conceptual design, preliminary design, detailed design, baseline of design (including configuration control and drawing revision), fabrication, and assembly. During the design process, many lessons were learned that should benefit future test rig design projects. Of primary importance are well-defined requirements early in the design process, a thorough detailed design package, and effective communication with both the customer and the fabrication contractors.
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Extravehicular activity welding experiment
NASA Technical Reports Server (NTRS)
Watson, J. Kevin
1989-01-01
The In-Space Technology Experiments Program (INSTEP) provides an opportunity to explore the many critical questions which can only be answered by experimentation in space. The objective of the Extravehicular Activity Welding Experiment definition project was to define the requirements for a spaceflight experiment to evaluate the feasibility of performing manual welding tasks during EVA. Consideration was given to experiment design, work station design, welding hardware design, payload integration requirements, and human factors (including safety). The results of this effort are presented. Included are the specific objectives of the flight test, details of the tasks which will generate the required data, and a description of the equipment which will be needed to support the tasks. Work station requirements are addressed as are human factors, STS integration procedures and, most importantly, safety considerations. A preliminary estimate of the cost and the schedule for completion of the experiment through flight and postflight analysis are given.
NASA Astrophysics Data System (ADS)
Kyrkou, Christos; Theocharides, Theocharis
2016-07-01
Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms as well as the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to 50 1024×768-pixel images per second with a significantly reduced number of false positives.
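The data-reduction idea, letting cheap visual features veto image regions before the expensive classifier ever sees them, can be sketched as follows. This is a generic software illustration, not the paper's FPGA datapath; the window size, thresholds, and keep fraction are assumptions.

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Frame differencing: mark pixels whose intensity changed noticeably."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def edge_mask(img, thresh=40):
    """Cheap gradient-magnitude edge test (stand-in for a real edge detector)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def candidate_windows(prev, curr, win=24, keep_frac=0.2):
    """Keep only windows where motion and edges coincide; everything else
    is never handed to the (expensive) classification engine."""
    interest = motion_mask(prev, curr) & edge_mask(curr)
    h, w = curr.shape
    wins = []
    for y in range(0, h - win, win):
        for x in range(0, w - win, win):
            if interest[y:y + win, x:x + win].mean() > keep_frac:
                wins.append((y, x))
    return wins  # classifier runs only on these windows
```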
Plane-Based Registration of Several Thousand Laser Scans on Standard Hardware
NASA Astrophysics Data System (ADS)
Wujanz, D.; Schaller, S.; Gielsdorf, F.; Gründig, L.
2018-05-01
The automatic registration of terrestrial laser scans appears to be a solved problem in science as well as in practice. However, this assumption is questionable, especially in the context of large projects where an object of interest is described by several thousand scans. A critical issue inherently linked to this task is memory management, especially if point-cloud-based registration approaches such as the ICP are being deployed. In order to process even thousands of scans on standard hardware, a plane-based registration approach is applied. As a first step, planar features are detected within the unregistered scans. This step drastically reduces the amount of data that has to be handled by the hardware. After determination of corresponding planar features, a pairwise registration procedure is initiated based on a graph that represents topological relations among all scans. For every feature, individual stochastic characteristics are computed that are consequently carried through the algorithm. Finally, a block adjustment is carried out that minimises the residuals between redundantly captured areas. The algorithm is demonstrated on a practical survey campaign featuring a historic town hall. In total, 4853 scans were registered on a standard PC with four processors (3.07 GHz) and 12 GB of RAM.
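Plane-based registration ultimately reduces to estimating a rigid transform from matched plane parameters. A minimal sketch under the plane model n·x = d, using generic textbook math rather than the authors' stochastic, graph-based pipeline:

```python
import numpy as np

def register_from_planes(planes_src, planes_dst):
    """Rigid transform (R, t) aligning scan A to scan B from matched planes.

    Each plane is (n, d) with unit normal n and offset d so that n . x = d.
    Needs >= 3 plane pairs with linearly independent normals.
    """
    Ns = np.array([n for n, _ in planes_src])
    Nd = np.array([n for n, _ in planes_dst])
    # Rotation: align source normals with destination normals (Kabsch/SVD).
    u, _, vt = np.linalg.svd(Nd.T @ Ns)
    s = np.diag([1, 1, np.sign(np.linalg.det(u @ vt))])  # keep a proper rotation
    R = u @ s @ vt
    # Translation: for a point transform x' = R x + t, each plane pair gives
    # d_dst = d_src + n_dst . t, a linear system solved by least squares.
    ds = np.array([d for _, d in planes_src])
    dd = np.array([d for _, d in planes_dst])
    t, *_ = np.linalg.lstsq(Nd, dd - ds, rcond=None)
    return R, t
```

Because each scan contributes only a few plane parameters instead of millions of points, memory use per pairwise registration stays tiny, which is the property the abstract leans on.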
Moving formal methods into practice. Verifying the FTPP Scoreboard: Results, phase 1
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Bickford, Mark
1992-01-01
This report documents the Phase 1 results of an effort aimed at formally verifying a key hardware component, called Scoreboard, of a Fault-Tolerant Parallel Processor (FTPP) being built at Charles Stark Draper Laboratory (CSDL). The Scoreboard is part of the FTPP virtual bus that guarantees reliable communication between processors in the presence of Byzantine faults in the system. The Scoreboard implements a piece of control logic that approves and validates a message before it can be transmitted. The goal of Phase 1 was to lay the foundation of the Scoreboard verification. A formal specification of the functional requirements and a high-level hardware design for the Scoreboard were developed. The hardware design was based on a preliminary Scoreboard design developed at CSDL. A main correctness theorem, from which the functional requirements can be established as corollaries, was proved for the Scoreboard design. The goal of Phase 2 is to verify the final detailed design of Scoreboard. This task is being conducted as part of a NASA-sponsored effort to explore integration of formal methods in the development cycle of current fault-tolerant architectures being built in the aerospace industry.
Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications
Stoppe, Jannis; Drechsler, Rolf
2015-01-01
The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC. PMID:25946632
Pre-Flight Tests with Astronauts, Flight and Ground Hardware, to Assure On-Orbit Success
NASA Technical Reports Server (NTRS)
Haddad, Michael E.
2010-01-01
On-Orbit Constraints Tests (OOCT's) refer to mating flight hardware together on the ground before the pieces are mated on-orbit or on the Lunar surface. The concept seems simple, but it can be difficult to perform such operations on the ground when the flight hardware is designed to be mated in the zero-g/vacuum environment of space or the low-g/vacuum environment on the Lunar/Mars surface. Also, some of the items are manufactured years apart, so how are mating tasks performed on these components if one piece is on-orbit or on the Lunar/Mars surface before its mating piece has even been built? Both the Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) OOCT's performed at Kennedy Space Center are presented in this paper. Details include how OOCT's should mimic on-orbit/Lunar/Mars-surface operational scenarios; a series of photographs taken during OOCT's performed on International Space Station (ISS) flight elements; lessons learned as a result of the OOCT's; and possible applications to Moon and Mars surface operations planned for the Constellation Program.
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operation using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
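The abstract does not reproduce the filter formulation; as a minimal sketch of a common MMSE-style adaptive pulse-compression step, built around the covariance solve the coprocessors are said to accelerate (all names and values illustrative):

import numpy as np

def mmse_filter(s, R, delta=1e-3):
    # Adaptive (MMSE-style) pulse-compression filter for waveform s, given
    # an estimated interference covariance matrix R. The linear solve below
    # is the matrix-inversion kernel a hardware accelerator would offload.
    Rl = R + delta * np.eye(len(s))      # diagonal loading for stability
    w = np.linalg.solve(Rl, s)           # R^-1 s without forming the inverse
    return w / (s.conj() @ w)            # unit (distortionless) gain on s

# toy usage: length-13 binary code with a placeholder noise covariance
s = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=complex)
w = mmse_filter(s, np.eye(13) * 2.0)
print(abs(s.conj() @ w))                 # ~1.0: the waveform passes undistorted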
NASA Astrophysics Data System (ADS)
He, Xin
2017-03-01
The ideal observer is widely used in imaging system optimization. One practical question remains open: do the ideal and human observers have the same preference in system optimization and evaluation? Based on the ideal observer's mathematical properties proposed by Barrett et al. and the empirical properties of human observers investigated by Myers et al., I attempt to identify general rules regarding the applicability of the ideal observer in system optimization. In particular, in software optimization the ideal observer pursues data conservation while humans pursue data presentation or perception. In hardware optimization, the ideal observer pursues a system with the maximum total information, while humans pursue a system with the maximum selected (e.g., certain frequency bands) information. These different objectives may result in different system optimizations between human and ideal observers. Thus, an ideal-observer-optimized system is not necessarily optimal for humans. I cite empirical evidence in search and detection tasks, in hardware and software evaluation, in X-ray CT, pinhole imaging, as well as emission computed tomography to corroborate these claims. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA)
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is irrevocable, and hence each task has to be assigned to the most appropriate VM at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute in multiple VMs or in the multiple cores of the same VM. Moreover, jobs arrive during the run time of the server, at varying random intervals and under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling, which makes cloud computing more efficient and thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.
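The abstract does not spell out the algorithm; a greedy baseline of the kind such schedulers refine, trading off VM capability against task length (units and values hypothetical), can be sketched as:

def greedy_schedule(task_lengths, vm_speeds):
    # Assign each task (millions of instructions) to the VM that would
    # complete it earliest, given per-VM speeds (MIPS). Interdependent
    # tasks would add precedence constraints on top of this baseline.
    loads = [0.0] * len(vm_speeds)       # current finish time of each VM
    assignment = []
    for length in sorted(task_lengths, reverse=True):   # longest first
        vm = min(range(len(vm_speeds)),
                 key=lambda i: loads[i] + length / vm_speeds[i])
        loads[vm] += length / vm_speeds[vm]
        assignment.append((length, vm))
    return assignment, max(loads)        # plan and resulting makespan

plan, makespan = greedy_schedule([400, 250, 900, 120, 600], [1000, 500, 750])
print(plan, makespan)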
Chemistry & migration mysteries: Fur holds clues to previous journeys
Cryan, Paul M.
2004-01-01
The bat was not only pregnant but downright angry as I snipped a bit of fur from her back. Within a few seconds, however, she flapped her powerful wings, took off from my hand and disappeared into the night, rejoining thousands of female hoary bats (Lasiurus cinereus) on their migration through the mountains of New Mexico. Every spring, hundreds of these expectant mothers pass through this small stream drainage on their way to birthing grounds farther east. Their annual passage was first reported here more than 30 years ago, and it is still one of the few known migration corridors in the area. My task that night was simple: catch hoary bats and snip tiny samples of fur from their thick coats, then let them continue on their way. The explanation, however, is a bit more complicated.
Coons, Stephen Joel; Gwaltney, Chad J; Hays, Ron D; Lundy, J Jason; Sloan, Jeff A; Revicki, Dennis A; Lenderking, William R; Cella, David; Basch, Ethan
2009-06-01
Patient-reported outcomes (PROs) are the consequences of disease and/or its treatment as reported by the patient. The importance of PRO measures in clinical trials for new drugs, biological agents, and devices was underscored by the release of the US Food and Drug Administration's draft guidance for industry titled "Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims." The intent of the guidance was to describe how the FDA will evaluate the appropriateness and adequacy of PRO measures used as effectiveness end points in clinical trials. In response to the expressed need of ISPOR members for further clarification of several aspects of the draft guidance, ISPOR's Health Science Policy Council created three task forces, one of which was charged with addressing the implications of the draft guidance for the collection of PRO data using electronic data capture modes of administration (ePRO). The objective of this report is to present recommendations from ISPOR's ePRO Good Research Practices Task Force regarding the evidence necessary to support the comparability, or measurement equivalence, of ePROs to the paper-based PRO measures from which they were adapted. The task force was composed of the leadership team of ISPOR's ePRO Working Group and members of another group (i.e., ePRO Consensus Development Working Group) that had already begun to develop recommendations regarding ePRO good research practices. The resulting task force membership reflected a broad array of backgrounds, perspectives, and expertise that enriched the development of this report. The prior work became the starting point for the Task Force report. A subset of the task force members became the writing team that prepared subsequent iterations of the report that were distributed to the full task force for review and feedback. In addition, review beyond the task force was sought and obtained. Along with a presentation and discussion period at an ISPOR meeting, a draft version of the full report was distributed to roughly 220 members of a reviewer group. The reviewer group comprised individuals who had responded to an emailed invitation to the full membership of ISPOR. This Task Force report reflects the extensive internal and external input received during the 16-month good research practices development process. RESULTS/RECOMMENDATIONS: An ePRO questionnaire that has been adapted from a paper-based questionnaire ought to produce data that are equivalent or superior (e.g., higher reliability) to the data produced from the original paper version. Measurement equivalence is a function of the comparability of the psychometric properties of the data obtained via the original and adapted administration mode. This comparability is driven by the amount of modification to the content and format of the original paper PRO questionnaire required during the migration process. The magnitude of a particular modification is defined with reference to its potential effect on the content, meaning, or interpretation of the measure's items and/or scales. Based on the magnitude of the modification, evidence for measurement equivalence can be generated through combinations of the following: cognitive debriefing/testing, usability testing, equivalence testing, or, if substantial modifications have been made, full psychometric testing. 
As long as only minor modifications were made to the measure during the migration process, a substantial body of existing evidence suggests that the psychometric properties of the original measure will still hold for the ePRO version. Hence, an evaluation limited to cognitive debriefing and usability testing may be sufficient. However, where more substantive changes have occurred in the migration process, it is necessary to confirm that the adaptation to the ePRO format did not introduce significant response bias and that the two modes of administration produce essentially equivalent results. Recommendations regarding the study designs and statistical approaches for assessing measurement equivalence are provided. The electronic administration of PRO measures offers many advantages over paper administration. We provide a general framework for decisions regarding the level of evidence needed to support modifications that are made to PRO measures when they are migrated from paper to ePRO devices. The key issues include: 1) the determination of the extent of modification required to administer the PRO on the ePRO device and 2) the selection and implementation of an effective strategy for testing the measurement equivalence of the two modes of administration. We hope that these good research practice recommendations provide a path forward for researchers interested in migrating PRO measures to electronic data collection platforms.
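As a purely illustrative sketch, and not the Task Force's prescribed method, a paired two one-sided tests (TOST) check of paper versus ePRO scores (margin and data hypothetical) might look like:

import numpy as np
from scipy import stats

def paired_tost(paper, epro, margin):
    # Two one-sided tests on paired score differences: equivalence within
    # +/- margin is supported when both one-sided p-values are small.
    d = np.asarray(epro, float) - np.asarray(paper, float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    p_lower = 1 - stats.t.cdf((d.mean() + margin) / se, n - 1)  # H0: diff <= -margin
    p_upper = stats.t.cdf((d.mean() - margin) / se, n - 1)      # H0: diff >= +margin
    return max(p_lower, p_upper)          # overall TOST p-value

rng = np.random.default_rng(1)
paper = rng.normal(50, 10, 60)            # simulated paper scores
epro = paper + rng.normal(0, 2, 60)       # near-identical ePRO mode
print(paired_tost(paper, epro, margin=5)) # small p => equivalent within +/-5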
HIGH PRESSURE COAL COMBUSTION KINETICS PROJECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chris Guenther; Bill Rogers
2001-09-15
The HPCCK project was initiated with a kickoff meeting held on June 12, 2001 in Morgantown, WV, which was attended by all project participants. SRI's existing g-RCFR reactor was reconfigured to a SRT-RCFR geometry (Task 1.1). This new design is suitable for performing the NBFZ experiments of Task 1.2. It was decided that the SRT-RCFR apparatus could be modified and used for the HPBO experiments. The purchase, assembly, and testing of required instrumentation and hardware is nearly complete (Tasks 1.1 and 1.2). Initial samples of PBR coal have been shipped from FWC to SRI (Task 1.1). The ECT device for coal flow measurements used at FWC will not be used in the SRI apparatus, and a screw-type feeder has been suggested instead (Task 5.1). NEA has completed an upgrade of an existing Fluent simulator for SRI's RCFR to a version that is suitable for interpreting results from tests in the NBFZ configuration (Task 1.3); this upgrade includes finite-rate submodels for devolatilization, secondary volatiles pyrolysis, volatiles combustion, and char oxidation. Plans for an enhanced version of CBK have been discussed and development of this enhanced version has begun (Task 2.5). A developmental framework for implementing pressure and oxygen effects in an ash formation model (Task 3.3) has also been started.
Data Telemetry and Acquisition System for Acoustic Signal Processing Investigations.
1996-02-20
were VME-based computer systems operating under the VxWorks real-time operating system. Each system shared a common hardware and software... real-time operating system. It interfaces to the Berg PCM Decommutator board, which searches for the embedded synchronization word in the data and re...software were built on top of this architecture. The multi-tasking, message queue and memory management facilities of the VxWorks real-time operating system are
Dynamic I/O Power Management for Hard Real-Time Systems
2005-01-01
recently emerged as an attractive alternative to inflexible hardware solutions. DPM for hard real-time systems has received relatively little attention...In particular, energy-driven I/O device scheduling for real-time systems has not been considered before. We present the first online DPM algorithm...which we call Low Energy Device Scheduler (LEDES), for hard real-time systems. LEDES takes as inputs a predetermined task schedule and a device-usage
Big Data in the Earth Observing System Data and Information System
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Baynes, Katie; McInerney, Mark
2016-01-01
Approaches that are being pursued for the Earth Observing System Data and Information System (EOSDIS) data system to address the challenges of Big Data were presented to the NASA Big Data Task Force. Cloud prototypes are underway to tackle the volume challenge of Big Data. However, advances in computer hardware or cloud won't help (much) with variety. Rather, interoperability standards, conventions, and community engagement are the key to addressing variety.
1990-02-01
human-to-human communication patterns during situation assessment and cooperative problem solving tasks. The research proposed for the second URRP year...Hardware development. In order to create an environment within which to study multi-channeled human-to-human communication, a multi-media observation...that machine-to-human communication can be used to increase cohesion between humans and intelligent machines and to promote human-machine team
2012-09-01
Maintenance activities, as this will allow new methods and operational changes to be made if necessary (i.e., more downtime than originally planned or...increased complexity of military hardware, both new systems and their integration with legacy systems, requires a correspondingly increased expertise in...available. Little of that added weight involves weapons or armor, which actually are becoming lighter as new technologies and composites are utilized (Task
Implementation of Custom Colors in the DECwindows Environment
1992-01-01
Stephanie A. Myrick, Maura C...
This paper describes the implementation of user-defined, or custom, colors in the DECwindows environment. Custom colors can be used to augment the standard color set that is associated with the hardware colormap. The custom color set that is included in this paper
Toward the MIL-STD and MIL-HDBK for Project Support Environment Interfaces
1992-11-01
acquisition and budget process in the past has taken a long time to field new standard computers, so long that the produced technology is often old...compared to commercial technology . The obvious logistics benefits associated with standard hardware are offset by the inability to field current... technology area. The Standard and Handbook Writing Team is tasked with actually writing the draft military standard and handbook. 3. PSESWG STANDARD The
SMARBot: a modular miniature mobile robot platform
NASA Astrophysics Data System (ADS)
Meng, Yan; Johnson, Kerry; Simms, Brian; Conforth, Matthew
2008-04-01
Miniature robots have many advantages over their larger counterparts, such as low cost, low power consumption, and the ease of building large-scale teams for complex tasks. Heterogeneous teams of miniature robots can provide powerful situation-awareness capability thanks to their different locomotion capabilities and sensor information. However, it would be expensive and time consuming to develop a specific embedded system for each type of robot. In this paper, we propose a generic modular embedded system architecture called SMARbot (Stevens Modular Autonomous Robot), which consists of a set of hardware and software modules that can be configured to construct various types of robot systems. These modules include a high-performance microprocessor, a reconfigurable hardware component, wireless communication, and diverse sensor and actuator interfaces. The design of all modules in the electrical subsystem, the selection criteria for module components, and the real-time operating system are described. Some proof-of-concept experimental results are also presented.
NASA Technical Reports Server (NTRS)
Brown, Todd S.
2016-01-01
The NASA Soil Moisture Active Passive (SMAP) spacecraft was designed to use radar and radiometer measurements to produce global soil moisture measurements every 2-3 days. The SMAP spacecraft is a complicated dual-spinning design with a large 6-meter deployable mesh reflector mounted on a platform that spins at 14.6 rpm while the Guidance Navigation and Control algorithms maintain precise nadir pointing for the de-spun portion of the spacecraft. After launch in early 2015, the Guidance Navigation and Control software and hardware aboard the SMAP spacecraft underwent an intensive spacecraft checkout and commissioning period. This paper describes the activities performed by the Guidance Navigation and Control team to confirm the health and phasing of subsystem hardware and the functionality of the guidance and control modes and algorithms. The operations tasks performed, as well as anomalies that were encountered during commissioning, are explained and the results are summarized.
Distributed Software for Observations in the Near Infrared
NASA Astrophysics Data System (ADS)
Gavryusev, V.; Baffa, C.; Giani, E.
We have developed an integrated system that performs astronomical observations in near-infrared bands, operating the two-dimensional instruments ARNICA (http://helios.arcetri.astro.it:/home/idefix/Mosaic/instr/arnica/arnica.html) and LONGSP (http://helios.arcetri.astro.it:/home/idefix/Mosaic/instr/longsp/longsp.html) at the Italian National Infrared Facility. This software consists of several communicating processes, generally executed across a network, as well as on a single computer. The user interface is organized as a widget-based X11 client. The interprocess communication is provided by sockets and uses TCP/IP. The processes devoted to control of hardware (telescope and other instruments) should currently be executed on a PC dedicated to this task under DESQview/X, while all other components (user interface, tools for data analysis, etc.) can also work under UNIX. The hardware-independent part of the software is based on the Athena Widget Set and is compiled with GNU C to provide maximum portability.
Stone, John E.; Hynninen, Antti-Pekka; Phillips, James C.; Schulten, Klaus
2017-01-01
All-atom molecular dynamics simulations of biomolecules provide a powerful tool for exploring the structure and dynamics of large protein complexes within realistic cellular environments. Unfortunately, such simulations are extremely demanding in terms of their computational requirements, and they present many challenges in terms of preparation, simulation methodology, and analysis and visualization of results. We describe our early experiences porting the popular molecular dynamics simulation program NAMD and the simulation preparation, analysis, and visualization tool VMD to GPU-accelerated OpenPOWER hardware platforms. We report our experiences with compiler-provided autovectorization and compare with hand-coded vector intrinsics for the POWER8 CPU. We explore the performance benefits obtained from unique POWER8 architectural features such as 8-way SMT and its value for particular molecular modeling tasks. Finally, we evaluate the performance of several GPU-accelerated molecular modeling kernels and relate them to other hardware platforms. PMID:29202130
TES: A modular systems approach to expert system development for real-time space applications
NASA Technical Reports Server (NTRS)
Cacace, Ralph; England, Brenda
1988-01-01
A major goal of the Space Station era is to reduce reliance on support from ground based experts. The development of software programs using expert systems technology is one means of reaching this goal without requiring crew members to become intimately familiar with the many complex spacecraft subsystems. Development of an expert systems program requires a validation of the software with actual flight hardware. By combining accurate hardware and software modelling techniques with a modular systems approach to expert systems development, the validation of these software programs can be successfully completed with minimum risk and effort. The TIMES Expert System (TES) is an application that monitors and evaluates real time data to perform fault detection and fault isolation tasks as they would otherwise be carried out by a knowledgeable designer. The development process and primary features of TES, a modular systems approach, and the lessons learned are discussed.
Development of on line automatic separation device for apple and sleeve
NASA Astrophysics Data System (ADS)
Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang
2018-04-01
Based on an STM32F407 single-chip microcomputer as the control core, an automatic separation device for fruit sleeves is designed. The design consists of hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector and other structures. The software system, built in the Visual C++ development environment, locates and recognizes the fruit sleeve using image processing and machine vision, and drives the manipulator to grasp the foam net sleeve, transfer it, and place it at the designated position. Tests show that the automatic separation device responds quickly, achieves a high separation success rate, and can separate the apple from its plastic foam sleeve, laying the foundation for further study and for applying the approach on enterprise production lines.
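The abstract gives no image-processing detail; a deliberately crude localization step (thresholds and scene assumptions entirely hypothetical) conveys the idea:

import numpy as np

def locate_sleeve(rgb):
    # Assume, hypothetically, that foam-sleeve pixels are bright and nearly
    # colorless; return the mask centroid as the manipulator's grasp target.
    rgb = rgb.astype(float)
    brightness = rgb.mean(axis=2)
    saturation = rgb.max(axis=2) - rgb.min(axis=2)
    mask = (brightness > 180) & (saturation < 30)   # tunable thresholds
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                  # no sleeve in view
    return xs.mean(), ys.mean()

# toy usage: a dark frame with one bright, neutral 20x20 patch
frame = np.full((100, 100, 3), 40, np.uint8)
frame[30:50, 60:80] = 220
print(locate_sleeve(frame))               # roughly (69.5, 39.5)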
Usability: Human Research Program - Space Human Factors and Habitability
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Holden, Kritina L.
2009-01-01
The Usability project addresses the need for research in the area of metrics and methodologies used in hardware and software usability testing in order to define quantifiable and verifiable usability requirements. A usability test is a human-in-the-loop evaluation where a participant works through a realistic set of representative tasks using the hardware/software under investigation. The purpose of this research is to define metrics and methodologies for measuring and verifying usability in the aerospace domain in accordance with FY09 focus on errors, consistency, and mobility/maneuverability. Usability metrics must be predictive of success with the interfaces, must be easy to obtain and/or calculate, and must meet the intent of current Human Systems Integration Requirements (HSIR). Methodologies must work within the constraints of the aerospace domain, be cost and time efficient, and be able to be applied without extensive specialized training.
4273π: bioinformatics education on low cost ARM hardware.
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
2013-08-12
Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.
Intelligent software for laboratory automation.
Whelan, Ken E; King, Ross D
2004-09-01
The automation of laboratory techniques has greatly increased the number of experiments that can be carried out in the chemical and biological sciences. Until recently, this automation has focused primarily on improving hardware. Here we argue that future advances will concentrate on intelligent software to integrate physical experimentation and results analysis with hypothesis formulation and experiment planning. To illustrate our thesis, we describe the 'Robot Scientist' - the first physically implemented example of such a closed loop system. In the Robot Scientist, experimentation is performed by a laboratory robot, hypotheses concerning the results are generated by machine learning and experiments are allocated and selected by a combination of techniques derived from artificial intelligence research. The performance of the Robot Scientist has been evaluated by a rediscovery task based on yeast functional genomics. The Robot Scientist is proof that the integration of programmable laboratory hardware and intelligent software can be used to develop increasingly automated laboratories.
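The Robot Scientist's actual machinery (laboratory robotics plus machine learning) is far richer; the closed loop itself can be conveyed by a toy in which hypotheses are winnowed by repeatedly choosing the most discriminating experiment:

def closed_loop(candidates, oracle, budget):
    # Toy hypothesis-driven discovery: hypotheses are candidate values,
    # experiments are threshold queries, and each cycle runs the query
    # that splits the surviving hypotheses most evenly.
    alive = set(candidates)
    for _ in range(budget):
        if len(alive) <= 1:
            break
        def balance(t):                    # how evenly "value < t?" splits alive
            below = sum(1 for h in alive if h < t)
            return min(below, len(alive) - below)
        t = max(alive, key=balance)        # most informative experiment
        result = oracle(t)                 # perform the experiment
        alive = {h for h in alive if (h < t) == result}
    return alive

true_value = 37                            # ground truth hidden in the oracle
print(closed_loop(range(100), lambda t: true_value < t, budget=10))  # {37}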
Robotic control and inspection verification
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1991-01-01
Three areas of possible commercialization involving robots at the Kennedy Space Center (KSC) are discussed: a six degree-of-freedom target tracking system for remote umbilical operations; an intelligent torque-sensing end effector for operating hand valves in hazardous locations; and an automatic radiator inspection device, a 13 by 65 foot robotic mechanism involving completely redundant motors, drives, and controls. Aspects of the first two innovations can be integrated to enable robots or teleoperators to perform tasks involving orientation and panel actuation operations with existing technology, rather than waiting for telerobots to incorporate artificial intelligence (AI) to perform 'smart' autonomous operations. The third robot involves the application of complete control hardware redundancy to enable performance of work over and near expensive Space Shuttle hardware. The consumer marketplace may wish to explore commercialization of similar component redundancy techniques for applications where a robot would not normally be used because of reliability concerns.
EVA Development and Verification Testing at NASA's Neutral Buoyancy Laboratory
NASA Technical Reports Server (NTRS)
Jairala, Juniper; Durkin, Robert
2012-01-01
As an early step in preparing for future EVAs, astronauts perform neutral buoyancy testing to develop and verify EVA hardware and operations. To date, neutral buoyancy demonstrations at NASA JSC's Sonny Carter Training Facility have primarily evaluated assembly and maintenance tasks associated with several elements of the ISS. With the retirement of the Space Shuttle, completion of ISS assembly, and introduction of commercial participants for human transportation into space, evaluations at the NBL will take on a new focus. In this session, Juniper Jairala briefly discussed the design of the NBL and, in more detail, described the requirements and process for performing a neutral buoyancy test, including typical hardware and support equipment requirements, personnel and administrative resource requirements, examples of ISS systems and operations that are evaluated, and typical operational objectives that are evaluated. Robert Durkin discussed the new and potential types of uses for the NBL, including those by non-NASA external customers.
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
With the increasing complexity of VLSI designs, especially SoC (System on Chip) implementations of MPEG-2 video decoders with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove to be very slow and costly, and it is difficult to perform full verification until late in the design process. These steps therefore become the bottleneck of HDTV video decoder design and strongly influence its time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to check and correct errors in the early design stage, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system can be achieved by employing several introduced approaches. Those approaches speed up the simulation and verification task without decreasing performance.
Engineering of the LISA Pathfinder mission—making the experiment a practical reality
NASA Astrophysics Data System (ADS)
Warren, Carl; Dunbar, Neil; Backler, Mike
2009-05-01
LISA Pathfinder represents a unique challenge in the development of scientific spacecraft—not only is the LISA Test Package (LTP) payload a complex integrated development, placing stringent requirements on its developers and the spacecraft, but the payload also acts as the core sensor and actuator for the spacecraft, making the tasks of control design, software development and system verification unusually difficult. The micro-propulsion system which provides the remaining actuation also presents substantial development and verification challenges. As the mission approaches the system critical design review, flight hardware is completing verification and the process of verification using software and hardware simulators and test benches is underway. Preparation for operations has started, but critical milestones for LTP and field effect electric propulsion (FEEP) lie ahead. This paper summarizes the status of the present development and outlines the key challenges that must be overcome on the way to launch.
Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility
NASA Technical Reports Server (NTRS)
Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer
2009-01-01
Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90's. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).
Mills, P.C.
1993-01-01
The U.S. Geological Survey investigated contaminant migration in the Galena-Platteville aquifer at the Parson's Casket Hardware site in Belvidere, Ill. This report presents the results of the first phase of the investigation, from August through December 1990. A packer assembly was used to isolate various depth intervals in three 150-foot-deep boreholes in the dolomite aquifer. Aquifer-test data include vertical distributions of vertical hydraulic gradient, horizontal hydraulic conductivity (K), and response of water levels in observation wells to borehole pumping. Water-quality data include vertical distributions of field-measured properties and laboratory determinations of concentrations of volatile organic compounds (VOC's). Vertical hydraulic gradients in the aquifer were downward. The downward gradients ranged from less than 0.01 to 0.37 foot/foot. The largest gradient was associated with an elevated-K interval at 115 to 125 feet below land surface. The hydraulic characteristics of strata within the aquifer seem to be generally consistent across the site. The strata can be subdivided into five hydraulic units with the following approximate depth ranges and K's: (1) a 1- to 5-foot-thick weathered surface at about 35 feet below land surface, 1-200 ft/d (feet per day); (2) 35-80 feet, 0.05-0.5 ft/d; (3) 80-115 feet, 0.5 ft/d; (4) 115-125 feet, 0.5-10 ft/d; and (5) 125-150 feet, 0.5 ft/d. Water-level drawdowns were detected in one shallow bedrock observation well during pumping of some of the packed intervals in a nearby borehole, indicating that the degree of vertical connection between some intervals in the aquifer may be greater than that between others. During development pumping of one borehole, drawdowns were detected in a nearby well screened in the lower part of the overlying glacial-drift deposits, indicating hydraulic connection between the glacial-drift aquifer and the bedrock aquifer. VOC's were detected throughout the upper half (about 150 feet) of the bedrock aquifer beneath the site. The detected compounds were predominantly chlorinated ethenes and ethanes (the maximum concentration was 570 ppb (parts per billion) of trichloroethylene). There was a positive correlation between concentrations of VOC's, specific conductance, and K. The distribution of VOC concentrations indicates that the low-K dolomite beds in the Galena-Platteville aquifer may impede the downward migration of the VOC's and that the high-K beds and fissures may provide pathways for the lateral migration of VOC's through the aquifer. Contaminant migration is possibly affected by ground-water flow through vertical fractures that connect shallow beds with deeper beds in the aquifer, thus explaining the detections of some VOC species at intermittent depths.
NASA Technical Reports Server (NTRS)
Redhed, D. D.; Tripp, L. L.; Kawaguchi, A. S.; Miller, R. E., Jr.
1973-01-01
The IPAD implementation plan presented here proposes a three-phase development of the IPAD system and technical modules, and the transfer of this capability from the development environment to the aerospace vehicle design environment. The system and technical module capabilities for each phase of development are described. The system and technical module programming languages are recommended, as is the initial host computer system hardware and operating system. The cost of developing the IPAD technology is estimated. A schedule displaying the flowtime required for each development task is given. A PERT chart gives the developmental relationships of each of the tasks, and an estimate of the operational cost of the IPAD system is offered.
Barista: A Framework for Concurrent Speech Processing by USC-SAIL
Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.
2016-01-01
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047
The implementation and use of Ada on distributed systems with reliability requirements
NASA Technical Reports Server (NTRS)
Reynolds, P. F.; Knight, J. C.; Urquhart, J. I. A.
1983-01-01
The issues involved in the use of the programming language Ada on distributed systems are discussed. The effects of Ada programs on hardware failures such as loss of a processor are emphasized. It is shown that many Ada language elements are not well suited to this environment. Processor failure can easily lead to difficulties on those processors which remain. As an example, the calling task in a rendezvous may be suspended forever if the processor executing the serving task fails. A mechanism for detecting failure is proposed and changes to the Ada run time support system are suggested which avoid most of the difficulties. Ada program structures are defined which allow programs to reconfigure and continue to provide service following processor failure.
Automated Sequence Processor: Something Old, Something New
NASA Technical Reports Server (NTRS)
Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry
2012-01-01
High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and these scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process; a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP checks that commands are non-interactive, processes them through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours a day, 7 days a week.
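A schematic of that check-simulate-package flow (the tool names and the interactive-opcode set below are hypothetical, not ASP's real interfaces):

import subprocess

INTERACTIVE_OPCODES = {"PROMPT", "WAIT_CREW"}      # hypothetical marker set

def process_commands(command_file):
    # Reject anything interactive, run the file through a command
    # simulator, and package the product only if the simulation is clean.
    commands = open(command_file).read().splitlines()
    if any(c.split()[0] in INTERACTIVE_OPCODES for c in commands if c):
        raise ValueError("interactive command found; manual review required")
    sim = subprocess.run(["cmd_simulator", command_file],   # hypothetical tool
                         capture_output=True, text=True)
    if sim.returncode != 0:
        raise RuntimeError("simulation errors:\n" + sim.stderr)
    subprocess.run(["package_uplink", command_file], check=True)  # hypothetical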
NASA Technical Reports Server (NTRS)
Wolfgang, R.; Natarajan, T.; Day, J.
1987-01-01
A feedback control system, called an auxiliary array switch, was designed to connect or disconnect auxiliary solar panel segments from a spacecraft electrical bus to meet fluctuating demand for power. A simulation of the control system was used to carry out a number of design and analysis tasks that could not economically be performed with a breadboard of the hardware. These tasks included: (1) the diagnosis of a stability problem, (2) identification of parameters to which the performance of the control system was particularly sensitive, (3) verification that the response of the control system to anticipated fluctuations in the electrical load of the spacecraft was satisfactory, and (4) specification of limitations on the frequency and amplitude of the load fluctuations.
Scalable cluster administration - Chiba City I approach and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, J. P.; Evard, R.; Nurmi, D.
2002-07-01
Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations from that approach.
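Chiba City I predates today's tooling, but the "same activity hundreds of times" pattern it automates can be illustrated with a minimal parallel-ssh sketch (node names hypothetical):

import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES = [f"ccn{i}" for i in range(1, 315)]   # hypothetical names, 314 nodes

def run_everywhere(command, nodes=NODES, timeout=60):
    # Run one administrative command on every node in parallel and
    # collect (node, exit code, output) triples for review.
    def run(node):
        r = subprocess.run(["ssh", node, command],
                           capture_output=True, text=True, timeout=timeout)
        return node, r.returncode, r.stdout.strip()
    with ThreadPoolExecutor(max_workers=32) as pool:
        return list(pool.map(run, nodes))

# usage: report kernel versions across the cluster
# for node, rc, out in run_everywhere("uname -r"): print(node, rc, out)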
Design tools for complex dynamic security systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrne, Raymond Harry; Rigdon, James Brian; Rohrer, Brandon Robinson
2007-01-01
The development of tools for complex dynamic security systems is not a straightforward engineering task but, rather, a scientific task where discovery of new scientific principles and math is necessary. For years, scientists have observed complex behavior but have had difficulty understanding it. Prominent examples include: insect colony organization, the stock market, molecular interactions, fractals, and emergent behavior. Engineering such systems will be an even greater challenge. This report explores four tools for engineered complex dynamic security systems: Partially Observable Markov Decision Processes, Percolation Theory, Graph Theory, and Exergy/Entropy Theory. Additionally, enabling hardware technologies for next-generation security systems are described: a 100-node wireless sensor network, an unmanned ground vehicle and an unmanned aerial vehicle.
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
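The PN processor is conceived as hardware; a toy discrete analogue of one positive-negative field update (kernels and constants illustrative only, assuming numpy and scipy) is:

import numpy as np
from scipy.signal import convolve2d

def pn_step(state, stimulus, w_pos, w_neg, decay=0.9):
    # One update of a toy positive-negative neural field: decayed previous
    # activity (short-term memory) plus the stimulus filtered by an
    # excitatory kernel minus an inhibitory one (a spatial filter).
    drive = (convolve2d(stimulus, w_pos, mode="same")
             - convolve2d(stimulus, w_neg, mode="same"))
    return np.clip(decay * state + (1 - decay) * drive, 0.0, 1.0)

# center-surround kernels: small excitatory center, broad inhibitory surround
w_pos = np.ones((3, 3)) / 9.0
w_neg = np.ones((7, 7)) / 49.0
state = np.zeros((64, 64))
frame = np.random.default_rng(2).random((64, 64))
for _ in range(5):                         # iterating gives temporal smoothing
    state = pn_step(state, frame, w_pos, w_neg)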
Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Ma, X; Singh, K
2008-10-09
With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques, first to generate performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
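The paper's models and features are not reproduced in the abstract; the core idea, fit a cost model from past runs and then schedule with predicted costs, can be sketched (feature choice hypothetical) as:

import numpy as np

def fit_cost_model(features, runtimes):
    # Least-squares runtime model from past task executions;
    # features is an (n, k) matrix (e.g., input sizes), runtimes in seconds.
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, runtimes, rcond=None)
    return lambda f: float(np.concatenate(([1.0], np.atleast_1d(f))) @ coef)

def schedule(tasks, predict, n_workers):
    # Longest-predicted-first assignment to the least-loaded worker.
    loads = [0.0] * n_workers
    plan = [[] for _ in range(n_workers)]
    for t in sorted(tasks, key=predict, reverse=True):
        w = min(range(n_workers), key=loads.__getitem__)
        loads[w] += predict(t)
        plan[w].append(t)
    return plan

# toy usage: runtime roughly linear in input size
predict = fit_cost_model(np.array([[10], [20], [40], [80]]),
                         np.array([1.1, 2.0, 4.2, 7.9]))
print(schedule([5, 60, 25, 70, 15], lambda s: predict([s]), n_workers=2))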
The NSF Earthscope USArray Instrumentation Network
NASA Astrophysics Data System (ADS)
Davis, G. A.; Vernon, F.
2012-12-01
Since 2004, the Transportable Array component of the USArray Instrumentation Network has collected high resolution seismic data in near real-time from over 400 geographically distributed seismic stations. The deployed footprint of the array has steadily migrated across the continental United States, starting on the west coast and gradually moving eastward. As the network footprint shifts, stations from various regional seismic networks have been incorporated into the dataset. In 2009, an infrasound and barometric sensor component was added to existing core stations and to all new deployments. The ongoing success of the project can be attributed to a number of factors, including reliable communications to each site, on-site data buffering, largely homogenous data logging hardware, and a common phase-locked time reference between all stations. Continuous data quality is ensured by thorough human and automated review of data from the primary sensors and over 24 state-of-health parameters from each station. The staff at the Array Network Facility have developed a number of tools to visualize data and troubleshoot problematic stations remotely. In the event of an emergency or maintenance on the server hardware, data acquisition can be shifted to alternate data centers through the use of virtualization technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software were tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, 11.1 and 12.0 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software were tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 12.0 and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12. Conversions of the rest of the TSPA models were also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
Guidi, Luiz G; Mattley, Jane; Martinez-Garay, Isabel; Monaco, Anthony P; Linden, Jennifer F; Velayos-Baeza, Antonio
2017-01-01
Developmental dyslexia is a neurodevelopmental disorder, caused by genetic and non-genetic factors, that affects reading ability. Amongst the susceptibility genes identified to date, KIAA0319 is a prime candidate. RNA-interference experiments in rats suggested its involvement in cortical migration, but we could not confirm these findings in Kiaa0319-mutant mice. Given that its homologous gene Kiaa0319L (AU040320) has also been proposed to play a role in neuronal migration, we interrogated whether absence of AU040320 alone or together with KIAA0319 affects migration in the developing brain. Analyses of AU040320 and double Kiaa0319;AU040320 knockouts (dKO) revealed no evidence for impaired cortical lamination, neuronal migration, neurogenesis or other anatomical abnormalities. However, dKO mice displayed an auditory deficit in a behavioral gap-in-noise detection task. In addition, recordings of click-evoked auditory brainstem responses revealed suprathreshold deficits in wave III amplitude in AU040320-KO mice, and more general deficits in dKOs. These findings suggest that absence of AU040320 disrupts firing and/or synchrony of activity in the auditory brainstem, while loss of both proteins might affect both peripheral and central auditory function. Overall, these results stand against the proposed role of KIAA0319 and AU040320 in neuronal migration and outline their relationship with deficits in the auditory system. PMID:29045729
Lam, Theodora; Yeoh, Brenda S A
2018-01-01
The distinct feminization of labour migration in Southeast Asia - particularly the migration of breadwinning mothers as domestic and care workers in gender-segmented global labour markets - has significantly altered care arrangements, gender roles and practices, and family relationships within the household. Such changes were experienced both by the migrating women and by other left-behind members of the family, particularly 'substitute' carers such as left-behind husbands. During the women's absence from the home, householding strategies have to be reformulated when migrant women-as-mothers rewrite their roles (but often not their identities) through labour migration as productive workers who contribute to the well-being of their children via financial remittances and 'long-distance mothering', while left-behind fathers and/or other family members step up to assume some of the tasks vacated by the mother. Using both quantitative and qualitative interview material with returned migrants and left-behind household members in source communities in Indonesia and the Philippines experiencing considerable pressures from labour migration, this article explores how carework is redistributed in the migrant mother's absence, and the ensuing implications for the gender roles of remaining family members, specifically left-behind fathers. It further examines how affected members of the household negotiate and respond to any changing gender ideologies brought about by the mother's migration over time.
Augmented Reality versus Virtual Reality for 3D Object Manipulation.
Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu
2018-02-01
Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively, and for most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.
Skylab task and work performance /Experiment M-151 - Time and motion study/
NASA Technical Reports Server (NTRS)
Kubis, J. F.; Mclaughlin, E. J.
1975-01-01
The primary objective of Experiment M151 was to study the inflight adaptation of Skylab crewmen to a variety of task situations involving different types of activity. A parallel objective was to examine astronaut inflight performance for any behavioral stress effects associated with the working and living conditions of the Skylab environment. Training data provided the basis for comparison of preflight and inflight performance. Efficiency was evaluated through the adaptation function, namely, the relation of performance time over task trials. The results indicate that the initial changeover from preflight to inflight was accompanied by a substantial increase in performance time for most work and task activities. Equally important was the finding that crewmen adjusted rapidly to the weightless environment and became proficient in developing techniques with which to optimize task performance. By the end of the second inflight trial, most of the activities were performed almost as efficiently as on the last preflight trial. The analysis demonstrated the sensitivity of the adaptation function to differences in task and hardware configurations. The function was found to be more regular and less variable inflight than preflight. Translation and control of masses were accomplished easily and efficiently through the rapid development of the arms and legs as subtle guidance and restraint systems.
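As an aside on the adaptation function mentioned above: one common way to model performance time over task trials is a power-law learning curve T(n) = a * n^(-b), whose exponent can be recovered by a straight-line fit in log-log space. The sketch below uses invented trial times, not Skylab data, purely to make the idea concrete.

```python
import numpy as np

# Invented trial times (minutes) for one task over six trials -- not Skylab data.
trials = np.arange(1, 7)
times = np.array([9.8, 7.9, 6.9, 6.3, 5.9, 5.7])

# Fit T(n) = a * n**(-b): log T = log a - b * log n, a line in log-log space.
slope, log_a = np.polyfit(np.log(trials), np.log(times), 1)
a, b = np.exp(log_a), -slope

print(f"adaptation function: T(n) = {a:.2f} * n^(-{b:.2f})")
# Fitting separate curves to preflight and inflight trials would quantify
# the adaptation differences the experiment examined.
```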
Functional Contour-following via Haptic Perception and Reinforcement Learning.
Hellman, Randall B; Tekin, Cem; van der Schaar, Mihaela; Santos, Veronica J
2018-01-01
Many tasks involve the fine manipulation of objects despite limited visual feedback. In such scenarios, tactile and proprioceptive feedback can be leveraged for task completion. We present an approach for real-time haptic perception and decision-making for a haptics-driven, functional contour-following task: the closure of a ziplock bag. This task is challenging for robots because the bag is deformable, transparent, and visually occluded by artificial fingertip sensors that are also compliant. A deep neural net classifier was trained to estimate the state of a zipper within a robot's pinch grasp. A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards by balancing exploration versus exploitation of the state-action space. The C-MAB learner outperformed a benchmark Q-learner by more efficiently exploring the state-action space while learning a hard-to-code task. The learned C-MAB policy was tested with novel ziplock bag scenarios and contours (wire, rope). Importantly, this work contributes to the development of reinforcement learning approaches that account for limited resources such as hardware life and researcher time. As robots are used to perform complex, physically interactive tasks in unstructured or unmodeled environments, it becomes important to develop methods that enable efficient and effective learning with physical testbeds.
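As a rough sketch of the learning scheme named above (not the authors' implementation): a contextual multi-armed bandit keeps a running value estimate per (context, action) pair and trades exploration against exploitation, here with a simple epsilon-greedy rule. The context and action names below are invented stand-ins for the paper's zipper-state estimates and fingertip motion primitives.

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Epsilon-greedy contextual multi-armed bandit (simplified sketch)."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context, action) -> pulls
        self.values = defaultdict(float)  # (context, action) -> mean reward

    def select(self, context):
        if random.random() < self.epsilon:        # explore
            return random.choice(self.actions)
        return max(self.actions,                  # exploit best known action
                   key=lambda a: self.values[(context, a)])

    def update(self, context, action, reward):
        key = (context, action)
        self.counts[key] += 1
        # Incremental update of the running mean reward.
        self.values[key] += (reward - self.values[key]) / self.counts[key]

# Hypothetical usage: context = discretized zipper state, reward = progress.
bandit = ContextualBandit(actions=["slide", "pinch_tighter", "reorient"])
ctx = "zipper_aligned"
act = bandit.select(ctx)
bandit.update(ctx, act, reward=1.0)
```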
Supply chain optimization: a practitioner's perspective on the next logistics breakthrough.
Schlegel, G L
2000-08-01
The objective of this paper is to profile a practitioner's perspective on supply chain optimization and highlight the critical elements of this potential new logistics breakthrough idea. The introduction will briefly describe the existing distribution network and business environment, including operational statistics, manufacturing software, and hardware configurations. The first segment will cover the critical success factors, or foundation elements, that are prerequisites for success. The second segment will give a glimpse of a "working game plan" for successful migration to supply chain optimization. The final segment will briefly profile the "bottom-line" benefits to be derived from the use of supply chain optimization as a strategy, tactical tool, and competitive advantage.
Baseline experiments in teleoperator control
NASA Technical Reports Server (NTRS)
Hankins, W. W., III; Mixon, R. W.
1986-01-01
Studies have been conducted at the NASA Langley Research Center (LaRC) to establish baseline human teleoperator interface data and to assess the influence of some of the interface parameters on human performance in teleoperation. As baseline data, the results will be used to assess future interface improvements resulting from this research in basic teleoperator human factors. In addition, the data have been used to validate LaRC's basic teleoperator hardware setup and to compare initial teleoperator study results. Four subjects controlled a modified industrial manipulator to perform a simple task involving both high and low precision. Two different schemes for controlling the manipulator were studied along with both direct and indirect viewing of the task. Performance of the task was measured as the length of time required to complete the task along with the number of errors made in the process. Analyses of variance were computed to determine the significance of the influences of each of the independent variables. Comparisons were also made between the LaRC data and data taken earlier by Grumman Aerospace Corp. at their facilities.
NASA Technical Reports Server (NTRS)
Abbott, T. S.; Moen, G. C.
1981-01-01
The weather radar cathode ray tube (CRT) is the prime candidate for presenting cockpit display of traffic information (CDTI) in current, conventionally equipped transport aircraft. Problems may result from this, since the CRT size is not optimized for CDTI applications and the CRT is not in the pilot's primary visual scan area. The impact of display size on the ability of pilots to utilize the traffic information to maintain a specified spacing interval behind a lead aircraft during an approach task was studied. The five display sizes considered are representative of the display hardware configurations of airborne weather radar systems. From a pilot's subjective workload viewpoint, even the smallest display size was usable for performing the self spacing task. From a performance viewpoint, the mean spacing values, which are indicative of how well the pilots were able to perform the task, exhibit the same trends, irrespective of display size; however, the standard deviation of the spacing intervals decreased (performance improves) as the display size increased. Display size, therefore, does have a significant effect on pilot performance.
Sensitivity of a critical tracking task to alcohol impairment
NASA Technical Reports Server (NTRS)
Tennant, J. A.; Thompson, R. R.
1973-01-01
A first order critical tracking task is evaluated for its potential to discriminate between sober and intoxicated performances. Mean differences between predrink and postdrink performances as a function of BAC are analyzed. Quantification of the results shows that intoxicated failure rates of 50% for blood alcohol concentrations (BACs) at or above 0.1%, and 75% for BACs at or above 0.14%, can be attained with no sober failure rates. A high initial rate of learning is observed, perhaps due to the very nature of the task whereby the operator is always pushed to his limit, and the scores approach a stable asymptote after approximately 50 trials. Finally, the implementation of the task as an ignition interlock system in the automobile environment is discussed. It is pointed out that lower critical performance limits are anticipated for the mechanized automotive units because of the introduction of larger hardware and neuromuscular lags. Whether such degradation in performance would reduce the effectiveness of the device or not will be determined in a continuing program involving a broader based sample of the driving population and performance correlations with both BACs and driving proficiency.
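A minimal simulation sketch of the critical-task idea described above, with all parameters invented: the controlled element dx/dt = lambda * x + u is unstable, the instability parameter lambda is ramped until control is lost, and the lambda reached at failure is the score.

```python
import random

def critical_task_trial(gain=1.2, noise=0.05, dt=0.02, limit=1.0,
                        ramp=0.02, lam0=0.5):
    """Schematic first-order critical tracking trial (illustrative only).

    The controlled element is unstable: dx/dt = lam * x + u.  lam ramps up
    until the simulated operator loses control; the lam at failure is the
    critical score.
    """
    x, lam = 0.01, lam0
    while abs(x) < limit:
        u = -gain * x + random.gauss(0.0, noise)  # proportional 'operator'
        x += dt * (lam * x + u)                   # Euler step of the plant
        lam += ramp * dt                          # slowly raise difficulty
    return lam

scores = [critical_task_trial() for _ in range(50)]
print(f"mean critical lambda over 50 trials: {sum(scores)/len(scores):.2f}")
```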
Two-dimensional systolic-array architecture for pixel-level vision tasks
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; de With, Peter H. N.
2010-05-01
This paper presents ongoing work on the design of a two-dimensional (2D) systolic array for image processing, a component designed to operate on a multi-processor system-on-chip. In contrast with other 2D systolic-array architectures and many other hardware accelerators, we investigate the applicability of executing multiple tasks in a time-interleaved fashion on the Systolic Array (SA). This leads to a lower external memory bandwidth and better load balancing of the tasks on the different processing tiles. To enable the interleaving of tasks, we add a shadow-state register for fast task switching. To reduce the number of accesses to the external memory, we propose to share the communication assist between consecutive tasks. A preliminary, non-functional version of the SA has been synthesized for an XV4S25 FPGA device and yields a maximum clock frequency of 150 MHz, requiring 1,447 slices and 5 memory blocks. Mapping tasks from video content-analysis applications in the literature onto the SA yields reductions in execution time of 1-2 orders of magnitude compared to the software implementation. We conclude that the choice of an SA architecture is useful, but that a scaled version of the SA, featuring less logic, fewer processing and pipeline stages, and a consequently lower clock frequency, would be sufficient for a video-analysis system-on-chip.
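For readers unfamiliar with systolic arrays, the sketch below simulates the classic wavefront dataflow of a 2D array computing a matrix product, with one multiply-accumulate per cell per cycle and local-only data movement. It illustrates the general SA principle, not this paper's task-interleaved architecture.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an n-by-n systolic array computing A @ B.

    Each cell (i, j) holds one accumulator; the skewed input schedule feeds
    it the operand pair (A[i, k], B[k, j]) at cycle t = i + j + k.
    """
    n = A.shape[0]
    acc = np.zeros((n, n))          # one accumulator register per cell
    for t in range(3 * n - 2):      # the full wavefront needs 3n-2 cycles
        for i in range(n):
            for j in range(n):
                k = t - i - j       # which operand pair arrives this cycle
                if 0 <= k < n:
                    acc[i, j] += A[i, k] * B[k, j]
    return acc

A = np.arange(9.0).reshape(3, 3)
B = np.eye(3)
assert np.allclose(systolic_matmul(A, B), A @ B)
```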
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9 ... presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and ... sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC
NASA Technical Reports Server (NTRS)
Pauckert, R. P.
1974-01-01
The stability characteristics of the like-doublet injector were defined over the range of OME chamber pressures and mixture ratios. This was accomplished by bomb testing the injector and cavity configurations in solid wall thrust chamber hardware typical of a flight contour with fuel heated to regenerative chamber outlet temperatures. It was found that stability in the 2600-2800 Hz region depends upon injector hydraulics and on chamber acoustics.
Extravehicular activity at geosynchronous earth orbit
NASA Technical Reports Server (NTRS)
Shields, Nicholas, Jr.; Schulze, Arthur E.; Carr, Gerald P.; Pogue, William
1988-01-01
The basic contract to define the system requirements to support the Advanced Extravehicular Activity (EVA) has three phases: EVA in geosynchronous Earth orbit; EVA in lunar base operations; and EVA in manned Mars surface exploration. The three key areas to be addressed in each phase are: environmental/biomedical requirements; crew and mission requirements; and hardware requirements. The structure of the technical tasks closely follows the structure of the Advanced EVA studies for the Space Station completed in 1986.
Pressure control and analysis report: Hydrogen Thermal Test Article (HTTA)
NASA Technical Reports Server (NTRS)
1971-01-01
Tasks accomplished during the HTTA Program study period included: (1) performance of a literature review to provide system guidelines; (2) development of analytical procedures needed to predict system performance; (3) design and analysis of the HTTA pressurization system considering (a) future utilization of results in the design of a spacecraft maneuvering system propellant package, (b) ease of control and operation, (c) system safety, and (d) hardware cost; and (4) making conclusions and recommendations for systems design.
Integrating Security in Real-Time Embedded Systems
2017-04-26
(b) detect any intrusions/attacks once they occur and (c) keep the overall system safe in the event of an attack. 4. Analysis and evaluation of ... beyond), we expanded our work in both security integration and attack mechanisms, and worked on demonstrations and evaluations in hardware. Year I ... scheduling for each busy interval with the calculated arrival time window. Step 1 focuses on the problem of finding the quantity of each task
1980-06-06
classes of objects, a basic inventory of hardware can be established to provide a remote recovery capability. Although these tests were performed in ... decided to continue the operation the following day. Friday, 10 Aug 0600 Task team arrives at YD197. 0830 Dive team to lift module for inspection. The
On the writing of programming systems for spacecraft computers.
NASA Technical Reports Server (NTRS)
Mathur, F. P.; Rohr, J. A.
1972-01-01
Consideration of the systems designed to generate programs for the increasingly complex digital computers being used on board unmanned deep-space probes. Such programming systems must accommodate the special-purpose features incorporated in the hardware. The use of higher-level language facilities in the programming system can significantly simplify the task. Computers for Mariner and for the Outer Planets Grand Tour are briefly described, as well as their programming systems. Aspects of the higher level languages are considered.
A microcomputer system for clinical bacteriology: experience of 12 months' trial.
Courcol, R J; Roussel-Delvallez, M; Martin, G R
1982-01-01
A data processing system using microcomputers was developed in a hospital bacteriology laboratory processing more than 60 000 specimens yearly. The purchase price of the hardware was frs 200 000 (17 500 pounds) and the software was written by the authors. The system has been running since May 1980 without general breakdown. The present configuration allows the processing of specimens, enquiries, scientific and administrative tasks but multiprogramming and cumulative reports are not possible. PMID:7107962
Studies to design and develop improved remote manipulator systems
NASA Technical Reports Server (NTRS)
Hill, J. W.; Sword, A. J.
1973-01-01
The remote manipulator control considered is based on several levels of automatic supervision which derive manipulator commands from an analysis of sensor states and task requirements. Principal sensors are manipulator joint positions, tactile states, and currents. The tactile sensor states can be displayed visually in perspective, replicated in the operator's control handle, or perceived by the automatic supervisor. Studies are reported on control organization, operator performance, and system performance measures. Unusual hardware and software details are described.
NASA Technical Reports Server (NTRS)
Spiger, R. J.; Farrell, R. J.; Holcomb, G. A.
1982-01-01
The access schema developed to access both individual switch functions and automated or semiautomated procedures for the orbital maneuvering system and the electrical power distribution and control system is discussed, and the operation of the system is described. Feasibility tests and analyses used to define display parameters and to select applicable hardware choices for use in such a system are presented, and the results are discussed.
Combat Service Support Model Development: BRASS - TRANSLOG - Army 21
1984-07-01
throughout the system. Transitional problems may address specific hardware and related software, such as the Standard Army Ammunition System (SAAS) ... Combat Service Support Model Development: BRASS -- TRANSLOG -- Army 21. Contract Number DAAK11-84-D-0004, Task Order #1, Draft Report, July 1984 ... Armament Systems, Inc., 211 West Bel Air Avenue, P.O. Box 158, Aberdeen, MD 21001
What Scientific Applications can Benefit from Hardware Transactional Memory?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schindewolf, M; Bihari, B; Gyllenhaal, J
2012-06-04
Achieving efficient and correct synchronization of multiple threads is a difficult and error-prone task at small scale and, as we march towards extreme scale computing, will be even more challenging when the resulting application is supposed to utilize millions of cores efficiently. Transactional Memory (TM) is a promising technique to ease the burden on the programmer, but has only recently become available on commercial hardware in the new Blue Gene/Q system, and hence the real benefit for realistic applications has not yet been studied. This paper presents the first performance results of TM embedded into OpenMP on a prototype system of BG/Q and characterizes code properties that will likely lead to benefits when augmented with TM primitives. We first study the influence of thread count, environment variables and memory layout on TM performance and identify code properties that will yield performance gains with TM. Second, we evaluate the combination of OpenMP with multiple synchronization primitives on top of MPI to determine suitable task-to-thread ratios per node. Finally, we condense our findings into a set of best practices. These are applied to a Monte Carlo Benchmark and a Smoothed Particle Hydrodynamics method. In both cases an optimized TM version, executed with 64 threads on one node, outperforms a simple TM implementation. MCB with optimized TM yields a speedup of 27.45 over baseline.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions might appear as a candidate technology: although power use is higher compared with lower-power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings can be achieved for certain higher-end applications. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
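The energy argument in this abstract is simple arithmetic: energy equals power times execution time, so a faster accelerator can consume less energy per task despite a higher power draw. The numbers below are hypothetical, chosen only to make the tradeoff concrete.

```python
# Hypothetical figures (illustrative only): an encryption task on a
# low-power MCU versus an FPGA hardware accelerator.
mcu_power_mw, mcu_time_s = 30.0, 4.0      # slow but frugal
fpga_power_mw, fpga_time_s = 400.0, 0.05  # fast but power-hungry

mcu_energy_mj = mcu_power_mw * mcu_time_s      # 120 mJ per task
fpga_energy_mj = fpga_power_mw * fpga_time_s   # 20 mJ per task

# Despite drawing more than 10x the power, the FPGA finishes 80x sooner,
# so it spends 6x less energy per task -- the tradeoff the abstract argues.
print(mcu_energy_mj, fpga_energy_mj)
```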
High-Performance, Radiation-Hardened Electronics for Space Environments
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Watson, Michael D.; Frazier, Donald O.; Adams, James H.; Johnson, Michael A.; Kolawa, Elizabeth A.
2007-01-01
The Radiation Hardened Electronics for Space Environments (RHESE) project endeavors to advance the current state-of-the-art in high-performance, radiation-hardened electronics and processors, ensuring successful performance of space systems required to operate within extreme radiation and temperature environments. Because RHESE is a project within the Exploration Technology Development Program (ETDP), RHESE's primary customers will be the human and robotic missions being developed by NASA's Exploration Systems Mission Directorate (ESMD) in partial fulfillment of the Vision for Space Exploration. Benefits are also anticipated for NASA's science missions to planetary and deep-space destinations. As a technology development effort, RHESE provides a broad-scoped, full spectrum of approaches to environmentally harden space electronics, including new materials, advanced design processes, reconfigurable hardware techniques, and software modeling of the radiation environment. The RHESE sub-project tasks are: Self-Reconfigurable Electronics for Extreme Environments, Radiation Effects Predictive Modeling, Radiation Hardened Memory, Single Event Effects (SEE) Immune Reconfigurable Field Programmable Gate Array (FPGA) (SIRF), Radiation Hardening by Software, Radiation Hardened High Performance Processors (HPP), Reconfigurable Computing, Low Temperature Tolerant MEMS by Design, and Silicon-Germanium (SiGe) Integrated Electronics for Extreme Environments. These nine sub-project tasks are managed by technical leads located across five NASA field centers: Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, Langley Research Center, and Marshall Space Flight Center. Overall RHESE integrated project management responsibility resides with NASA's Marshall Space Flight Center (MSFC). Initial technology development emphasis within RHESE focuses on the hardening of Field Programmable Gate Arrays (FPGAs) and Field Programmable Analog Arrays (FPAAs) for use in reconfigurable architectures. As these component/chip-level technologies mature, the RHESE project emphasis shifts to efforts encompassing total processor hardening techniques and board-level electronic reconfiguration techniques featuring spare and interface modularity. This phased approach provides hardened FPGAs/FPAAs for early mission infusion, then migrates to hardened, board-level, high-speed processors with associated memory elements and high-density storage for the longer-duration missions encountered for Lunar Outpost and Mars Exploration occurring later in the Constellation schedule.
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
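As an illustration of the task-graph idea (a sketch under assumed device names, costs, and a simple earliest-finish-time heuristic, not the article's actual scheduler): tasks become ready when their data dependencies have finished, and each ready task is placed on the device where its estimated finish time is earliest.

```python
from collections import deque

def schedule(tasks, deps, cost):
    """Greedy list-scheduling sketch for a data-dependent task graph.

    tasks: iterable of task names; deps: task -> set of prerequisite tasks;
    cost: (task, device) -> estimated seconds on that device.
    """
    devices = {"cpu0": 0.0, "gpu0": 0.0}   # device -> busy-until time
    finish = {}                            # task -> finish time
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    ready = deque(t for t in tasks if indeg[t] == 0)
    plan = []
    while ready:
        t = ready.popleft()
        # A task may start only after all of its inputs are available.
        start_min = max((finish[d] for d in deps.get(t, ())), default=0.0)
        # Place the task where its estimated finish time is earliest.
        dev = min(devices, key=lambda d: max(devices[d], start_min) + cost(t, d))
        begin = max(devices[dev], start_min)
        devices[dev] = finish[t] = begin + cost(t, dev)
        plan.append((t, dev, begin))
        for u in tasks:                    # release newly ready successors
            if t in deps.get(u, ()):
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return plan

# Tiny diamond-shaped graph: b and c depend on a; d depends on both.
tasks = ["a", "b", "c", "d"]
deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
cost = lambda t, d: 1.0 if d == "cpu0" else 0.4  # GPU assumed 2.5x faster
print(schedule(tasks, deps, cost))
```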
Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2015-08-01
Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly with a few iterative calculations. There are three key steps in LSM: (1) calculate data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate, high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the inverse Hessian indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. We then validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and converges faster than two comparison gradient methods. It might be significant for general complex subsurface imaging.
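The heart of L-BFGS is the standard two-loop recursion, which applies an approximate inverse Hessian to the gradient using only the last m update pairs. A minimal sketch follows; in the LSM setting the iterate would be the reflectivity model and the gradient the migrated data residual (the fabricated history in the usage lines is illustrative only).

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """L-BFGS two-loop recursion.

    grad: current gradient; s_hist, y_hist: the last m iterate updates
    s_k = x_{k+1} - x_k and gradient changes y_k = g_{k+1} - g_k.
    Returns approximately -H^{-1} grad without forming the Hessian H.
    """
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):  # newest first
        rho = 1.0 / y.dot(s)
        alpha = rho * s.dot(q)
        q -= alpha * y
        alphas.append((rho, alpha))
    if s_hist:  # common initial scaling: gamma = s.y / y.y
        s, y = s_hist[-1], y_hist[-1]
        q *= s.dot(y) / y.dot(y)
    for (rho, alpha), s, y in zip(reversed(alphas), s_hist, y_hist):
        beta = rho * y.dot(q)
        q += (alpha - beta) * s
    return -q

# Usage sketch with a fabricated one-pair history:
g = np.array([2.0, -4.0])
print(lbfgs_direction(g, [np.array([0.5, 0.5])], [np.array([1.0, 1.0])]))
```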
Nakamura, Miyoko; Kolinsky, Régine
2014-12-01
We explored the functional units of speech segmentation in Japanese using dichotic presentation and a detection task requiring no intentional sublexical analysis. Indeed, illusory perception of a target word might result from preattentive migration of phonemes, morae, or syllables from one ear to the other. In Experiment 1, Japanese listeners detected targets presented in hiragana and/or kanji. Phoneme migrations did occur, suggesting that orthography-independent sublexical constituents play some role in segmentation. However, syllable and especially mora migrations were more numerous. This pattern of results was not observed in French speakers (Experiment 2), suggesting that it reflects native segmentation in Japanese. To control for the intervention of kanji representations (many words are written in kanji, and one kanji often corresponds to one syllable), in Experiment 3, Japanese listeners were presented with target loanwords that can be written only in katakana. Again, phoneme migrations occurred, while the first mora and syllable led to similar rates of illusory percepts. No migration occurred for the second, "special" mora (/J/ or /N/), probably because this constitutes the latter part of a heavy syllable. Overall, these findings suggest that multiple units, such as morae, syllables, and even phonemes, function independently of orthographic knowledge in Japanese preattentive speech segmentation.
Autonomous target tracking of UAVs based on low-power neural network hardware
NASA Astrophysics Data System (ADS)
Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe
2014-05-01
Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem™ neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and their superior performance and power advantages towards real-time, autonomous target tracking.
ISS Logistics Hardware Disposition and Metrics Validation
NASA Technical Reports Server (NTRS)
Rogers, Toneka R.
2010-01-01
I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors, and the Boeing Prime contract out of Johnson Space Center provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010, and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.
Systems Maintenance Automated Repair Tasks (SMART)
NASA Technical Reports Server (NTRS)
Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek
2010-01-01
SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be useable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between specification requirements of the hardware, and specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/ subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.
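A toy sketch of the two SMART segments described above, with invented component names and repair steps: a decision tree maps component/subcomponent/discrepancy permutations to a repair, and a repair matrix holds that repair's ordered steps, some optional depending on hardware location and configuration.

```python
# Hypothetical data layout -- names, steps, and the 'exposed' configuration
# flag are invented for illustration, not taken from SMART itself.

decision_tree = {
    ("thermal_panel", "fastener", "corrosion"): "repair_fastener_corrosion",
    ("thermal_panel", "coating", "scratch"): "repair_coating_scratch",
}

repair_matrix = {
    "repair_fastener_corrosion": [
        {"step": "remove fastener", "tools": ["torque wrench"], "optional": False},
        {"step": "treat corrosion", "tools": ["abrasive pad"], "optional": False},
        {"step": "apply sealant", "tools": ["sealant gun"], "optional": True},
    ],
}

def author_procedure(component, subcomponent, discrepancy, config=None):
    """Resolve a discrepancy to an ordered, configuration-filtered procedure."""
    repair = decision_tree[(component, subcomponent, discrepancy)]
    steps = repair_matrix[repair]
    # Optional steps are kept or dropped based on the hardware's location
    # and current configuration, as the abstract describes.
    return [s for s in steps if not s["optional"] or config == "exposed"]

print(author_procedure("thermal_panel", "fastener", "corrosion", config="exposed"))
```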
Bradby, Hannah
2014-11-01
This paper critically appraises the discourse around international medical migration at the turn of the 21st century. A critical narrative review of a range of English-language sources, including grey literature, books and research reports, traces the development and spread of specific causative models. The attribution of causative relations between the movement of skilled medical workers, the provision of health care and population health outcomes illustrates how the global reach of biomedicine has to be understood in the context of local conditions. The need to understand migration as an aspect of uneven global development, rather than a delimited issue of manpower services management, is illustrated with reference to debates about 'brain drain' of Africa's health-care professionals, task-shifting and the crisis in health-care human resources. The widespread presumed cause of shortages of skilled health-care staff in sub-Saharan Africa was overdetermined by a compelling narrative of rich countries stealing poor countries' trained health-care professionals. This narrative promotes medical professional interests and ignores historical patterns of underinvestment in health-care systems and structures. Sociological theories of medicalization suggest that the international marketization of medical recruitment is a key site where the uneven global development of capital is at work. A radical reconfiguration of medical staffing along the lines of 'task-shifting' in rich and poor countries' health-care systems alike offers one means of thinking about global equity in access to quality care. © The Author(s) 2014.