Sample records for hardware performance counters

  1. Using DMA for copying performance counter data to memory

    DOEpatents

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    A device for copying performance counter data includes a hardware path that connects a direct memory access (DMA) unit to a plurality of hardware performance counters and a memory device. Software prepares an injection packet for the DMA unit to perform the copying, leaving the software free to perform other tasks. In one aspect, the software that prepares the injection packet runs on a processing core other than the core that gathers the hardware performance counter data.

  2. Hardware support for collecting performance counters directly to memory

    DOEpatents

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.
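    The two-storage-element trigger mechanism the abstract above describes can be sketched in software. This is an illustrative model only, not the patented hardware; all names (CounterBlock, step, etc.) are invented:

```python
# Sketch of the copy-to-memory mechanism: a state machine watches a
# start-copy value and, when it is set, copies the performance counters
# to the memory location whose address sits in the first storage element.
class CounterBlock:
    def __init__(self, num_counters):
        self.counters = [0] * num_counters   # the performance counters
        self.dest_addr = 0                   # first storage element: target address
        self.copy_flag = 0                   # second storage element: start-copy value

    def step(self, memory):
        """One tick of the state machine: copy counters out when flagged."""
        if self.copy_flag:
            for i, value in enumerate(self.counters):
                memory[self.dest_addr + i] = value
            self.copy_flag = 0               # clear the flag once the copy completes

memory = {}
blk = CounterBlock(4)
blk.counters = [10, 20, 30, 40]
blk.dest_addr = 0x100
blk.copy_flag = 1
blk.step(memory)
```

    In the patent this happens in hardware, so software only needs to set the flag and read the memory region later.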

  3. Using DMA for copying performance counter data to memory

    DOEpatents

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W

    2013-12-31

    A device for copying performance counter data includes a hardware path that connects a direct memory access (DMA) unit to a plurality of hardware performance counters and a memory device. Software prepares an injection packet for the DMA unit to perform the copying, leaving the software free to perform other tasks. In one aspect, the software that prepares the injection packet runs on a processing core other than the core that gathers the hardware performance data.

  4. Hardware support for software controlled fast reconfiguration of performance counters

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W.

    2013-06-18

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores a data value representing a time interval, and a timer element reads the data value, detects expiration of the time interval based on the data value, and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.

  5. Hardware support for software controlled fast reconfiguration of performance counters

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W

    2013-09-24

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores a data value representing a time interval, and a timer element reads the data value, detects expiration of the time interval based on the data value, and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.
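    The timer-driven reconfiguration in the two records above can be sketched as follows. The round-robin choice among configuration registers is an assumption for illustration; the abstracts do not specify the selection policy:

```python
# Sketch: a timer element counts down a stored interval; on expiry its
# signal drives a state machine that installs the next counter configuration.
class Reconfigurator:
    def __init__(self, interval, configs):
        self.interval = interval     # storage element: the time interval
        self.configs = configs       # the configuration registers
        self.next_cfg = 0
        self.elapsed = 0
        self.active = configs[0]

    def tick(self):
        """Advance the timer one cycle; reconfigure when the interval expires."""
        self.elapsed += 1
        if self.elapsed >= self.interval:          # timer element fires its signal
            self.elapsed = 0
            self.next_cfg = (self.next_cfg + 1) % len(self.configs)
            self.active = self.configs[self.next_cfg]

r = Reconfigurator(3, ["cache_events", "branch_events", "fp_events"])
for _ in range(3):
    r.tick()
```

    Because the swap is done by hardware, no interrupt or software intervention is needed between counting intervals.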

  6. Hardware enabled performance counters with support for operating system context switching

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W.

    2015-06-30

    A device for supporting hardware enabled performance counters with support for context switching includes a plurality of performance counters operable to collect information associated with one or more computer system related activities, a first register operable to store a memory address, a second register operable to store a mode indication, and a state machine operable to read the second register and cause the plurality of performance counters to copy the information to the memory area indicated by the memory address, based on the mode indication.

  7. Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods

    DOE PAGES

    Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.

    2016-09-01

    Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high throughput applications. Though GPUs consume large amounts of power, their use for high throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.

  8. Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.

    Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high throughput applications. Though GPUs consume large amounts of power, their use for high throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.

  9. Hardware support for software controlled fast multiplexing of performance counters

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W

    2013-10-01

    Performance counters may be operable to collect one or more counts of one or more selected activities, and registers may be operable to store a set of performance counter configurations. A state machine may be operable to automatically select a register from the registers for reconfiguring the one or more performance counters in response to receiving a first signal. The state machine may be further operable to reconfigure the one or more performance counters based on a configuration specified in the selected register. The state machine yet further may be operable to copy data in selected one or more of the performance counters to a memory location, or to copy data from the memory location to the counters, in response to receiving a second signal. The state machine may be operable to store or restore the counter values and state machine configuration in response to a context switch event.

  10. Hardware support for software controlled fast multiplexing of performance counters

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W.

    2013-01-01

    Performance counters may be operable to collect one or more counts of one or more selected activities, and registers may be operable to store a set of performance counter configurations. A state machine may be operable to automatically select a register from the registers for reconfiguring the one or more performance counters in response to receiving a first signal. The state machine may be further operable to reconfigure the one or more performance counters based on a configuration specified in the selected register. The state machine yet further may be operable to copy data in selected one or more of the performance counters to a memory location, or to copy data from the memory location to the counters, in response to receiving a second signal. The state machine may be operable to store or restore the counter values and state machine configuration in response to a context switch event.
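    Fast multiplexing, as in the two patent records above, lets more event types be measured than there are physical counters by time-slicing configurations. The scaling step below (dividing each raw count by its residency fraction) is standard multiplexing practice, shown here as an illustrative sketch rather than a claim from the patent text:

```python
def multiplex(event_streams, slice_len):
    """event_streams: {name: per-cycle event counts}; returns scaled estimates."""
    names = list(event_streams)
    total = len(next(iter(event_streams.values())))
    raw = {n: 0 for n in names}
    resident = {n: 0 for n in names}
    for start in range(0, total, slice_len):
        active = names[(start // slice_len) % len(names)]   # rotate event sets
        window = event_streams[active][start:start + slice_len]
        raw[active] += sum(window)       # counts seen while this set was resident
        resident[active] += len(window)  # cycles this set occupied the counters
    # scale each raw count up by the inverse of its residency fraction
    return {n: raw[n] * total / resident[n] for n in names}

# Two synthetic event streams with constant rates of 1 and 2 events per cycle.
est = multiplex({"branch_miss": [1] * 8, "cache_miss": [2] * 8}, slice_len=2)
```

    Because each event set is only sampled part of the time, the estimates are exact only for steady event rates; bursty workloads benefit from the faster (hardware-driven) switching the patents describe.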

  11. The effect of fuel/air mixer design parameters on the continuous and discrete phase structure in the reaction-stabilizing region

    NASA Astrophysics Data System (ADS)

    Ateshkadi, Arash

    Current and future aero gas turbine combustors demand a greater insight into the role of the injector/dome design on combustion performance. The structure of the two-phase flow and combustion performance associated with practical injector/dome hardware is thoroughly investigated. A spray injector with two radial inflow swirlers was custom-designed to maintain tight tolerances and a strict assembly protocol to isolate the sensitivity of performance to hardware design. The custom set is a unique modular design that (1) accommodates parametric variation in geometry, (2) retains symmetry, and (3) maintains effective area. Swirl sense and the presence of a venturi were found to be the most influential on fuel distribution and Lean Blowout. The venturi acts as a fuel-prefilming surface and constrains the highest fuel mass concentration to an annular ring near the centerline. Co-swirl enhances the radial dispersion of the continuous phase, and counter-swirl increases the level of mixing that occurs in the downstream region of the mixer. The smallest drop size distributions were found to occur with the counter-swirl configuration with venturi. In the case of counter-swirl without venturi, the high concentration of fluid mass is found in the center region of the flow. The Lean Blowout (LBO) equivalence ratio was lower for counter-swirl due to the coupling of the centerline recirculation zone with the location of high fuel concentration emanating from smaller droplets. In the co-swirl configuration a more intense reaction was found near the mixer exit, leading to the lowest concentrations of NOx, CO and UHC. An LBO model with good agreement with the measured values was developed that related, for the first time, specific hardware parameters and operating conditions to stability performance. A semi-analytical model, which agreed best with co-swirl configurations, was modified and used to describe the axial velocity profile downstream of the mixer exit.
    The development of these two models exemplifies the use of mathematical expressions to guide the design and development procedure for mixer geometry that meets the stringent demands of increasing combustion performance.

  12. Hardware accuracy counters for application precision and quality feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
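    The core analysis the record above describes is inspecting a value's bits to find the narrowest datatype that represents it exactly. A simplified sketch for signed integers; the type menu and the rules are assumptions, not the patent's actual circuit:

```python
def min_int_width(value):
    """Smallest power-of-two bit width (>= 8) holding a signed integer exactly."""
    for width in (8, 16, 32, 64):
        lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
        if lo <= value <= hi:
            return width            # narrowest datatype that fits -> bump counter
    raise OverflowError(value)

width = min_int_width(200)          # 200 > 127, so int8 is not enough
```

    A hardware accuracy counter would tally how often each width suffices, giving a runtime or developer evidence for safely narrowing instruction precision.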

  13. VETA-1 x ray detection system

    NASA Technical Reports Server (NTRS)

    Podgorski, W. A.; Flanagan, Kathy A.; Freeman, Mark D.; Goddard, R. G.; Kellogg, Edwin M.; Norton, T. J.; Ouellette, J. P.; Roy, A. G.; Schwartz, Daniel A.

    1992-01-01

    The alignment and X-ray imaging performance of the Advanced X-ray Astrophysics Facility (AXAF) Verification Engineering Test Article-I (VETA-I) was measured by the VETA-I X-Ray Detection System (VXDS). The VXDS was based on the X-ray detection system utilized in the AXAF Technology Mirror Assembly (TMA) program, upgraded to meet the more stringent requirements of the VETA-I test program. The VXDS includes two types of X-ray detectors: (1) a High Resolution Imager (HRI) which provides X-ray imaging capabilities, and (2) sealed and flow proportional counters which, in conjunction with apertures of various types and precision translation stages, provide the most accurate measurement of VETA-I performance. Herein we give an overview of the VXDS hardware including X-ray detectors, translation stages, apertures, proportional counters and flow counter gas supply system and associated electronics. We also describe the installation of the VXDS into the Marshall Space Flight Center (MSFC) X-Ray Calibration Facility (XRCF). We discuss in detail the design and performance of those elements of the VXDS which have not been discussed elsewhere; translation systems, flow counter gas supply system, apertures and thermal monitoring system.

  14. Determination of performance characteristics of scientific applications on IBM Blue Gene/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, C.; Walkup, R. E.; Sachdeva, V.

    The IBM Blue Gene®/Q platform presents scientists and engineers with a rich set of hardware features such as 16 cores per chip sharing a Level 2 cache, a wide SIMD (single-instruction, multiple-data) unit, a five-dimensional torus network, and hardware support for collective operations. Especially important is the feature related to cores that have four “hardware threads,” which makes it possible to hide latencies and obtain a high fraction of the peak issue rate from each core. All of these hardware resources present unique performance-tuning opportunities on Blue Gene/Q. We provide an overview of several important applications and solvers and study them on Blue Gene/Q using performance counters and Message Passing Interface profiles. We also discuss how Blue Gene/Q tools help us understand the interaction of the application with the hardware and software layers and provide guidance for optimization. Furthermore, on the basis of our analysis, we discuss code improvement strategies targeting Blue Gene/Q. Information about how these algorithms map to the Blue Gene® architecture is expected to have an impact on future system design as we move to the exascale era.

  15. Performance Prediction Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their tasks through message exchanges to remain active, sleep, wake up, begin, and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1, L2, L3 hit rates as inputs to the tasklists.
    Explicit hit rates inevitably reflect only the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; hard-coded hit rates also make the hardware model insensitive to changes in cache sizes. Instead, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code, which realistically can be done at best for small examples.
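    The reuse distance distributions mentioned above are a standard memory-model input: for each access, count the distinct addresses touched since the previous access to the same address (a fully associative LRU cache of capacity C hits whenever the distance is below C). A textbook-definition sketch, not PPT's actual implementation:

```python
def reuse_distances(trace):
    """For each access, the number of distinct other addresses touched since
    the previous access to the same address; None marks a cold access."""
    last_seen = {}
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            # distinct addresses accessed since the last touch of `addr`
            window = set(trace[last_seen[addr] + 1:i])
            distances.append(len(window - {addr}))
        else:
            distances.append(None)       # cold (compulsory) access
        last_seen[addr] = i
    return distances

dists = reuse_distances(["a", "b", "c", "a", "b"])
```

    This quadratic version is fine for illustration; production trace analysis uses tree-based algorithms precisely because, as the abstract notes, full traces are expensive.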

  16. Introduction on performance analysis and profiling methodologies for KVM on ARM virtualization

    NASA Astrophysics Data System (ADS)

    Motakis, Antonios; Spyridakis, Alexander; Raho, Daniel

    2013-05-01

    The introduction of hardware virtualization extensions on ARM Cortex-A15 processors has enabled the implementation of full virtualization solutions for this architecture, such as KVM on ARM. This trend motivates the need to quantify and understand the performance impact that emerges from the application of this technology. In this work we start looking into some interesting performance metrics on KVM for ARM processors, which can provide us with useful insight that may lead to potential improvements in the future. This includes measurements such as interrupt latency and guest exit cost, performed on ARM Versatile Express and Samsung Exynos 5250 hardware platforms. Furthermore, we discuss additional methodologies that can provide us with a deeper understanding of the performance footprint of KVM in the future. We identify some of the most interesting approaches in this field, and perform a tentative analysis of how these may be implemented in the KVM on ARM port. These take into consideration hardware- and software-based counters for profiling, and issues related to the limitations of the simulators which are often used, such as the ARM Fast Models platform.

  17. Hardware packet pacing using a DMA in a parallel computer

    DOEpatents

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
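    The token-counter pacing in the record above can be sketched as a byte budget: the counter tracks bytes in flight from a remote get, and injection stalls once the budget is exhausted. The budget size and method names are illustrative assumptions:

```python
class Pacer:
    def __init__(self, byte_budget):
        self.in_flight = 0            # token counter: bytes currently on the network
        self.byte_budget = byte_budget

    def try_inject(self, packet_bytes):
        """Inject a packet only if it fits inside the pacing budget."""
        if self.in_flight + packet_bytes > self.byte_budget:
            return False              # pace: retry after acks drain the counter
        self.in_flight += packet_bytes
        return True

    def ack(self, packet_bytes):
        self.in_flight -= packet_bytes   # bytes left the network

p = Pacer(1500)
first = p.try_inject(1000)    # fits
second = p.try_inject(1000)   # would exceed the budget: paced
p.ack(1000)
third = p.try_inject(1000)    # fits again after the ack
```

    Doing this in the DMA hardware keeps bursts of remote-get traffic from flooding the torus network without any software in the fast path.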

  18. Using a PC as a Frequency Meter or a Counter.

    ERIC Educational Resources Information Center

    Sartori, J.; And Others

    1995-01-01

    Describes hardware that enables the use of an IBM PC microcomputer as a frequency meter or a counter by using the parallel printer port. Eliminates the 16-bit time-day counter through the use of an external time base that can be conveniently set depending on the desired frequency range. (JRH)

  19. Fast concurrent array-based stacks, queues and deques using fetch-and-increment-bounded, fetch-and-decrement-bounded and store-on-twin synchronization primitives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Gara, Alan; Heidelberger, Philip

    Implementation primitives for concurrent array-based stacks, queues, double-ended queues (deques) and wrapped deques are provided. In one aspect, each element of the stack, queue, deque or wrapped deque data structure has its own ticket lock, allowing multiple threads to concurrently use multiple elements of the data structure and thus achieving high performance. In another aspect, new synchronization primitives FetchAndIncrementBounded (Counter, Bound) and FetchAndDecrementBounded (Counter, Bound) are implemented. These primitives can be implemented in hardware and thus promise a very fast throughput for queues, stacks and double-ended queues.
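    The semantics of the FetchAndIncrementBounded(Counter, Bound) primitive named above can be sketched as follows. In hardware the whole operation is atomic; this Python version shows only the semantics, and the None failure marker is an assumption:

```python
def fetch_and_increment_bounded(state, key, bound):
    """Atomically (in hardware) fetch the counter and increment it only if it
    is below the bound; return the old value, or None if the bound is hit."""
    old = state[key]
    if old >= bound:
        return None        # bounded: increment refused, e.g. the array is full
    state[key] = old + 1
    return old             # the old value doubles as the claimed array index

slots = {"head": 0}
claimed = [fetch_and_increment_bounded(slots, "head", 2) for _ in range(3)]
```

    For an array-based queue of capacity 2, two threads claim indices 0 and 1 and the third is refused without ever over-running the array, which is exactly what makes the bounded variant useful for concurrent stacks and deques.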

  20. Novel Designs of Quantum Reversible Counters

    NASA Astrophysics Data System (ADS)

    Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang

    2016-11-01

    Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of the sequential circuit but also an essential part of the quantum circuit system. In this paper, we designed two kinds of novel reversible counters. In order to construct the counters, the innovative reversible T Flip-flop Gate (TFG), T flip-flop block (T_FF) and JK flip-flop block (JK_FF) are proposed. Based on the above blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), these counters have been modeled and confirmed. According to the simulation results, our circuits' logic structures are validated. Compared to the existing ones in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), it can be concluded that our designs perform better than the others. There is no doubt that they can be used as a kind of important storage component to be applied in future low-power computing systems.
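    The T (toggle) flip-flop underlying the record's TFG/T_FF blocks is the classic building block of binary counters. A plain, non-reversible Python sketch of the counting behavior those blocks implement (the reversible realization itself requires the gate-level constructions in the paper):

```python
def t_flipflop_counter(bits, ticks):
    """Binary up-counter built from cascaded toggle (T) flip-flops."""
    state = [0] * bits                  # least-significant bit first
    for _ in range(ticks):
        carry = 1                       # each clock pulse toggles the LSB
        for i in range(bits):
            if carry:
                state[i] ^= 1           # T flip-flop: toggle when enabled
                carry = 1 - state[i]    # ripple: carry on when the bit wrapped 1 -> 0
            else:
                break
    return state

count = t_flipflop_counter(4, 11)       # 11 clock pulses into a 4-bit counter
```

    Eleven pulses leave the counter at binary 1011, and sixteen pulses wrap it back to zero, matching the modulo-16 behavior of a 4-bit counter.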

  1. Configurable memory system and method for providing atomic counting operations in a memory device

    DOEpatents

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  2. Verification of OpenSSL version via hardware performance counters

    NASA Astrophysics Data System (ADS)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attacks in real time. Our experimental results indicated this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
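    The approach above treats a vector of hardware-event counts sampled during a handshake as a feature vector for a trained classifier. The paper uses machine learning on real counter data; in this sketch a toy nearest-centroid classifier and made-up feature vectors stand in for both:

```python
def centroid(samples):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def train(labeled):
    """labeled: {label: [feature vectors]} -> {label: centroid}"""
    return {label: centroid(vecs) for label, vecs in labeled.items()}

def classify(model, vec):
    """Assign vec to the label whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Made-up counter profiles: [branch_misses, cache_misses] per handshake.
model = train({"TLS1.2": [[120, 300], [130, 310]],
               "SSLv3":  [[400, 900], [390, 880]]})
guess = classify(model, [125, 305])
```

    A downgrade attack would shift the observed counter profile toward the older protocol's cluster, which is what makes real-time detection possible.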

  3. Optimization of analytical laboratory work using computer networking and databasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upp, D.L.; Metcalf, R.A.

    1996-06-01

    The Health Physics Analysis Laboratory (HPAL) performs around 600,000 analyses for radioactive nuclides each year at Los Alamos National Laboratory (LANL). Analysis matrices vary from nasal swipes, air filters, work area swipes, and liquids to the bottoms of shoes and cat litter. HPAL uses 8 liquid scintillation counters, 8 gas proportional counters, and 9 high purity germanium detectors in 5 laboratories to perform these analyses. HPAL has developed a computer network between the labs and software to produce analysis results. The software and hardware package includes barcode sample tracking, log-in, chain of custody, analysis calculations, analysis result printing, and utility programs. All data are written to a database, mirrored on a central server, and eventually written to CD-ROM to provide online historical results. This system has greatly reduced the work required to provide analysis results, as well as improving the quality of the work performed.

  4. RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade

    DTIC Science & Technology

    2014-09-30

    Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs) ... 0.2 DP FLOPs per CPU cycle. Experience with production science code is that it is possible to achieve execution rates in the range of 0.5 to 1.0 DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs, we see (Figure PROF) that for most of the execution time the ...

  5. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1998-11-24

    Workload characterization has been proven an essential tool for architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction; or IPC, instructions per cycle), etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints, especially on large-scale scientific computing applications. This paper presents a new technique for characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight into the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
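    The traditional metrics the abstract contrasts (CPI, cache miss ratio, FP mix) are simple ratios of raw counter values. An illustrative computation with made-up counts; the counter names are generic, not tied to any specific processor's event set:

```python
def characterize(counts):
    """Derive standard workload-characterization metrics from raw counter values."""
    return {
        "cpi": counts["cycles"] / counts["instructions"],           # cycles per instruction
        "l1_miss_ratio": counts["l1_misses"] / counts["l1_accesses"],
        "fp_fraction": counts["fp_ops"] / counts["instructions"],   # FP share of the mix
    }

m = characterize({"cycles": 4_000, "instructions": 2_000,
                  "l1_misses": 50, "l1_accesses": 1_000, "fp_ops": 500})
```

    The paper's point is that such averages alone cannot localize a bottleneck; its instruction-level parameters refine them per pipeline and functional unit.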

  6. Measuring FLOPS Using Hardware Performance Counter Technologies on LC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, D H

    2008-09-05

    FLOPS (FLoating-point Operations Per Second) is a commonly used performance metric for scientific programs that rely heavily on floating-point (FP) calculations. The metric is based on the number of FP operations rather than instructions, thereby facilitating a fair comparison between different machines. A well-known use of this metric is the LINPACK benchmark that is used to generate the Top500 list. It measures how fast a computer solves a dense N by N system of linear equations Ax=b, which requires a known number of FP operations, and reports the result in millions of FP operations per second (MFLOPS). While running a benchmark with known FP workloads can provide insightful information about the efficiency of a machine's FP pipelines in relation to other machines, measuring the FLOPS of an arbitrary scientific application in a platform-independent manner is nontrivial. The goal of this paper is twofold. First, we explore the FP microarchitectures of key processors that are underpinning the LC machines. Second, we present the hardware performance monitoring counter-based measurement techniques that a user can use to get the native FLOPS of his or her program, which are practical solutions readily available on LC platforms. By nature, however, these native FLOPS metrics are not directly comparable across different machines, mainly because FP operations are not consistent across microarchitectures. Thus, the first goal of this paper represents the base reference by which a user can interpret the measured FLOPS more judiciously.
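    Once a counter such as an FP-operation count has been read, the native-FLOPS computation the record describes reduces to a ratio. The counter values below are illustrative, not measured:

```python
def mflops(fp_op_count, elapsed_seconds):
    """Millions of floating-point operations per second from a raw FP-op count."""
    return fp_op_count / elapsed_seconds / 1e6

rate = mflops(2.5e9, 2.0)   # e.g. 2.5 billion FP ops counted over 2 seconds
```

    As the abstract cautions, the raw count itself is microarchitecture-dependent (e.g. whether a fused multiply-add counts as one operation or two), so the same ratio on two machines need not be comparable.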

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan

    We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of the architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors, the IBM Blue Gene/Q compute chip and the Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including instruction pipeline, cache, and memory, and to achieve a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.

  8. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies: the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images.
The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.

  9. PC based graphic display real-time particle beam uniformity

    NASA Technical Reports Server (NTRS)

    Huebner, M. A.; Malone, C. J.; Smith, L. S.; Soli, G. A.

    1989-01-01

    A technique has been developed to support the study of the effects of cosmic rays on integrated circuits. The system is designed to determine the particle distribution across the surface of an integrated circuit accurately while the circuit is bombarded by a particle beam. The system uses photomultiplier tubes, an octal discriminator, a computer-controlled NIM quad counter, and an IBM PC. It provides real-time operator feedback for fast beam tuning and monitors momentary fluctuations in the particle beam. The hardware, software, and system performance are described.

  10. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    DOE PAGES

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron; ...

    2017-11-21

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory, and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation, and network throttling.
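The prediction step described above — mapping easily accessible performance counters to NiMC impact — can be sketched as a toy regression. The single counter feature and the least-squares line below are illustrative assumptions; the paper's features and machine-learning models are richer.

```python
def fit_slowdown_model(counter_vals, slowdowns):
    """Least-squares line from one counter feature to observed slowdown.

    Toy stand-in for the paper's machine-learning step: one hypothetical
    counter (say, memory-controller reads per second) predicts the
    application slowdown factor.
    """
    n = len(counter_vals)
    mx = sum(counter_vals) / n
    my = sum(slowdowns) / n
    sxx = sum((x - mx) ** 2 for x in counter_vals)
    sxy = sum((x - mx) * (y - my) for x, y in zip(counter_vals, slowdowns))
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

# Fit on three (counter value, slowdown) observations.
predict = fit_slowdown_model([1.0, 2.0, 3.0], [1.1, 2.1, 3.1])
```

A model like this, trained offline, is what would let a runtime system dynamically choose among the mitigations the paper evaluates (offloading, core reservation, throttling).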

  11. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory, and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation, and network throttling.

  12. Performance Analysis of GYRO: A Tool Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, P.; Roth, P.; Candy, J.

    2005-06-26

    The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wall clock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.

  13. Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.

    PubMed

    Hung, Chung-Wen; Hsu, Wen-Ting

    2018-05-23

    Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. Advanced Encryption Standard (AES) is a block cipher algorithm commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirement for different payload lengths and AES encryption types are analyzed. These types include software-based AES-ECB, hardware-based AES-ECB (Electronic Codebook Mode), and hardware-based AES-CCM (Counter with CBC-MAC Mode). The calculation requirement and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that the hardware-based AES performs better than the software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be a better choice if the IoT device is considering security, encryption calculation requirement, and low power consumption at the same time. However, if the IoT device is pursuing lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
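As a rough sketch of why CCM costs more than ECB per packet, one can count AES core invocations per payload length. The cost model below (one block cipher call per 16-byte block for ECB; roughly two per block for CCM's CBC-MAC and counter-mode passes, plus one call to encrypt the MIC) is an invented illustration, not a measurement of the CC1310 hardware engine.

```python
import math

def aes_block_ops(payload_len, mode="ecb"):
    """Rough count of 16-byte AES core invocations per packet.

    Invented cost model for illustration only: ECB encrypts
    ceil(len/16) blocks; CCM (as in AES-CCM-MIC64) roughly doubles
    that, since each block passes through both the CBC-MAC and the
    counter-mode pass, plus one extra invocation for the MIC.
    """
    blocks = math.ceil(payload_len / 16)
    if mode == "ecb":
        return blocks
    if mode == "ccm":
        return 2 * blocks + 1
    raise ValueError("unknown mode: " + mode)
```

Under this model a sub-16-byte payload costs one core call in ECB versus three in CCM, which is consistent with the abstract's suggestion of ECB for short, power-constrained payloads.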

  14. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). We then perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.

  15. A Software Architecture for a Small Autonomous Underwater Vehicle Navigation System

    DTIC Science & Technology

    1993-06-01

    angle consistent with system accuracy objectives for the interim SANS system must be quantified. [Figure: depth, climb angle, and horizontal distance geometry] Figure 4.1 illustrates the hardware interface: the computer (ESP-8o80) receives digital binary gyro heading data and binary depth data via RS-232. Mode 3 of the 82C54 provides a square wave through any of the 3 counters in the 82C54. An initial count N is written to the counter control register

  16. Single Axis Attitude Control and DC Bus Regulation with Two Flywheels

    NASA Technical Reports Server (NTRS)

    Kascak, Peter E.; Jansen, Ralph H.; Kenny, Barbara; Dever, Timothy P.

    2002-01-01

    A computer simulation of a flywheel energy storage single axis attitude control system is described. The simulation models hardware which will be experimentally tested in the future. This hardware consists of two counter rotating flywheels mounted to an air table. The air table allows one axis of rotational motion. An inertia DC bus coordinator is set forth that allows the two control problems, bus regulation and attitude control, to be separated. Simulation results are presented with a previously derived flywheel bus regulator and a simple PID attitude controller.

  17. Orion MPCV Touchdown Detection Threshold Development and Testing

    NASA Technical Reports Server (NTRS)

    Daum, Jared; Gay, Robert

    2013-01-01

    A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). An in-depth trade study was completed to determine optimal characteristics of the touchdown detection method, resulting in an algorithm monitoring filtered, lever-arm corrected, 200 Hz Inertial Measurement Unit (IMU) vehicle acceleration magnitude data against a tunable threshold using persistence counter logic. Following the design of the algorithm, high fidelity environment and vehicle simulations, coupled with the actual vehicle FSW, were used to tune the acceleration threshold and persistence counter value to result in adequate performance in detecting touchdown and sufficient safety margin against early detection while descending under parachutes. An analytical approach including Kriging and adaptive sampling allowed for a sufficient number of finite element analysis (FEA) impact simulations to be completed using minimal computation time. The combination of a persistence counter of 10 and an acceleration threshold of approximately 57.3 ft/s^2 resulted in an impact performance factor of safety (FOS) of 1.0 and a safety FOS of approximately 2.6 for touchdown declaration. An RCS termination acceleration threshold of approximately 53.1 ft/s^2 with a persistence counter of 10 resulted in an increased impact performance FOS of 1.2 at the expense of a lowered under-parachutes safety factor of 2.2. The resulting tuned algorithm was then tested on data from eight Capsule Parachute Assembly System (CPAS) flight tests, showing an experimental minimum safety FOS of 6.1.
The formulated touchdown detection algorithm will be flown on the Orion MPCV FSW during the Exploration Flight Test 1 (EFT-1) mission in the second half of 2014.
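The persistence-counter logic described above can be sketched as follows. The threshold and counter values are taken from the abstract, but the exact reset behavior of the flight algorithm (here, a single sub-threshold sample clears the counter) is an assumption.

```python
def detect_touchdown(accel_mags, threshold=57.3, persistence=10):
    """Return the sample index at which touchdown is declared, or None.

    Sketch of the persistence-counter logic: declare touchdown only
    after `persistence` consecutive 200 Hz acceleration-magnitude
    samples (ft/s^2) meet or exceed `threshold`. Whether the flight
    algorithm resets on one sub-threshold sample is an assumption.
    """
    count = 0
    for i, accel in enumerate(accel_mags):
        count = count + 1 if accel >= threshold else 0
        if count >= persistence:
            return i
    return None
```

The persistence requirement is what provides margin against early declaration: a single spike under parachutes does not trip the detector, while a sustained impact deceleration does within 50 ms at 200 Hz.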

  18. Distributed performance counters

    DOEpatents

    Davis, Kristan D; Evans, Kahn C; Gara, Alan; Satterfield, David L

    2013-11-26

    A plurality of first performance counter modules is coupled to a plurality of processing cores. The plurality of first performance counter modules is operable to collect performance data associated with the plurality of processing cores respectively. A plurality of second performance counter modules are coupled to a plurality of L2 cache units, and the plurality of second performance counter modules are operable to collect performance data associated with the plurality of L2 cache units respectively. A central performance counter module may be operable to coordinate counter data from the plurality of first performance counter modules and the plurality of second performance counter modules, with the central performance counter module, the plurality of first performance counter modules, and the plurality of second performance counter modules connected by a daisy chain connection.
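A minimal software model of the arrangement the patent describes — per-core and per-L2 counter modules linked in a daisy chain, with a central module merging their data — might look like this. All names are illustrative, and real hardware would stream counts along the chain rather than walk pointers.

```python
class CounterModule:
    """A per-core or per-L2 performance counter module (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.counts = {}
        self.next = None  # downstream neighbor on the daisy chain

    def add(self, event, n=1):
        self.counts[event] = self.counts.get(event, 0) + n

def collect(head):
    """Central module's role: walk the chain and merge counter data."""
    total = {}
    module = head
    while module is not None:
        for event, n in module.counts.items():
            total[event] = total.get(event, 0) + n
        module = module.next
    return total

# Two core modules and one L2 module chained together.
core0, core1, l2 = CounterModule("core0"), CounterModule("core1"), CounterModule("l2")
core0.next, core1.next = core1, l2
```

The daisy-chain topology keeps wiring simple: the central module needs only one link to the head of the chain, at the cost of serialized collection.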

  19. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    NASA Astrophysics Data System (ADS)

    Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.

    2011-12-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with zero-overhead context switching, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores, by means of event-based parallelism, and final stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level, and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which due to its size, places huge burdens on the memory infrastructure of today's processors.

  20. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effect on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization, and empirical/analytical modeling.
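The idea of deriving a memory-free CPI_0 from instruction-level counts can be illustrated with a toy weighted-average model. The instruction classes and per-class cycle costs below are invented for illustration; the paper derives its formulas from real microarchitectural constraints.

```python
def cpi_no_memory(instr_counts, class_cpi):
    """Toy CPI_0: cycles implied by per-class costs over total instructions.

    instr_counts maps instruction classes to counts (as read from
    on-chip counters); class_cpi maps each class to an assumed cycle
    cost. Both the classes and the costs here are invented.
    """
    total = sum(instr_counts.values())
    cycles = sum(instr_counts[c] * class_cpi[c] for c in instr_counts)
    return cycles / total

# A hypothetical mix: 60% integer, 30% FP, 10% branch instructions.
cpi0 = cpi_no_memory({"int": 60, "fp": 30, "branch": 10},
                     {"int": 1.0, "fp": 2.0, "branch": 1.5})
```

Comparing such a CPI_0 against the measured CPI isolates how much of the observed cycle count is attributable to the memory hierarchy rather than the core pipeline.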

  1. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).

  2. Linguistic geometry for technologies procurement

    NASA Astrophysics Data System (ADS)

    Stilman, Boris; Yakhnis, Vladimir; Umanskiy, Oleg; Boyd, Ron

    2005-05-01

    In the modern world of rapidly rising prices of new military hardware, the importance of Simulation Based Acquisition (SBA) is hard to overestimate. With SBA, DOD would be able to test, develop CONOPS for, debug, and evaluate new conceptual military equipment before actually building the expensive hardware. However, only recently have powerful tools for real SBA been developed. Linguistic Geometry (LG) permits full-scale modeling and evaluation of new military technologies, combinations of hardware systems and concepts of their application. Using LG tools, the analysts can create a gaming environment populated with the Blue forces armed with the new conceptual hardware as well as with appropriate existing weapons and equipment. This environment will also contain the intelligent enemy with appropriate weaponry and, if desired, with conceptual counters to the new Blue weapons. Within such an LG gaming environment, the analyst can run various what-ifs with the LG tools providing the simulated combatants with strategies and tactics solving their goals with minimal resources spent.

  3. Flight evaluation of advanced third-generation midwave infrared sensor

    NASA Astrophysics Data System (ADS)

    Shen, Chyau N.; Donn, Matthew

    1998-08-01

    In FY-97 the Counter Drug Optical Upgrade (CDOU) demonstration program was initiated by the Program Executive Office for Counter Drug to increase the detection and classification ranges of P-3 counter drug aircraft by using advanced staring infrared sensors. The demonstration hardware is a 'pin-for-pin' replacement of the AAS-36 Infrared Detection Set (IRDS) located under the nose radome of a P-3 aircraft. The hardware consists of a 3rd generation mid-wave infrared (MWIR) sensor integrated into a three-axis-stabilized turret. The sensor, when installed on the P-3, has a hemispheric field of regard, and analysis has shown it will be capable of detecting and classifying Suspected Drug Trafficking Aircraft and Vessels at ranges several times greater than the current IRDS. This paper will discuss the CDOU system and its lab, ground, and flight evaluation results. Test targets included target templates, range targets, dedicated target boats, and targets of opportunity at the Naval Air Warfare Center Aircraft Division and at operational test sites. The objectives of these tests were to: (1) Validate the integration concept of the CDOU package into the P-3 aircraft. (2) Validate the end-to-end functionality of the system, including sensor/turret controls and recording of imagery during flight. (3) Evaluate the system sensitivity and resolution on a set of verified resolution target templates. (4) Validate the ability of the 3rd generation MWIR sensor to detect and classify targets at a significantly increased range.

  4. Hardware proofs using EHDM and the RSRE verification methodology

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Sjogren, Jon A.

    1988-01-01

    Examined is a methodology for hardware verification developed by the Royal Signals and Radar Establishment (RSRE) in the context of SRI International's Enhanced Hierarchical Design Methodology (EHDM) specification/verification system. The methodology utilizes a four-level specification hierarchy with the following levels: functional level, finite automata model, block model, and circuit level. The properties of a level are proved as theorems in the level below it. This methodology is applied to a 6-bit counter problem and is critically examined. The specifications are written in EHDM's specification language, Extended Special, and suggestions are made for improving both the RSRE methodology and the EHDM system.

  5. Candidate Exercise Technologies and Prescriptions

    NASA Technical Reports Server (NTRS)

    Loerch, Linda H.

    2010-01-01

    This slide presentation reviews potential exercise technologies to counter the effects of space flight. It includes an overview of the exercise countermeasures project, a review of some of the candidate exercise technologies being considered along with a few of the analog exercise hardware devices, and a review of new studies designed to optimize current and future exercise protocols.

  6. Characterization of UMT2013 Performance on Advanced Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, Louis

    2014-12-31

    This paper presents part of a larger effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. The focus here is on UMT2013, a proxy implementation of deterministic transport for unstructured meshes. I present weak and strong MPI scaling results and studies of OpenMP efficiency on the Sequoia BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Preliminary tests that exploit NVRAM as extended memory on an Ivy Bridge machine designed for “Big Data” applications are also included.

  7. Consortium for Robotics & Unmanned Systems Education & Research (CRUSER)

    DTIC Science & Technology

    2012-09-30

    as facilities at Camp Roberts, Calif. and frequent experimentation events, the Many vs. Many (MvM) Autonomous Systems Testbed provides the...and expediently translate theory to practice. The MvM Testbed is designed to integrate technological advances in hardware (inexpensive, expendable...designed to leverage the MvM Autonomous Systems Testbed to explore practical and operationally relevant avenues to counter these “swarm” opponents, and

  8. Controlling behavioral experiments with a new programming language (SORCA) for microcomputer systems.

    PubMed

    Brinkhus, H B; Klinkenborg, H; Estorf, R; Weber, R

    1983-01-01

    A new programming language SORCA has been defined and a compiler has been written for Z80-based microcomputer systems with CP/M operating system. The language was developed to control behavioral experiments by external stimuli and by time schedule in real-time. Eight binary hardware input lines are sampled cyclically by the computer and can be used to sense switches, level detectors and other binary information, while 8 binary hardware output lines, that are cyclically updated, can be used to control relays, lamps, generate tones or for other purposes. The typical reaction time (cycle time) of a SORCA-program is 500 microseconds to 1 ms. All functions can be programmed as often as necessary. Included are the basic logic functions, counters, timers, majority gates and other complex functions. Parameters can be given as constants or as a result of a step function or of a random process (with Gaussian or equal distribution). Several tasks can be performed simultaneously. In addition, results of an experiment (e.g., number of reactions or latencies) can be measured and printed out on request or automatically. The language is easy to learn and can also be used for many other control purposes.
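The cyclic sample-compute-update loop that SORCA programs express can be sketched as follows. The input/output rule implemented here is an invented example, not SORCA syntax; the record describes the language only in general terms.

```python
def sorca_cycle(inputs, state):
    """One control cycle in the style the abstract describes: sample 8
    binary input lines, update internal state, drive 8 binary output
    lines. The rule implemented (output 0 passes input 0 through;
    output 1 pulses every 4th cycle, like a timer) is an invented
    example.
    """
    state["cycles"] = state.get("cycles", 0) + 1
    outputs = [0] * 8
    outputs[0] = inputs[0]                             # switch pass-through
    outputs[1] = 1 if state["cycles"] % 4 == 0 else 0  # timer-like pulse
    return outputs
```

Running such a cycle continuously, with a cycle time of roughly 500 microseconds to 1 ms as the abstract reports, is what gives the system its real-time reaction to stimuli.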

  9. OSO-8 soft X-ray wheel experiment: Data analysis

    NASA Technical Reports Server (NTRS)

    Kraushaar, W. L.

    1982-01-01

    The soft X-ray experiment hardware and its operation are described. The device included six X-ray proportional counters, two of which, numbers 1 and 4, were pressurized with on-board methane gas supplies. Number 4 developed an excessive leak rate early in the mission and was turned off on 1975 day number 282 except for brief (typically 2-hour) periods up to day 585, after which it was left off. Counter 1 worked satisfactorily until 1975 day number 1095 (January 1, 1978), at which time the on-board methane supply was depleted. The other four counters were sealed and all except number 3 worked satisfactorily throughout the mission, which terminated with permanent satellite shut-down on day 1369. This was the first large area thin-window, gas-flow X-ray detector to be flown in orbit. The background problems were severe and consumed a very large portion of the data analysis effort. These background problems were associated with the Earth's trapped electron belts.

  10. Performing a local barrier operation

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-03-04

    Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.
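The counter-based barrier in the claim can be sketched in software. A lock stands in for the hardware atomic increment, and the base/target arithmetic assumes the counter starts at zero and is reused across barrier episodes by the same fixed set of tasks.

```python
import threading

class CounterBarrier:
    """Barrier via a shared counter, following the claim's steps: read
    the present value, derive the base value (the counter before any
    task joined) and the target value (base + number of tasks),
    atomically increment, then wait until the counter reaches the
    target. A lock stands in for the hardware atomic increment.
    """
    def __init__(self, num_tasks):
        self.num_tasks = num_tasks
        self.counter = 0
        self._lock = threading.Lock()

    def join(self):
        with self._lock:
            present = self.counter
            # Base = counter value before any task joined this episode.
            base = (present // self.num_tasks) * self.num_tasks
            target = base + self.num_tasks
            self.counter += 1
        while True:  # spin until every task has joined
            with self._lock:
                if self.counter >= target:
                    return
```

Deriving the target from the present value, rather than resetting the counter, is what lets the same counter serve repeated barrier episodes without a separate reset step.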

  11. Performing a local barrier operation

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-03-04

    Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value of the counter, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.

  12. Interactive graphics system for IBM 1800 computer

    NASA Technical Reports Server (NTRS)

    Carleton, T. P.; Howell, D. R.; Mish, W. H.

    1972-01-01

    A FORTRAN compatible software system that has been developed to provide an interactive graphics capability for the IBM 1800 computer is described. The interactive graphics hardware consists of a Hewlett-Packard 1300A cathode ray tube, Sanders photopen, digital to analog converters, pulse counter, and necessary interface. The hardware is available from IBM as several related RPQ's. The software developed permits the application programmer to use IBM 1800 FORTRAN to develop a display on the cathode ray tube which consists of one or more independent units called pictures. The software permits a great deal of flexibility in the manipulation of these pictures and allows the programmer to use the photopen to interact with the displayed data and make decisions based on information returned by the photopen.

  13. Acquisition Research: Creating Synergy for Informed Change. May 15-16 2013

    DTIC Science & Technology

    2013-05-01

    It requires sensors to collect data on component conditions that will be used to generate condition assessments. Royal Dutch Navy Fleet...electronic counter measures (ECMs), communications, and sensors. A more complex example is the ability to load different software onto pre-defined hardware...2013; Sherborne Sensors, 2013). To add to the confusion, Thomke's (1997) paper, which contains excellent case studies into what we would call

  14. Research on the adaptive optical control technology based on DSP

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun

    2018-02-01

    Adaptive optics is a real-time compensation technique that uses a high-speed support system to correct wavefront errors caused by atmospheric turbulence. However, the randomness and rapidity of atmospheric change make adaptive optical systems difficult to design: the large number of complex real-time operations leads to long delays, which is an insurmountable problem. To solve it, hardware-based computation and a parallel processing strategy are proposed, and a high-speed adaptive optical control system based on a DSP is developed. A hardware counter is used to check the system. The results show that the system can complete one closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments were carried out with good results.

  15. Development of slow control system for the Belle II ARICH counter

    NASA Astrophysics Data System (ADS)

    Yonenaga, M.; Adachi, I.; Dolenec, R.; Hataya, K.; Iori, S.; Iwata, S.; Kakuno, H.; Kataura, R.; Kawai, H.; Kindo, H.; Kobayashi, T.; Korpar, S.; Križan, P.; Kumita, T.; Mrvar, M.; Nishida, S.; Ogawa, K.; Ogawa, S.; Pestotnik, R.; Šantelj, L.; Sumiyoshi, T.; Tabata, M.; Yusa, Y.

    2017-12-01

    A slow control system (SCS) for the Aerogel Ring Imaging Cherenkov (ARICH) counter in the Belle II experiment was newly developed and coded in the development frameworks of the Belle II DAQ software. The ARICH is based on 420 Hybrid Avalanche Photo-Detectors (HAPDs). Each HAPD has 144 pixels to be read out and requires six power supply (PS) channels, so a total of 2,520 PS channels and 60,480 pixels have to be configured and controlled. Graphical User Interfaces (GUIs) with detector-oriented and device-oriented views were also implemented to ease detector operation. The ARICH SCS is in operation for detector construction and cosmic-ray tests. The paper describes the detailed features of the SCS and preliminary results of operating a reduced set of hardware, which confirm scalability to the full detector.

  16. A shuttle and space station manipulator system for assembly, docking, maintenance, cargo handling and spacecraft retrieval (preliminary design). Volume 4: Simulation studies

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Laboratory simulations of three concepts, based on maximum use of available off-the-shelf hardware elements, are described. The concepts are a stereo-foveal-peripheral TV system with symmetric stereoscopic split-image registration and 90 deg counter rotation; a computer-assisted model control system termed the trajectory following control system; and active manipulator damping. It is concluded that the feasibility of these concepts is established.

  17. Towards Countering the Rise of the Silicon Trojan

    DTIC Science & Technology

    The Trojan Horse has a venerable if unwelcome history, and it is still regarded by many as the primary component in Computer Network Attack. Trojans ... Trojans have in the vast majority taken the form of malicious software. However, more recent times have seen the emergence of what has been dubbed by some...as the 'Silicon Trojan'. These Trojans are embedded at the hardware level and can be designed directly into chips and devices. The complexity of the

  18. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution across the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies, we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching, as implemented in modern GPUs, have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates runtime with an error of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
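The sampling-based linear model described in the abstract amounts, at its core, to a least-squares fit from counter readings to runtimes. The following sketch uses synthetic data and invented per-event costs, not the paper's measurements or feature set:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "hardware counter" readings for 50 sampled kernel launches;
# the three columns stand for e.g. instructions, global loads, cache misses
X = rng.uniform(1e3, 1e6, size=(50, 3))
true_w = np.array([2e-6, 5e-6, 4e-5])        # assumed per-event time costs (s)
y = X @ true_w + rng.normal(0.0, 1e-3, 50)   # "measured" runtimes with noise

# least-squares fit of the linear model: runtime ~ counters @ w
w, *_ = np.linalg.lstsq(X, y, rcond=None)
rel_err = np.abs(X @ w - y) / y              # per-kernel relative error
```

The paper's ML variant would replace the linear fit with a learned regressor over the same kind of counter and compiler-statistic features.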

  19. Operating systems. [of computers

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    An operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism that controls primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
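The role of a semaphore at those intermediate levels can be shown with a minimal producer/consumer pair; the process names and item counts here are invented for illustration:

```python
import threading

items, consumed = [], []
available = threading.Semaphore(0)   # counts items ready for the consumer

def producer():
    for i in range(3):
        items.append(i)
        available.release()          # signal: one more item exists

def consumer():
    for _ in range(3):
        available.acquire()          # block until the producer signals
        consumed.append(items.pop(0))

threads = [threading.Thread(target=consumer),
           threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The semaphore's count is exactly the kind of synchronization counter the abstract describes: the consumer never runs ahead of the producer because each `acquire` waits for a matching `release`.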

  20. Microcomputer control soft tube measuring-testing instrument

    NASA Astrophysics Data System (ADS)

    Zhou, Yanzhou; Jiang, Xiu-Zhen; Wang, Wen-Yi

    1993-09-01

    Soft tubes are key, easily damaged parts used in large numbers by railway vehicles. For a long time, measuring and testing of the tubes was done by hand. In cooperation with the Harbin Railway Bureau, we have recently developed a new kind of automatic measuring and testing instrument. The paper presents the instrument's structure, properties, and measuring principle in detail. The core of the system is an Intel 80C31 single-chip processor, which collects and processes data, displays the results on an LED, and controls electromagnetic valves and motors. Five soft tubes are measured and tested at the same time, and the whole process is completed automatically. Counter-electromagnetic-disturbance methods are adopted in both hardware and software, so the performance of the instrument is improved significantly. In long-term use the instrument has proved reliable and practical, solving a quite difficult problem in railway transportation.

  1. A Large Motion Suspension System for Simulation of Orbital Deployment

    NASA Technical Reports Server (NTRS)

    Straube, T. M.; Peterson, L. D.

    1994-01-01

    This paper describes the design and implementation of a vertical degree of freedom suspension system which provides a constant force off-load condition to counter gravity over large displacements. By accommodating motions up to one meter for structures weighing up to 100 pounds, the system is useful for experiments which simulate the on-orbit deployment of spacecraft components. A unique aspect of this system is the combination of a large stroke passive off-load device augmented by electromotive torque actuated force feedback. The active force feedback has the effect of reducing breakaway friction by an order of magnitude over the passive system alone. The paper describes the development of the suspension hardware and the feedback control algorithm. Experiments were performed to verify the suspension system's ability to provide a gravity off-load as well as its effect on the modal characteristics of a test article.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, Steven H.; Karlin, Ian; Marinak, Marty M.

    HYDRA is used to simulate a variety of experiments carried out at the National Ignition Facility (NIF) [4] and other high energy density physics facilities. HYDRA has packages to simulate radiation transfer, atomic physics, hydrodynamics, laser propagation, and a number of other physics effects. HYDRA has over one million lines of code and includes both MPI and thread-level (OpenMP and pthreads) parallelism. This paper measures the performance characteristics of HYDRA using hardware counters on an IBM BlueGene/Q system. We report key ratios such as bytes/instruction and memory bandwidth for several different physics packages. The total number of bytes read and written per time step is also reported. We show that none of the packages which use significant time are memory bandwidth limited on a Blue Gene/Q. HYDRA currently issues very few SIMD instructions. The pressure on memory bandwidth will increase if high levels of SIMD instructions can be achieved.
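The ratios reported for HYDRA (bytes/instruction, memory bandwidth) are simple quotients of raw counter readings. A sketch with hypothetical numbers, not values from the paper:

```python
# hypothetical per-time-step counter readings (illustrative, not HYDRA data)
counters = {
    "instructions": 4.0e12,   # instructions completed in the step
    "bytes_read": 1.6e12,     # bytes read from memory
    "bytes_written": 0.4e12,  # bytes written to memory
}
elapsed_s = 10.0              # wall-clock time of the step (assumed)

bytes_total = counters["bytes_read"] + counters["bytes_written"]
bytes_per_instruction = bytes_total / counters["instructions"]
bandwidth_gb_s = bytes_total / elapsed_s / 1e9   # sustained GB/s
```

Comparing `bandwidth_gb_s` against the machine's peak memory bandwidth is how one decides, as the paper does, whether a package is memory bandwidth limited.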

  3. Energy efficient engine combustor test hardware detailed design report

    NASA Technical Reports Server (NTRS)

    Zeisser, M. H.; Greene, W.; Dubiel, D. J.

    1982-01-01

    The combustor for the Energy Efficient Engine is an annular, two-zone component. As designed, it either meets or exceeds all program goals for performance, safety, durability, and emissions, with the exception of oxides of nitrogen. When compared to the configuration investigated under the NASA-sponsored Experimental Clean Combustor Program, which was used as a basis for design, the Energy Efficient Engine combustor component has several technology advancements. The prediffuser section is designed with short, strutless, curved-walls to provide a uniform inlet airflow profile. Emissions control is achieved by a two-zone combustor that utilizes two types of fuel injectors to improve fuel atomization for more complete combustion. The combustor liners are a segmented configuration to meet the durability requirements at the high combustor operating pressures and temperatures. Liner cooling is accomplished with a counter-parallel FINWALL technique, which provides more effective heat transfer with less coolant.

  4. Collar height and heel counter-stiffness for ankle stability and athletic performance in basketball.

    PubMed

    Liu, Hui; Wu, Zitian; Lam, Wing-Kai

    2017-01-01

    This study examined the effects of the collar height and heel counter-stiffness of basketball shoes on ankle stability during sidestep cutting and on athletic performance. Fifteen university basketball players wore customized shoes with different collar heights (high and low) and heel counter-stiffnesses (regular, stiffer, and stiffest) for this study. Ankle stability was evaluated in sidestep cutting, while athletic performance was evaluated in jumping and agility tasks. All variables were analysed using two-way repeated-measures ANOVA. Results showed a shorter time to peak ankle inversion for both the high-collar and stiff heel counter conditions (P < 0.05), and a smaller initial ankle inversion angle, peak inversion velocity, and total range of inversion when wearing high-collar shoes (P < 0.05). No shoe differences were found for performance variables. These findings imply that collar height might play a larger role in lateral stability than heel counter-stiffness, while neither collar height nor counter-stiffness affects athletic performance.

  5. Counter Action Procedure Generation in an Emergency Situation of Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Gofuku, A.

    2018-02-01

    Lessons learned from the Fukushima Daiichi accident revealed various weak points in the design and operation of nuclear power plants at the time, although the plant staff carried out many resilient activities under a difficult work environment. To make nuclear power plants more resilient, improvements to hardware and to the education and training of nuclear personnel are being considered. In addition, given the advancement of computer technology and artificial intelligence, developing software tools to support the activities of plant staff is a promising approach. This paper focuses on software tools that support operations by human operators and introduces the concept of an intelligent operator support system called the co-operator. It also describes a counter-operation generation technique the authors are studying as a core component of the co-operator.

  6. Design of a ``Digital Atlas Vme Electronics'' (DAVE) module

    NASA Astrophysics Data System (ADS)

    Goodrick, M.; Robinson, D.; Shaw, R.; Postranecky, M.; Warren, M.

    2012-01-01

    ATLAS-SCT has developed a new ATLAS trigger card, 'Digital Atlas Vme Electronics' ('DAVE'). The unit is designed to provide a versatile array of interface and logic resources, including a large FPGA. It interfaces to both VME bus and USB hosts. DAVE aims to provide exact ATLAS CTP (ATLAS Central Trigger Processor) functionality, with random trigger, simple and complex deadtime, ECR (Event Counter Reset), BCR (Bunch Counter Reset), etc. being generated to give exactly the same conditions in standalone running as experienced in combined runs. DAVE provides additional hardware and a large amount of free firmware resource to allow users to add or change functionality. The combination of the large number of individually programmable inputs and outputs in various formats with very large external RAM and other components, all connected to the FPGA, also makes DAVE a powerful and versatile FPGA utility card.

  7. Automated Cache Performance Analysis And Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohror, Kathryn

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses.
    Prior to the introduction of PEBS counters, cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use, and scalable performance tool that will allow both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues would be more automated and would hide most to all of the performance engineer expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits from the proposed innovations is that it makes performance analysis usable by a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive, yet detailed reports for presenting performance results to application developers. Our goal was to leverage this existing technology to present the results from our memory performance addition to Open|SpeedShop. Suitable for experts as well as novices:
    Application performance is getting more difficult to measure as the hardware platforms applications run on become more complicated. This makes life difficult for the application developer, in that they need to know more about the hardware platform, including the memory system hierarchy, in order to understand the performance of their application. Some application developers are comfortable in that scenario, while others want to do their scientific research and not have to understand all the nuances of the hardware platform they are running their application on. Our proposed innovations were aimed to support both expert and novice performance analysts. Useful in many markets: the enhancement to Open|SpeedShop would appeal to a broader market space, as it will be useful in scientific, commercial, and cloud computing environments. Our goal was to use technology developed initially at Lawrence Livermore National Laboratory, combined with the development and commercial software experience of Argo Navis Technologies, LLC (ANT), to form a powerful combination to deliver these objectives.
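Once each PEBS sample carries the tuple described above (instruction pointer, target address, observed latency), relating latency to data structures reduces to binning samples by address range. A sketch with invented addresses and latencies, unrelated to Open|SpeedShop's actual implementation:

```python
# PEBS-style samples: (instruction pointer, data address, latency in cycles);
# all values are invented for illustration
samples = [
    (0x400A10, 0x7F000008, 30),
    (0x400A10, 0x7F000010, 32),
    (0x400B20, 0x7F100000, 210),
    (0x400B20, 0x7F100040, 190),
]
# address ranges of two hypothetical application data structures
structures = {
    "hot_array": (0x7F000000, 0x7F001000),
    "cold_list": (0x7F100000, 0x7F101000),
}

def mean_latency_by_structure(samples, structures):
    """Bin each sample's latency into the data structure owning its address."""
    sums, counts = {}, {}
    for _ip, addr, latency in samples:
        for name, (lo, hi) in structures.items():
            if lo <= addr < hi:
                sums[name] = sums.get(name, 0) + latency
                counts[name] = counts.get(name, 0) + 1
    return {name: sums[name] / counts[name] for name in sums}

latency_stats = mean_latency_by_structure(samples, structures)
```

A report like this, mapping mean latency to named data structures rather than to code regions, is exactly the kind of per-address view the project aims to surface.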

  8. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  9. Analog hardware implementation of neocognitron networks

    NASA Astrophysics Data System (ADS)

    Inigo, Rafael M.; Bonde, Allen, Jr.; Holcombe, Bradford

    1990-08-01

    This paper deals with the analog implementation of neocognitron-based neural networks. All of Fukushima's and related work on the neocognitron is based on digital computer simulations. To take full advantage of the power of this network paradigm, an analog electronic approach is proposed. We first implemented a 6-by-6 sensor network with discrete analog components and fixed weights. The network was given weight values to recognize the characters U, L, and F. These characters are recognized regardless of their location on the sensor and with various levels of distortion and noise. The network performance has also shown excellent correlation with software simulation results. Next we implemented a variable-weight network which can be trained to recognize simple patterns by means of self-organization. The adaptable weights were implemented with FETs configured as voltage-controlled resistors. To implement a variable weight there must be some type of "memory" to store the weight value and hold it while the value is reinforced or incremented. Two methods were evaluated: an analog sample-hold circuit and a digital storage scheme using binary counters. The latter is preferable for VLSI implementation because it uses standard components and does not require the use of capacitors. The analog design and implementation of these small-scale networks demonstrates the feasibility of implementing more complicated ANNs in electronic hardware. The circuits developed can also be designed for VLSI implementation.

  10. Characterization of Proxy Application Performance on Advanced Architectures. UMT2013, MCB, AMG2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, Louis H.; Gunney, Brian T.; Bhatele, Abhinav

    2015-10-09

    Three codes were tested at LLNL as part of a Tri-Lab effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. Teams from Sandia and Los Alamos tested proxy apps of their own. The focus in this report is on the LLNL codes UMT2013, MCB, and AMG2013. We present weak and strong MPI scaling results and studies of OpenMP efficiency on a large BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Results from three more speculative tests are also included: one that exploits NVRAM as extended memory, one that studies performance under a power bound, and one that illustrates the effects of changing the torus network mapping on BG/Q.

  11. Real-time optimizations for integrated smart network camera

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois

    2005-02-01

    We present an integrated real-time smart network camera. The system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, raises alarms, and computes statistics. The system also produces meta-data that can be shared with other cameras in a network. We describe the requirements of such a system and then show how its design is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking, and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this lightweight embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi Alliance. We can easily integrate software and hardware in complex environments thanks to the Java Real-Time Specification for the virtual machine and some network- and service-oriented Java specifications (like RMI and Jini). Finally, we report some outcomes and typical case studies for such a camera, like counter-flow detection.

  12. Pratt and Whitney Overview and Advanced Health Management Program

    NASA Technical Reports Server (NTRS)

    Inabinett, Calvin

    2008-01-01

    Hardware Development Activity: Design and Test Custom Multi-layer Circuit Boards for use in the Fault Emulation Unit; Logic design performed using VHDL; Lay out the power system for lab hardware; Work lab issues with software developers and software testers; Interface with Engine Systems personnel on the performance of engine hardware components; Perform off-nominal testing with new engine hardware.

  13. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
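An analytical hardware model of the kind described can, at its simplest, be a roofline-style bound: runtime is at least the larger of compute time and memory transfer time. This generic sketch, with assumed rather than measured KNL-like peaks, is not the paper's SKOPE model:

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound runtime: the larger of compute time and memory time."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# assumed KNL-like peaks (illustrative only)
PEAK_FLOPS = 3.0e12    # 3 TFLOP/s double precision
PEAK_BW = 400.0e9      # 400 GB/s high-bandwidth memory

# a hypothetical kernel: 1e12 flops touching 8e11 bytes
t_est = roofline_time(1.0e12, 8.0e11, PEAK_FLOPS, PEAK_BW)
```

Here memory time (2.0 s) exceeds compute time (0.33 s), so the model would flag the kernel as memory bound, the kind of diagnosis that drives the code-transformation recommendations the paper mentions.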

  14. Workshop on Countering Space Adaptation with Exercise: Current Issues

    NASA Technical Reports Server (NTRS)

    Harris, Bernard A. (Editor); Siconolfi, Steven F. (Editor)

    1994-01-01

    The proceedings represent an update on the problems associated with living and working in space and the possible impact exercise would have on helping reduce risk. The meeting provided a forum for discussions and debates with outside investigators on contemporary issues in exercise science and medicine as they relate to manned space flight. It also afforded an opportunity to introduce the current status of the Exercise Countermeasures Project (ECP) science investigations and in-flight hardware and software development. In addition, techniques for physiological monitoring and the development of various microgravity countermeasures were discussed.

  15. The effects of gender stereotypic and counter-stereotypic textbook images on science performance.

    PubMed

    Good, Jessica J; Woodzicka, Julie A; Wingfield, Lylan C

    2010-01-01

    We investigated the effect of gender stereotypic and counter-stereotypic images on male and female high school students' science comprehension and anxiety. We predicted stereotypic images to induce stereotype threat in females and impair science performance. Counter-stereotypic images were predicted to alleviate threat and enhance female performance. Students read one of three chemistry lessons, each containing the same text, with photograph content varied according to stereotype condition. Participants then completed a comprehension test and anxiety measure. Results indicate that female students had higher comprehension after viewing counter-stereotypic images (female scientists) than after viewing stereotypic images (male scientists). Male students had higher comprehension after viewing stereotypic images than after viewing counter-stereotypic images. Implications for alleviating the gender gap in science achievement are discussed.

  16. Progress on a Multichannel, Dual-Mixer Stability Analyzer

    NASA Technical Reports Server (NTRS)

    Kirk, Albert; Cole, Steven; Stevens, Gary; Tucker, Blake; Greenhall, Charles

    2005-01-01

    Several documents describe aspects of the continuing development of a multichannel, dual-mixer system for the simultaneous characterization of the instabilities of multiple precise, low-noise oscillators. One of the oscillators would be deemed the reference oscillator; its frequency would be offset by an amount (100 Hz) much greater than the desired data rate, and each of the other oscillators would be compared with the frequency-offset signal by a combination of hardware and software. A high-rate time-tag counter would collect zero-crossing times of the approximately 100-Hz beat notes. The system would use a combination of interpolation and averaging to process the time tags into low-rate phase residuals at the desired grid times. Circuitry that has been developed since the cited prior article includes an eight-channel timer board to replace an obsolete commercial time-tag counter, plus a custom offset generator, cleanup loop, distribution amplifier, zero-crossing detector, and frequency divider.
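The interpolation step can be sketched simply: each zero crossing of the beat note marks one cycle of phase, so interpolating crossing index against time tag yields phase at arbitrary grid times. The beat frequency, jitter level, and grid below are invented for illustration and are not the instrument's specifications:

```python
import numpy as np

beat_hz = 100.0                        # nominal beat-note frequency
n = 1000
rng = np.random.default_rng(1)
# zero-crossing time tags: ideal 100 Hz crossings plus small timing jitter
tags = np.arange(n) / beat_hz + rng.normal(0.0, 2e-7, n)

cycles = np.arange(n)                  # each crossing marks one beat cycle
grid = np.arange(0.0, 9.0, 1.0)        # desired low-rate grid times (1 s)
phase = np.interp(grid, tags, cycles)  # interpolate phase onto the grid
residual = phase - beat_hz * grid      # deviation from nominal beat phase
```

In the real system the residuals would then be averaged down to the desired data rate; here they simply stay near zero because the simulated oscillator is stable.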

  17. Mine-hunting dolphins of the Navy

    NASA Astrophysics Data System (ADS)

    Moore, Patrick W.

    1997-07-01

    Current counter-mine and obstacle avoidance technology is inadequate, and limits the Navy's capability to conduct shallow water (SW) and very shallow water (VSW) MCM in support of beach assaults by Marine Corps forces. Without information as to the location or density of mined beach areas, it must be assumed that if mines are present in one area then they are present in all areas. Marine mammal systems (MMS) are an unusual, effective and unique solution to current problems of mine and obstacle hunting. In the US Navy Mine Warfare Plan for 1994-1995 Marine Mammal Systems are explicitly identified as the Navy's only means of countering buried mines and the best means for dealing with close-tethered mines. The dolphins in these systems possess a biological sonar specifically adapted for their shallow and very shallow water habitat. Research has demonstrated that the dolphin biosonar outperforms any current hardware system available for SW and VSW applications. This presentation will cover current Fleet MCM systems and future technology application to the littoral region.

  18. A large motion zero-gravity suspension system for experimental simulation of orbital construction and deployment. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Straube, Timothy Milton

    1993-01-01

    The design and implementation of a vertical degree of freedom suspension system is described which provides a constant force off-load condition to counter gravity over large displacements. By accommodating motions up to one meter for structures weighing up to 100 pounds, the system is useful for experiments which simulate orbital construction events such as docking, multiple component assembly, or structural deployment. A unique aspect of this device is the combination of a large stroke passive off-load device augmented by electromotive torque actuated force feedback. The active force feedback has the effect of reducing break-away friction by a factor of twenty over the passive system alone. The thesis describes the development of the suspension hardware and the control algorithm. Experiments were performed to verify the suspension system's effectiveness in providing a gravity off-load and simulating the motion of a structure in orbit. Additionally, a three-dimensional system concept is presented as an extension of the one-dimensional suspension system which was implemented.
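    The effect of active force feedback on break-away friction can be illustrated with a toy discrete-time model. Everything here (the single loop gain, the integral-style torque update) is a simplifying assumption for illustration, not the thesis's controller.

```python
def simulate_offload(steps, friction, gain):
    """Discrete-time toy model of active force feedback: each cycle the
    motor torque is incremented to cancel the measured residual force,
    so the force the payload feels decays geometrically with loop gain."""
    motor = 0.0
    residual = friction          # payload initially feels full friction
    for _ in range(steps):
        motor += gain * residual     # command torque against the error
        residual = friction - motor  # force still felt by the payload
    return residual
```

    With zero gain the payload feels the full passive friction; with any positive gain below one, the residual force shrinks by a factor of (1 - gain) per cycle, mirroring how feedback reduces effective break-away friction.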

  19. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation, as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
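    The monitor-analyze-adapt cycle described above can be sketched as follows. The class, the cycles-per-instruction metric, and the thread-count knob are illustrative assumptions; a real RTM would read hardware counters and library instrumentation rather than a stand-in callable.

```python
class RunTimeMonitor:
    """Toy monitor/analyze/adapt loop: sample counters via a supplied
    probe, derive cycles-per-instruction, and adjust a config knob."""
    def __init__(self, read_counters, threshold):
        self.read_counters = read_counters  # stands in for hardware/library probes
        self.threshold = threshold
        self.config = {"threads": 1}

    def step(self):
        sample = self.read_counters()                     # monitor
        cpi = sample["cycles"] / sample["instructions"]   # analyze
        if cpi > self.threshold:                          # adapt: back off
            self.config["threads"] = max(1, self.config["threads"] - 1)
        else:                                             # adapt: scale up
            self.config["threads"] += 1
        return cpi
```

    Each call to `step` closes one loop iteration: low CPI suggests headroom to scale up, high CPI suggests contention and triggers a back-off.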

  20. Design and Implementation of a New Real-Time Frequency Sensor Used as Hardware Countermeasure

    PubMed Central

    Jiménez-Naharro, Raúl; Gómez-Galán, Juan Antonio; Sánchez-Raya, Manuel; Gómez-Bravo, Fernando; Pedro-Carrasco, Manuel

    2013-01-01

    A new digital countermeasure against attacks related to the clock frequency is presented. This countermeasure, known as frequency sensor, consists of a local oscillator, a transition detector, a measurement element and an output block. The countermeasure has been designed using a full-custom technique implemented in an Application-Specific Integrated Circuit (ASIC), and the implementation has been verified and characterized with an integrated design using a 0.35 μm standard Complementary Metal Oxide Semiconductor (CMOS) technology (Very Large Scale Implementation—VLSI implementation). The proposed solution is configurable in resolution time and allowed range of period, achieving a minimum resolution time of only 1.91 ns and an initialization time of 5.84 ns. The proposed VLSI implementation shows better results than other solutions, such as digital ones based on semi-custom techniques and analog ones based on band pass filters, all design parameters considered. Finally, a counter has been used to verify the good performance of the countermeasure in avoiding the success of an attack. PMID:24008285
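    Behaviorally, the countermeasure amounts to flagging clock periods that fall outside an allowed range. The sensor itself is full-custom hardware; the function below and its parameters are assumptions for illustration only.

```python
def period_alarms(edge_times, min_period, max_period):
    """Flag clock periods outside the allowed range, as a behavioral
    model of a frequency-sensor countermeasure: measure the interval
    between successive clock transitions and report violations."""
    periods = [b - a for a, b in zip(edge_times, edge_times[1:])]
    return [i for i, p in enumerate(periods)
            if p < min_period or p > max_period]
```

    A clock-glitch or over-/under-clocking attack shows up as one or more periods outside the configured window.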

  1. FLIC: High-Throughput, Continuous Analysis of Feeding Behaviors in Drosophila

    PubMed Central

    Pletcher, Scott D.

    2014-01-01

    We present a complete hardware and software system for collecting and quantifying continuous measures of feeding behaviors in the fruit fly, Drosophila melanogaster. The FLIC (Fly Liquid-Food Interaction Counter) detects analog electronic signals as brief as 50 µs that occur when a fly makes physical contact with liquid food. Signal characteristics effectively distinguish between different types of behaviors, such as feeding and tasting events. The FLIC system performs as well or better than popular methods for simple assays, and it provides an unprecedented opportunity to study novel components of feeding behavior, such as time-dependent changes in food preference and individual levels of motivation and hunger. Furthermore, FLIC experiments can persist indefinitely without disturbance, and we highlight this ability by establishing a detailed picture of circadian feeding behaviors in the fly. We believe that the FLIC system will work hand-in-hand with modern molecular techniques to facilitate mechanistic studies of feeding behaviors in Drosophila using modern, high-throughput technologies. PMID:24978054
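    The event classification described above (distinguishing feeding from tasting by contact duration and signal strength) can be sketched as follows; the thresholds and the segmentation rule are illustrative, not FLIC's calibrated algorithm.

```python
def classify_contacts(samples, feed_level, min_feed_len):
    """Segment a thresholded contact signal into events and label each
    'feeding' (long, strong contact) or 'tasting' (brief or weak)."""
    events, start = [], None
    for i, v in enumerate(samples + [0]):       # sentinel closes last event
        if v > 0 and start is None:
            start = i                           # event begins
        elif v == 0 and start is not None:
            seg = samples[start:i]              # event ends; classify it
            kind = ("feeding" if len(seg) >= min_feed_len
                    and max(seg) >= feed_level else "tasting")
            events.append((start, i - start, kind))
            start = None
    return events
```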

  2. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    DOE PAGES

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in the HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.
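    Coincidence counting of the kind such instruments perform can be illustrated with a much-simplified pair-counting model; the predelay/gate scheme echoes shift-register analysis, but the function below is a sketch, not the HLNB electronics.

```python
from bisect import bisect_right

def doubles_count(pulse_times, predelay, gate):
    """Count correlated pairs: for each trigger pulse at time t, count
    pulses arriving in the window (t + predelay, t + predelay + gate].
    A much-simplified stand-in for shift-register coincidence counting."""
    total = 0
    for t in pulse_times:       # pulse_times must be sorted
        lo = bisect_right(pulse_times, t + predelay)
        hi = bisect_right(pulse_times, t + predelay + gate)
        total += hi - lo
    return total
```

    Pulses from a correlated fission burst cluster in time and therefore raise the gated pair count above the accidental background.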

  3. Authenticated, private, and secured smart cards (APS-SC)

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Mehmood, Amir

    2006-04-01

    From a historical perspective, recent advancements in better antenna designs, low-power circuit integration and inexpensive fabrication materials have made possible a miniature counter-measure against Radar: a clutter source behaving like a fake target return, called Digital Reflection Frequency Modulation (DRFM). Such a military counter-measure has found its way into commerce as a near-field communication known as Radio Frequency Identification (RFID), a passive or active item tag T attached to every readable-writable Smart Card (SC): passport IDs, medical patient IDs, biometric IDs, driver licenses, book IDs, library IDs, etc. These avalanche phenomena may be due to 3rd-Gen phones seeking much more versatile and inexpensive interfaces than the line-of-sight bar-code optical scan. Despite the popularity of RFID, the lack of Authenticity, Privacy and Security (APS) protection has somewhat restricted widespread commercial, financial, medical, legal, and military applications. A conventional APS approach can obfuscate a private passkey K of the SC with the tag number T or the reader number R, or both; i.e., only T*K or R*K or both will appear on them, where * denotes an invertible operation, e.g. EXOR, but not limited to it. Then, only the authentic owner, knowing all, can invert the operation, e.g. EXOR*EXOR = I, to find K. However, such an encryption could easily be compromised by a hacker searching exhaustively by comparison against frequently used words. Nevertheless, knowing the biological (wetware) lesson of the power of paired sensors and the history of Radar hardware counter-measures, we can counter the counter-measure DRFM: instead of using one RFID tag per SC, we follow Nature in adopting two ears/tags, e.g. each one holding portions of the ID, or simply two different IDs readable only by different modes of the interrogating reader, followed by a brain-like central processor in terms of nonlinear invertible shufflers mixing the two IDs' bits.
We prefer to adopt such a combined hardware-software hybrid approach because the phase space of a single RFID tag is too limited for any meaningful encryption approach. Furthermore, a useful biological lesson is not to put all eggs in one basket: "if you don't get it all, you can't hack it". According to Radar physics, we can choose the amplitude, the frequency, the phase, the polarization, and two radiation energy supply principles, capacitive coupling (~6 m) and inductive coupling (<1 m), to code the pair of tags differently. A casual skimmer equipped with a single-mode reader cannot read all. We consider near-field and mid-field applications in this paper. The near-field case arises at check-out counters or in conveyor-belt inventory involving sensitive and invariant data. The mid-field search & rescue case involves not only item/person identification but also geo-location. If more RF power becomes cheaper and portable for longer propagation distances in the near future, then triangulation with a pair of secured readers, located at known geo-locations, could interrogate and identify items/persons and their locations in a GPS-blind environment.
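    The invertible-XOR obfuscation and the two-tag split described above are easy to make concrete. A minimal sketch follows; the XOR secret-sharing construction is one possible reading of the "two ears" idea, not necessarily the authors' exact scheme.

```python
import secrets

def obfuscate(key, tag):
    """The abstract's K*T with * = EXOR; applying EXOR again inverts it,
    since (K ^ T) ^ T == K."""
    return key ^ tag

def split_id(ident, bits=64):
    """Two-tag split: a random share and ident XOR share. Neither tag
    alone reveals the ID; XORing both tags recovers it."""
    share = secrets.randbits(bits)
    return share, ident ^ share
```

    A skimmer reading only one tag learns a uniformly random value; only a reader that interrogates both modes can reconstruct the ID.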

  4. Scientific investigations with the data base HEAO-1 scanning modulator collimator

    NASA Technical Reports Server (NTRS)

    Schwartz, Daniel A.

    1992-01-01

    The hardware specification for the Scanning Modulation Collimator (MC) experiment on HEAO-1 was to measure positions of bright (greater than 10(exp -11) ergs/cm(exp 2)s), hard (1 to 15 keV) x-ray sources to 5-10 arcsec, and to measure their size and structure in three energy bands down to 10 arcsec resolution. The scientific purpose of this specification was to enable the identification of these x-ray sources with optical and radio objects in order to elucidate the x-ray emission mechanism and the nature of the candidate astronomical system. The experiment was an outstanding success. Hardware systems functioned perfectly, although the loss of one (out of eight) proportional counters degraded our sensitivity by about 10 percent. Our aspect solution of 7 arcsec precision allowed us to achieve statistics-limited location precision for all but the strongest sources. We vigorously pursued a strategy of determining the scientific importance of each identification, and of publishing each scientific result as it came along.

  5. The Validity and Reliability of the Gymaware Linear Position Transducer for Measuring Counter-Movement Jump Performance in Female Athletes

    ERIC Educational Resources Information Center

    O'Donnell, Shannon; Tavares, Francisco; McMaster, Daniel; Chambers, Samuel; Driller, Matthew

    2018-01-01

    The current study aimed to assess the validity and test-retest reliability of a linear position transducer when compared to a force plate through a counter-movement jump in female participants. Twenty-seven female recreational athletes (19 ± 2 years) performed three counter-movement jumps simultaneously using the linear position transducer and…

  6. Reliability of Metrics Associated with a Counter-Movement Jump Performed on a Force Plate

    ERIC Educational Resources Information Center

    Lombard, Wayne; Reid, Sorrel; Pearson, Keagan; Lambert, Michael

    2017-01-01

    The counter-movement jump is a consequence of maximal force, rate of force developed, and neuromuscular coordination. Thus, the counter-movement jump has been used to monitor various training adaptations. However, the smallest detectable difference of counter-movement jump metrics has yet to be established. The objective of the present study was…

  7. Shuttle spectrum despreader

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of the spread spectrum despreader project are reported; three principal products were designed and tested: (1) a spread spectrum despreader breadboard; (2) associated test equipment consisting of a spectrum spreader and a bit reconstruction/error counter; and (3) a paper design of a Ku-band receiver that would incorporate the despreader as a principal subsystem. The despreader and test set are designed for maximum flexibility. A choice of unbalanced quadriphase or biphase shift keyed data modulation is available. Selectable integration time and threshold voltages on the despreader further lend the delivered hardware true usefulness as laboratory test equipment.

  8. A Next Generation Digital Counting System For Low-Level Tritium Studies (Project Report)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, P.

    2016-10-03

    Since the early seventies, SRNL has pioneered low-level tritium analysis using various nuclear counting technologies and techniques. Since 1999, SRNL has successfully performed routine low-level tritium analyses with counting systems based on digital signal processor (DSP) modules developed in the late 1990s. Each of these counting systems is complex, unique to SRNL, and fully dedicated to performing routine tritium analyses of low-level environmental samples. It is time to modernize these systems due to a variety of issues including (1) age, (2) lack of direct replacement electronics modules and (3) advances in digital signal processing and computer technology. There has been considerable development in many areas associated with the enterprise of performing low-level tritium analyses. The objective of this LDRD project was to design, build, and demonstrate a Next Generation Tritium Counting System (NGTCS), while not disrupting the routine low-level tritium analyses underway in the facility on the legacy counting systems. The work involved (1) developing a test bed for building and testing new counting system hardware that does not interfere with our routine analyses, (2) testing a new counting system based on a modern state-of-the-art DSP module, and (3) evolving the low-level tritium counter design to reflect the state of the science.

  9. SOLARTRAK. Solar Array Tracking Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manish, A.B.; Dudley, J.

    1995-06-01

    SolarTrak, used in conjunction with various versions of 68HC11-based SolarTrak hardware boards, provides a control system for one- or two-axis solar tracking arrays. Sun position is computed from stored position data and time from an on-board clock/calendar chip. Position feedback can be by one or two offset motor turn-counter square wave signals per axis, or by a position potentiometer. A limit of 256 counts resolution is imposed by the on-board analog-to-digital (A/D) converter. Control is provided for one or two motors. Numerous options are provided to customize the controller for specific applications. Some options are imposed at compile time; some are settable during operation. Software and hardware board designs are provided for the Control Board and a separate User Interface Board that accesses and displays variables from the Control Board. The controller can be used with a range of sensor options, from a single turn-count sensor per motor to systems using dual turn-count sensors, limit sensors, and a zero-reference sensor. Dual-axis trackers oriented azimuth-elevation, east-west, north-south, or polar-declination can be controlled. Misalignments from these orientations can also be accommodated. The software performs a coordinate transformation using six parameters to compute sun position in the misaligned coordinates of the tracker. The parameters account for tilt of the tracker in two directions, rotation about each axis, and gear ratio errors in each axis. The software can even measure and compute these parameters during an initial setup period if current from a sun position sensor or output from the photovoltaic array is available as an analog voltage to the control board's A/D port. Wind or emergency stow to a preset position is available, triggered by digital or analog signals. Night stow is also available. The tracking dead band is adjustable from narrow to wide. Numerous features of the hardware and software conserve energy for use with battery-powered systems.
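    The misalignment correction described above can be sketched as rotations of the ideal sun vector into the tracker frame followed by gear-ratio scaling. The parameter names and the two-rotation model below are assumptions for illustration; SolarTrak's actual six-parameter transformation is not reproduced here.

```python
from math import sin, cos, atan2, sqrt

def rot_x(v, a):
    """Rotate vector v by angle a (radians) about the x axis."""
    x, y, z = v
    return (x, y * cos(a) - z * sin(a), y * sin(a) + z * cos(a))

def rot_y(v, a):
    """Rotate vector v by angle a (radians) about the y axis."""
    x, y, z = v
    return (x * cos(a) + z * sin(a), y, -x * sin(a) + z * cos(a))

def tracker_angles(sun_vec, tilt_ns=0.0, tilt_ew=0.0,
                   gear_az=1.0, gear_el=1.0):
    """Map an ideal sun unit vector into a (possibly tilted) tracker
    frame and derive azimuth/elevation drive angles, scaled by
    gear-ratio correction factors."""
    x, y, z = rot_y(rot_x(sun_vec, tilt_ns), tilt_ew)
    az = atan2(y, x) * gear_az
    el = atan2(z, sqrt(x * x + y * y)) * gear_el
    return az, el
```

    With all correction parameters at their defaults the transformation is the identity; a nonzero tilt or gear factor bends the commanded drive angles away from the ideal ones.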

  10. Solar Array Tracking Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maish, Alexander

    1995-06-22

    SolarTrak, used in conjunction with various versions of 68HC11-based SolarTrak hardware boards, provides a control system for one- or two-axis solar tracking arrays. Sun position is computed from stored position data and time from an on-board clock/calendar chip. Position feedback can be by one or two offset motor turn-counter square wave signals per axis, or by a position potentiometer. A limit of 256 counts resolution is imposed by the on-board analog-to-digital (A/D) converter. Control is provided for one or two motors. Numerous options are provided to customize the controller for specific applications. Some options are imposed at compile time; some are settable during operation. Software and hardware board designs are provided for the Control Board and a separate User Interface Board that accesses and displays variables from the Control Board. The controller can be used with a range of sensor options, from a single turn-count sensor per motor to systems using dual turn-count sensors, limit sensors, and a zero-reference sensor. Dual-axis trackers oriented azimuth-elevation, east-west, north-south, or polar-declination can be controlled. Misalignments from these orientations can also be accommodated. The software performs a coordinate transformation using six parameters to compute sun position in the misaligned coordinates of the tracker. The parameters account for tilt of the tracker in two directions, rotation about each axis, and gear ratio errors in each axis. The software can even measure and compute these parameters during an initial setup period if current from a sun position sensor or output from the photovoltaic array is available as an analog voltage to the control board's A/D port. Wind or emergency stow to a preset position is available, triggered by digital or analog signals. Night stow is also available. The tracking dead band is adjustable from narrow to wide. Numerous features of the hardware and software conserve energy for use with battery-powered systems.

  11. Training Post-9/11 Police Officers with a Counter-Terrorism Reality-Based Training Model: A Case Study

    ERIC Educational Resources Information Center

    Biddle, Christopher J.

    2013-01-01

    The purpose of this qualitative holistic multiple-case study was to identify the optimal theoretical approach for a Counter-Terrorism Reality-Based Training (CTRBT) model to train post-9/11 police officers to perform effectively in their counter-terrorism assignments. Post-9/11 police officers assigned to counter-terrorism duties are not trained…

  12. Documentation of daily sit-to-stands performed by community-dwelling adults.

    PubMed

    Bohannon, Richard W; Barreca, Susan R; Shove, Megan E; Lambert, Cynthia; Masters, Lisa M; Sigouin, Christopher S

    2008-01-01

    No information exists about how many sit-to-stands (STSs) are performed daily by community-dwelling adults. We, therefore, examined the feasibility of using a tally counter to document daily STSs, documented the number of daily STSs performed, and determined if the number of STSs was influenced by demographic or health variables. Ninety-eight community-dwelling adults (19-84 years) agreed to participate. After providing demographic and health information, subjects used a tally counter to document the number of STSs performed daily for 7 consecutive days. All but two subjects judged their counter-documented STS number to be accurate. Excluding data from these and two other subjects, the mean number of STSs for subjects was 42.8 to 49.3, depending on the day. The number was significantly higher on weekdays than weekends. No demographic or health variable was significantly related to the number of STSs in univariate or multivariate analysis. In conclusion, this study suggests that a tally counter may be a practical aid to documenting STS activity. The STS repetitions recorded by the counter in this study provide an estimate of the number of STSs that community-dwelling adults perform daily.

  13. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
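    The stride rule of thumb can be made concrete. The sketch below contrasts row-major (stride-1) and column-first traversal of the same data; in interpreted Python the cache effect is largely masked, so this illustrates only the access patterns, not the timings the paper measures.

```python
def sum_row_major(grid):
    """Stride-1 traversal: visits each row's elements consecutively,
    the access order that maximizes spatial locality on cached memory."""
    total = 0
    for row in grid:
        for v in row:
            total += v
    return total

def sum_col_major(grid):
    """Column-first traversal of the same data: successive accesses are
    a full row apart, the large-stride pattern that tends to miss cache."""
    total = 0
    for j in range(len(grid[0])):
        for row in grid:
            total += row[j]
    return total
```

    Both functions compute the same result; as the paper observes, which access order actually runs faster on a given machine can still be counterintuitive once compiler and architecture effects enter.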

  14. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates-as reported by a cache simulation tool, and confirmed by hardware counters-only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  15. Counter electrodes in dye-sensitized solar cells.

    PubMed

    Wu, Jihuai; Lan, Zhang; Lin, Jianming; Huang, Miaoliang; Huang, Yunfang; Fan, Leqing; Luo, Genggeng; Lin, Yu; Xie, Yimin; Wei, Yuelin

    2017-10-02

    Dye-sensitized solar cells (DSSCs) are regarded as prospective solar cells for the next generation of photovoltaic technologies and have become research hotspots in the PV field. The counter electrode, as a crucial component of DSSCs, collects electrons from the external circuit and catalyzes the redox reduction in the electrolyte, which has a significant influence on the photovoltaic performance, long-term stability and cost of the devices. Solar cells, dye-sensitized solar cells, as well as the structure, principle, preparation and characterization of counter electrodes are covered in the introduction section. The next six sections discuss counter electrodes based on transparency and flexibility, metals and alloys, carbon materials, conductive polymers, transition metal compounds, and hybrids, respectively. The special features and performance, advantages and disadvantages, preparation, characterization, mechanisms, important events and development histories of various counter electrodes are presented. In the eighth section, the development of counter electrodes is summarized with an outlook. This article panoramically reviews counter electrodes in DSSCs, which is of great significance for advancing the development of DSSCs and other photoelectrochemical devices.

  16. Toward Evolvable Hardware Chips: Experiments with a Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    1998-01-01

    Evolvable Hardware is reconfigurable hardware that self-configures under the control of an evolutionary algorithm. The search for a hardware configuration can be performed using software models or, faster and more accurately, directly in reconfigurable hardware. Several experiments have demonstrated the possibility to automatically synthesize both digital and analog circuits. The paper introduces an approach to automated synthesis of CMOS circuits, based on evolution on a Programmable Transistor Array (PTA). The approach is illustrated with a software experiment showing evolutionary synthesis of a circuit with a desired DC characteristic. A hardware implementation of a test PTA chip is then described, and the same evolutionary experiment is performed on the chip, demonstrating circuit synthesis/self-configuration directly in hardware.
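    The evolutionary search loop underlying such experiments can be sketched generically. The genome encoding (a bitstring of switch states), the population size, and the single-bit mutation scheme below are assumptions for illustration, not the PTA chip's actual configuration format.

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, gens=60, seed=1):
    """Minimal elitist evolutionary search over switch configurations:
    a genome is a bitstring of transistor-switch states, and fitness
    (lower is better) scores how well the resulting circuit's response
    matches the desired characteristic."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]           # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n_bits)] ^= 1   # single-bit mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

    In a real evolvable-hardware setup, `fitness` would download each candidate configuration to the chip (or a SPICE model) and compare the measured DC characteristic with the target.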

  17. Targeting multiple heterogeneous hardware platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. 
The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
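    The preprocessor/JIT technique mentioned above can be illustrated by composing the build-option string that a call like clBuildProgram (or pyopencl's Program.build) receives; the helper below only builds strings and does not invoke an OpenCL runtime.

```python
def build_options(features):
    """Compose '-Dname=value' preprocessor defines for an OpenCL JIT
    build call, so one kernel source can be specialized per platform
    without duplicating code."""
    return " ".join(f"-D{name}={val}"
                    for name, val in sorted(features.items()))

# One kernel source; optional paths are chosen at JIT-compile time.
KERNEL_TEMPLATE = """
#if USE_LOCAL_MEM
    /* path exploiting explicit local memory */
#else
    /* portable fallback path */
#endif
"""
```

    A per-platform configuration dictionary then selects, at JIT time, which modular code path the device compiler sees, which is exactly how hardware-specific optimizations can be factored in without forking the code base.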

  18. Measuring human performance on NASA's microgravity aircraft

    NASA Technical Reports Server (NTRS)

    Morris, Randy B.; Whitmore, Mihriban

    1993-01-01

    Measuring human performance in a microgravity environment will aid in identifying the design requirements, human capabilities, safety, and productivity of future astronauts. The preliminary understanding of the microgravity effects on human performance can be achieved through evaluations conducted onboard NASA's KC-135 aircraft. These evaluations can be performed in relation to hardware performance, human-hardware interface, and hardware integration. Measuring human performance in the KC-135 simulated environment will contribute to the efforts of optimizing the human-machine interfaces for future and existing space vehicles. However, there are limitations, such as limited number of qualified subjects, unexpected hardware problems, and miscellaneous plane movements which must be taken into consideration. Examples for these evaluations, the results, and their implications are discussed in the paper.

  19. 77 FR 57005 - Airworthiness Directives; Bell Helicopter Textron Canada Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-17

    ... tailboom-attachment hardware (attachment hardware), and perform initial and recurring determinations of the... bolts specified in the BHTC Model 407 Maintenance Manual and applied during manufacturing was incorrect... require replacing attachment hardware and performing initial and recurring determinations of the torque on...

  20. Apollo experience report: Battery subsystem

    NASA Technical Reports Server (NTRS)

    Trout, J. B.

    1972-01-01

    Experience with the Apollo command and service module and lunar module batteries is discussed. Significant hardware development concepts and hardware test results are summarized, and the operational performance of batteries on the Apollo 7 to 13 missions is discussed in terms of performance data, mission constraints, and basic hardware design and capability. The flight performance of the Apollo battery charger is also discussed, and in-flight data are presented.

  1. The cognitive costs of the counter-stereotypic: gender, emotion, and social presence.

    PubMed

    McCarty, Megan K; Kelly, Janice R; Williams, Kipling D

    2014-01-01

    We explored the concurrent and subsequent cognitive consequences of the experience of gender counter-stereotypic emotions. Participants experiencing gender counter-stereotypic emotions were expected to display less emotional expression and demonstrate poorer cognitive performance when in the public condition than when in the private condition. Seventy-one women and 66 men completed an anger- or sadness-inducing task privately or publicly. Participants completed two cognitive tasks: one during and one after the emotion-induction task. Participants exhibited poorer performance during and following gender counter-stereotypic emotions only in the public condition. Direct evidence for greater suppression of gender counter-stereotypic emotions in the public conditions was not obtained. These results suggest that the same public emotional events may be differentially cognitively depleting depending on one's gender, potentially contributing to the perpetuation of stereotypes.

  2. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware-dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the Hardware-in-the-Loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate that the goal of the effort was met; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  3. Distributed subterranean exploration and mapping with teams of UAVs

    NASA Astrophysics Data System (ADS)

    Rogers, John G.; Sherrill, Ryan E.; Schang, Arthur; Meadows, Shava L.; Cox, Eric P.; Byrne, Brendan; Baran, David G.; Curtis, J. Willard; Brink, Kevin M.

    2017-05-01

    Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.

  4. The Evolution of Exercise Hardware on ISS: Past, Present, and Future

    NASA Technical Reports Server (NTRS)

    Buxton, R. E.; Kalogera, K. L.; Hanson, A. M.

    2017-01-01

    During 16 years in low-Earth orbit, the suite of exercise hardware aboard the International Space Station (ISS) has matured significantly. Today, the countermeasure system supports an array of physical-training protocols and serves as an extensive research platform. Future hardware designs are required to have smaller operational envelopes and must also mitigate known physiologic issues observed in long-duration spaceflight. Taking lessons learned from the long history of space exercise will be important to successful development and implementation of future, compact exercise hardware. The evolution of exercise hardware as deployed on the ISS has implications for future exercise hardware and operations. Key lessons learned from the early days of ISS have helped to: 1. Enhance hardware performance (increased speed and loads). 2. Mature software interfaces. 3. Compare inflight exercise workloads to pre-, in-, and post-flight musculoskeletal and aerobic conditions. 4. Improve exercise comfort. 5. Develop complementary hardware for research and operations. Current ISS exercise hardware includes both custom and commercial-off-the-shelf (COTS) hardware. Benefits and challenges to this approach have prepared engineering teams to take a hybrid approach when designing and implementing future exercise hardware. Significant effort has gone into consideration of hardware instrumentation and wearable devices that provide important data to monitor crew health and performance.

  5. A Scintillation Counter System Design To Detect Antiproton Annihilation using the High Performance Antiproton Trap(HiPAT)

    NASA Technical Reports Server (NTRS)

    Martin, James J.; Lewis, Raymond A.; Stanojev, Boris

    2003-01-01

    The High Performance Antiproton Trap (HiPAT), a system designed to hold up to 10(exp 12) charged particles with a storage half-life of approximately 18 days, is a tool to support basic antimatter research. NASA's interest stems from the energy density represented by the annihilation of matter with antimatter, 10(exp 2) MJ/g. The HiPAT is configured with a Penning-Malmberg style electromagnetic confinement region with field strengths up to 4 Tesla and confinement voltages of up to 20 kV. To date, a series of normal-matter experiments using positive and negative ions has been performed to evaluate the design's performance prior to operations with antiprotons. The primary methods of detecting and monitoring stored normal-matter ions and antiprotons within the trap include a destructive extraction technique that makes use of a microchannel plate (MCP) device and a nondestructive radio frequency scheme tuned to key particle frequencies. However, an independent means of detecting stored antiprotons is possible by making use of the actual annihilation products as a unique indicator. The immediate yield of the annihilation event includes photons and pi mesons, emanating spherically from the point of annihilation. To "count" these events, a hardware system of scintillators, discriminators, coincidence meters, and multichannel scalers (MCS) has been configured to surround much of the HiPAT. Signal coincidence with voting logic is an essential part of this system, necessary to weed out single cosmic ray events from the multi-particle annihilation shower. This system can be operated in a variety of modes accommodating various conditions. The first is a low-speed sampling interval that monitors the background loss or "evaporation" rate of antiprotons held in the trap during long storage periods, providing an independent method of validating particle lifetimes. 
The second is a high-speed sample rate accumulating information on a microsecond timescale; this is useful when trapped antiparticles are extracted against a target, providing an indication of quantity. This paper details the layout of this system, the setup of the hardware components around HiPAT, and applicable checkouts using normal-matter radioactive sources.
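The coincidence-with-voting idea described in this record can be sketched in a few lines: an event is accepted only when enough channels fire inside a short window, rejecting single-channel cosmic-ray hits. The window, vote threshold, and hit list below are illustrative assumptions, not parameters of the HiPAT system.

```python
# Toy coincidence "voting" logic: accept an event only if at least `votes`
# scintillator channels fire within a time window, rejecting lone hits.
# Window, threshold, and hit data are illustrative, not HiPAT parameters.

def coincident_events(hits, window=0.5, votes=3):
    """hits: list of (time, channel); returns accepted event start times."""
    hits = sorted(hits)
    accepted = []
    i = 0
    while i < len(hits):
        t0 = hits[i][0]
        # distinct channels firing within [t0, t0 + window)
        cluster = {ch for t, ch in hits if t0 <= t < t0 + window}
        if len(cluster) >= votes:
            accepted.append(t0)
            # skip past this cluster so it is counted once
            while i < len(hits) and hits[i][0] < t0 + window:
                i += 1
        else:
            i += 1
    return accepted

# A lone cosmic hit on one channel is rejected; a 4-channel shower passes.
hits = [(1.0, 0),                                  # single-channel background
        (5.0, 0), (5.1, 1), (5.2, 2), (5.3, 3)]    # annihilation-like shower
print(coincident_events(hits))  # -> [5.0]
```

Requiring distinct channels (rather than repeated hits on one channel) is what gives the voting scheme its rejection power against single cosmic rays.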

  6. MSFC Skylab structures and mechanical systems mission evaluation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A performance analysis for structural and mechanical major hardware systems and components is presented. Development background, testing, modifications, and requirement adjustments are included. Functional narratives are provided for comparison purposes, as are predicted design performance criteria. Each item is evaluated on an individual basis: that is, (1) history (requirements, design, manufacture, and test); (2) in-orbit performance (description and analysis); and (3) conclusions and recommendations regarding future space hardware application. Overall, the structural and mechanical performance of the Skylab hardware was outstanding.

  7. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data. PMID:22163811
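The software side of such run-time monitoring (library instrumentation, as opposed to hardware counters) can be sketched as a wrapper that records per-function call counts and wall-clock time. This is a minimal illustration of the instrumentation idea, not the RTM's actual implementation; all names are hypothetical.

```python
# Sketch of run-time monitoring via library instrumentation: a decorator
# records call counts and wall-clock time per function, analogous in
# spirit to the RTM's software-side collection. Names are illustrative.
import collections
import functools
import time

counters = collections.defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def monitored(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            c = counters[fn.__name__]
            c["calls"] += 1
            c["seconds"] += time.perf_counter() - t0
    return wrapper

@monitored
def work(n):
    return sum(range(n))

for _ in range(3):
    work(1000)
print(counters["work"]["calls"])  # -> 3
```

A real monitor would pair these software counts with hardware performance-counter reads (e.g. via the OS's counter interface) before adapting the configuration.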

  8. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.

  9. 18F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection.

    PubMed

    Bagrosky, Brian M; Hayes, Kari L; Koo, Phillip J; Fenton, Laura Z

    2013-08-01

    Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and a difficult physical examination. We discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection who underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information, including laboratory values of inflammatory markers, postoperative notes and pathology results, to evaluate the diagnostic accuracy of FDG PET/CT. Exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in the evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. 
FDG PET/CT should be the first-line cross-sectional imaging study in patients with suspected spinal hardware infection. Because pneumonia was diagnosed as often as spinal hardware infection, initial chest radiography should also be performed.

  10. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special-purpose hardware implementation of DFT computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms that are more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special-purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer, based on microprocessors, is discussed.
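The counter-addressed data memory described here replaces a shift register with a RAM indexed by a modulo counter: the counter advances each sample, and no data physically moves. A minimal software model of that scheme, with illustrative tap values, might look like:

```python
# Model of the counter-addressed FIR scheme: a modulo counter indexes a
# RAM-like delay line instead of shifting samples. Taps and input values
# are illustrative; the hardware uses a real counter and RAM.

def fir_circular(samples, taps):
    n = len(taps)
    ram = [0.0] * n          # delay line held in "RAM"
    write_ptr = 0            # the address counter
    out = []
    for x in samples:
        ram[write_ptr] = x
        # multiply-accumulate taps against samples read back in reverse order
        acc = 0.0
        for k in range(n):
            acc += taps[k] * ram[(write_ptr - k) % n]
        out.append(acc)
        write_ptr = (write_ptr + 1) % n  # counter increments; no data moves
    return out

# 2-tap moving average: matches direct convolution
print(fir_circular([2.0, 4.0, 6.0, 8.0], [0.5, 0.5]))  # -> [1.0, 3.0, 5.0, 7.0]
```

In hardware the `(write_ptr - k) % n` read address is exactly what a down-counting address counter generates, which is why a simple counter suffices to realize the FIR delay line in RAM.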

  11. Overdetermined shooting methods for computing standing water waves with spectral accuracy

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Yu, Jia

    2012-01-01

    A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In the numerical method, robustness is achieved by posing the problem as an overdetermined nonlinear system and using either adjoint-based minimization techniques or a quadratically convergent trust-region method to minimize the objective function. Efficiency is achieved in the trust-region approach by parallelizing the Jacobian computation, so the setup cost of computing the Dirichlet-to-Neumann operator in the variational equation is not repeated for each column. Updates of the Jacobian are also delayed until the previous Jacobian ceases to be useful. Accuracy is maintained using spectral collocation with optional mesh refinement in space, a high-order Runge-Kutta or spectral deferred correction method in time and quadruple precision for improved navigation of delicate regions of parameter space as well as validation of double-precision results. 
Implementation issues for transferring much of the computation to graphics processing units are briefly discussed, and the performance of the algorithm is tested for a number of hardware configurations.
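The core shooting idea in this record (recast a time-periodic or boundary-value problem as root-finding on unknown initial data) can be illustrated on a deliberately tiny problem. This is a toy sketch, far simpler than the paper's overdetermined formulation: find the initial slope for y'' = -y with y(0) = 0 and y(pi/2) = 1.

```python
# Toy shooting method: root-find on the unknown initial slope so that the
# integrated trajectory hits the far boundary condition. All values are
# illustrative; the paper's method is far more elaborate.
import math

def integrate(s, t_end=math.pi / 2, n=200):
    """RK4 for y'' = -y with y(0)=0, y'(0)=s; returns y(t_end)."""
    h = t_end / n
    y, v = 0.0, s
    f = lambda y, v: (v, -y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

def shoot(target=1.0, s0=0.0, s1=2.0, tol=1e-10):
    """Secant iteration on the mismatch at the far boundary."""
    r0, r1 = integrate(s0) - target, integrate(s1) - target
    while abs(r1) > tol:
        s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
        r0, r1 = r1, integrate(s1) - target
    return s1

# Exact solution is y = sin(t), so the slope at 0 should be 1.
print(round(shoot(), 6))  # -> 1.0
```

The paper's method generalizes this residual to an overdetermined nonlinear system minimized by adjoint or trust-region techniques rather than a scalar secant iteration.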

  12. ICSH guidelines for the verification and performance of automated cell counters for body fluids.

    PubMed

    Bourner, G; De la Salle, B; George, T; Tabe, Y; Baum, H; Culp, N; Keng, T B

    2014-12-01

    One of the many challenges facing laboratories is the verification of their automated Complete Blood Count cell counters for the enumeration of body fluids. These analyzers offer improved accuracy, precision, and efficiency in performing the enumeration of cells compared with manual methods. A patterns of practice survey was distributed to laboratories that participate in proficiency testing in Ontario, Canada, the United States, the United Kingdom, and Japan to determine the number of laboratories that are testing body fluids on automated analyzers and the performance specifications that were performed. Based on the results of this questionnaire, an International Working Group for the Verification and Performance of Automated Cell Counters for Body Fluids was formed by the International Council for Standardization in Hematology (ICSH) to prepare a set of guidelines to help laboratories plan and execute the verification of their automated cell counters to provide accurate and reliable results for automated body fluid counts. These guidelines were discussed at the ICSH General Assemblies and reviewed by an international panel of experts to achieve further consensus. © 2014 John Wiley & Sons Ltd.
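Verification exercises of the kind this guideline supports typically report statistics such as within-run imprecision (CV%) of replicate counts and bias against a reference method. The sketch below computes those two quantities on made-up data; it illustrates the general statistics, not the ICSH protocol itself, and all numbers are hypothetical.

```python
# Two statistics commonly checked when verifying an automated cell
# counter: within-run imprecision (CV%) of replicates and bias against a
# reference method. Data and limits here are illustrative only.
import statistics

def cv_percent(replicates):
    """Coefficient of variation of replicate counts, in percent."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def bias_percent(measured_mean, reference_mean):
    """Relative bias of the analyzer mean versus a reference count."""
    return 100.0 * (measured_mean - reference_mean) / reference_mean

replicates = [102, 98, 101, 99, 100]   # automated counts of one fluid sample
reference = 100.0                      # e.g. a manual chamber count

print(round(cv_percent(replicates), 2),
      round(bias_percent(statistics.mean(replicates), reference), 2))  # -> 1.58 0.0
```

A laboratory would compare such values against predefined acceptance criteria for each body fluid type before placing the analyzer into service.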

  13. Hardware Removal in Craniomaxillofacial Trauma

    PubMed Central

    Cahill, Thomas J.; Gandhi, Rikesh; Allori, Alexander C.; Marcus, Jeffrey R.; Powers, David; Erdmann, Detlev; Hollenbeck, Scott T.; Levinson, Howard

    2015-01-01

    Background Craniomaxillofacial (CMF) fractures are typically treated with open reduction and internal fixation. Open reduction and internal fixation can be complicated by hardware exposure or infection. The literature often does not differentiate between these 2 entities; so for this study, we have considered all hardware exposures as hardware infections. Approximately 5% of adults with CMF trauma are thought to develop hardware infections. Management consists of either removing the hardware versus leaving it in situ. The optimal approach has not been investigated. Thus, a systematic review of the literature was undertaken and a resultant evidence-based approach to the treatment and management of CMF hardware infections was devised. Materials and Methods A comprehensive search of journal articles was performed in parallel using MEDLINE, Web of Science, and ScienceDirect electronic databases. Keywords and phrases used were maxillofacial injuries; facial bones; wounds and injuries; fracture fixation, internal; wound infection; and infection. Our search yielded 529 articles. To focus on CMF fractures with hardware infections, the full text of English-language articles was reviewed to identify articles focusing on the evaluation and management of infected hardware in CMF trauma. Each article’s reference list was manually reviewed and citation analysis performed to identify articles missed by the search strategy. There were 259 articles that met the full inclusion criteria and form the basis of this systematic review. The articles were rated based on the level of evidence. There were 81 grade II articles included in the meta-analysis. Result Our meta-analysis revealed that 7503 patients were treated with hardware for CMF fractures in the 81 grade II articles. Hardware infection occurred in 510 (6.8%) of these patients. 
Of those infections, hardware removal occurred in 264 (51.8%) patients; hardware was left in place in 166 (32.6%) patients; and in 80 (15.6%) cases, there was no report as to hardware management. Finally, our review revealed that there were no reported differences in outcomes between groups. Conclusions Management of CMF hardware infections should be performed in a sequential and consistent manner to optimize outcome. An evidence-based algorithm for management of CMF hardware infections based on this critical review of the literature is presented and discussed. PMID:25393499

  14. Kuwait: Governance, Security, and U.S. Policy

    DTIC Science & Technology

    2016-05-04

    19 Performance on Countering Terrorism Financing / Islamic State...U.S. Policy Congressional Research Service 20 Performance on Countering Terrorism Financing / Islamic State Donations32 Some U.S.-Kuwait...terrorism financing . 39 Earlier, in June 2008, the Department of theTreasury froze the assets of a Kuwait-based charity—the Islamic Heritage Restoration

  15. Reliability and Qualification of Hardware to Enhance the Mission Assurance of JPL/NASA Projects

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni

    2010-01-01

    Packaging Qualification and Verification (PQV) and life testing of advanced electronic packaging, mechanical assemblies (motors/actuators), and interconnect technologies (flip-chip), platinum temperature thermometer attachment processes, and various other types of hardware for Mars Exploration Rover (MER)/Mars Science Laboratory (MSL), and JUNO flight projects was performed to enhance the mission assurance. The qualification of hardware under extreme cold to hot temperatures was performed with reference to various project requirements. The flight like packages, assemblies, test coupons, and subassemblies were selected for the study to survive three times the total number of expected temperature cycles resulting from all environmental and operational exposures occurring over the life of the flight hardware including all relevant manufacturing, ground operations, and mission phases. Qualification/life testing was performed by subjecting flight-like qualification hardware to the environmental temperature extremes and assessing any structural failures, mechanical failures or degradation in electrical performance due to either overstress or thermal cycle fatigue. Experimental flight qualification test results will be described in this presentation.

  16. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

    Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte that is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity is varying with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as a bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be realized by using a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical efforts, which can be reduced significantly with the help of analytical models for design optimization and guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
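The focusing principle in this record reduces to a balance condition: an analyte accumulates where its electrophoretic velocity exactly cancels the opposing bulk flow. The sketch below finds that balance point for a bilinear counter-flow profile; the profile shape and mobility values are illustrative assumptions, not the paper's model parameters.

```python
# Minimal model of counter-flow gradient focusing: an analyte collects
# where electrophoretic velocity cancels the opposing bulk flow. The
# bilinear profile and velocities below are illustrative.

def bulk_velocity(x, x_break=0.5, v0=-2.0, v_break=-1.0, v_end=0.0):
    """Bilinear counter-flow profile along a unit-length channel."""
    if x <= x_break:
        return v0 + (v_break - v0) * x / x_break
    return v_break + (v_end - v_break) * (x - x_break) / (1.0 - x_break)

def focus_position(v_ep, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection for the point where total velocity v_ep + bulk = 0."""
    total = lambda x: v_ep + bulk_velocity(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(lo) * total(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Two analytes with different electrophoretic velocities focus at
# different points along the gradient, enabling separation.
print(round(focus_position(1.5), 3), round(focus_position(0.5), 3))  # -> 0.25 0.75
```

Changing the break point and slopes of the bilinear profile moves these focal points apart or together, which is the lever the paper's model optimizes for resolution and peak capacity.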

  17. Performance of a Boron-Coated-Straw-Based HLNCC for International Safeguards Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simone, Angela T.; Croft, Stephen; McElroy, Robert Dennis

    3He gas has been used in various scientific and security applications for decades, but it is now in short supply. Alternatives to 3He detectors are currently being integrated and tested in neutron coincidence counter designs of a type widely used in nuclear safeguards for nuclear materials assay. A boron-coated-straw-based design, similar to the High-Level Neutron Coincidence Counter-II, was built by Proportional Technologies Inc. and has been tested by the Oak Ridge National Laboratory (ORNL) at both the JRC in Ispra and ORNL. Characterization measurements, along with nondestructive assays of various plutonium samples, have been conducted to determine the performance of this coincidence counter replacement in comparison with other similar counters. This paper presents results of these measurements.

  18. Results on the Performance of a Broad Band Focussing Cherenkov Counter

    DOE R&D Accomplishments Database

    Cester, R.; Fitch, V. L.; Montag, A.; Sherman, S.; Webb, R. C.; Witherell, M. S.

    1980-01-01

    The field of ring imaging (broad band differential) Cherenkov detectors has become a very active area of interest in detector development at several high energy physics laboratories. Our group has previously reported on a method of Cherenkov ring imaging for a counter with large momentum and angular acceptance using standard photomultipliers. Recently, we have applied this technique to the design of a set of Cherenkov counters for use in a particle search experiment at Fermi National Accelerator Laboratory (FNAL). This new detector operates over the velocity range 0.998 < β < 1.000 with a Δβ of approximately 2 x 10{sup -4}. The acceptance in angle is ±14 mrad in the horizontal and ±28 mrad in the vertical. We report here on the performance of this counter.
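The velocity sensitivity of such a counter follows from the standard Cherenkov relation cos(theta_c) = 1/(n*beta): near threshold, a tiny change in beta produces a measurable change in ring angle. The sketch below evaluates that relation; the refractive index used is an illustrative value, not the radiator of this experiment.

```python
# Cherenkov emission angle from the standard relation cos(theta) = 1/(n*beta).
# The refractive index n below is illustrative, not this counter's radiator.
import math

def cherenkov_angle(beta, n):
    """Cherenkov emission angle in radians; beta must exceed 1/n."""
    if n * beta <= 1.0:
        raise ValueError("below threshold: no Cherenkov light")
    return math.acos(1.0 / (n * beta))

n = 1.003  # hypothetical gas radiator covering the 0.998 < beta < 1.000 range
# Threshold velocity, and check that a faster particle gives a wider ring:
print(round(1.0 / n, 6), cherenkov_angle(1.000, n) > cherenkov_angle(0.998, n))  # -> 0.997009 True
```

Measuring the ring radius with photomultipliers thus amounts to measuring beta, which is how the counter achieves its quoted velocity resolution over a narrow range near beta = 1.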

  19. Proportional Counter Calibration and Analysis for 12C + p Resonance Scattering

    NASA Astrophysics Data System (ADS)

    Nelson, Austin; Rogachev, Grigory; Uberseder, Ethan; Hooker, Josh; Koshchiy, Yevgen

    2014-09-01

    Light exotic nuclei provide a unique opportunity to test the predictions of modern ab initio theoretical calculations near the drip line. In ab initio approaches, nuclear structure is described starting from bare nucleon-nucleon and three-nucleon interactions. Calculations are very heavy and can only be performed for the lightest nuclei (A < 16). Experimental information on the structure of light exotic nuclei is crucial to determine the validity of these calculations and to fix the parameters for the three-nucleon forces. Resonance scattering with rare isotope beams is a very effective tool to study spectroscopy of nuclei near the drip line. A new setup was developed at the Cyclotron Institute for effective resonance scattering measurements. The setup includes ionization chamber, silicon array, and an array of proportional counters. The proportional counter array, consisting of 8 anode wires arranged in a parallel cellular grid, is used for particle identification and to track the positioning of light recoils. The main objective of this project was to test the performance and perform position calibration of this proportional counter array. The test was done using 12C beam. The excitation function for 12C + p elastic scattering was measured and calibration of the proportional counter was performed using known resonances in 13N. The method of calibration, including solid angle calculations, normalization corrections, and position calibration will be presented. Funded by DOE and NSF-REU Program; Grant No. PHY-1263281.

  20. Low-speed wind-tunnel tests of single- and counter-rotation propellers

    NASA Technical Reports Server (NTRS)

    Dunham, D. M.; Gentry, G. L., Jr.; Coe, P. L., Jr.

    1986-01-01

    A low-speed (Mach 0 to 0.3) wind-tunnel investigation was conducted to determine the basic performance, force and moment characteristics, and flow-field velocities of single- and counter-rotation propellers. Compared with the eight-blade single-rotation propeller, a four- by four- (4 x 4) blade counter-rotation propeller with the same blade design produced substantially higher thrust coefficients for the same blade angles and advance ratios. The results further indicated that ingestion of the wake from a supporting pylon for a pusher configuration produced no significant change in the propeller thrust performance for either the single- or counter-rotation propellers. A two-component laser velocimeter (LV) system was used to make detailed measurements of the propeller flow fields. Results show increasing slipstream velocities with increasing blade angle and decreasing advance ratio. Flow-field measurements for the counter-rotation propeller show that the rear propeller turned the flow in the opposite direction from the front propeller and, therefore, could eliminate the swirl component of velocity, as would be expected.
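The "blade angles and advance ratios" compared in this record refer to the standard nondimensional propeller quantities: advance ratio J = V/(n*D) and thrust coefficient C_T = T/(rho*n^2*D^4). These definitions are standard aerodynamics; the numerical values below are illustrative, not data from the test.

```python
# Standard propeller performance definitions: advance ratio J = V/(n*D)
# and thrust coefficient C_T = T/(rho*n^2*D^4). Values are illustrative,
# not data from the wind-tunnel experiment.

def advance_ratio(v, n, d):
    """v: airspeed [m/s], n: revolutions per second, d: diameter [m]."""
    return v / (n * d)

def thrust_coefficient(thrust, rho, n, d):
    """thrust [N], rho: air density [kg/m^3], n [rev/s], d [m]."""
    return thrust / (rho * n**2 * d**4)

v, n, d, rho, thrust = 50.0, 40.0, 0.75, 1.225, 220.5
print(round(advance_ratio(v, n, d), 3),
      round(thrust_coefficient(thrust, rho, n, d), 3))  # -> 1.667 0.356
```

Plotting C_T against J for each blade angle is the conventional way to compare configurations such as the single-rotation and 4 x 4 counter-rotation propellers described above.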

  1. DESIGN AND PERFORMANCE CHARACTERISTICS OF A TURBULENT MIXING CONDENSATION NUCLEI COUNTER. (R826654)

    EPA Science Inventory

    The design and optimization of operation parameters of a Turbulent Mixing Condensation Nuclei Counter (TMCNC) are discussed as well as its performance using dibutylphthalate (DBP) as the working fluid. A detection limit of 3 nm has been achieved at a flow rate of 2.8 l min-1 ...

  2. Swarm Counter-Asymmetric-Threat (CAT) 6-DOF Dynamics Simulation

    DTIC Science & Technology

    2005-07-01

    NAWCWD TP 8593, Swarm Counter-Asymmetric-Threat (CAT) 6-DOF Dynamics Simulation, by James Bobinchak and Gary Hewer, Weapons and Energetics ... describes the mathematical models used in the swarm counter-asymmetric-threat (CAT) simulation and the results of extensive Monte Carlo simulations.

  3. DAQ for commissioning and calibration of a multichannel analyzer of scintillation counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tortorici, F.; Jones, M.; Bellini, V.

    We report the status of the Data Acquisition (DAQ) system for the Coordinate Detector (CDET) module of the Super Bigbite Spectrometer facility at Hall A of the Thomas Jefferson National Accelerator Facility. Presently, the DAQ is fully assembled and tested with one CDET module. The commissioning of the CDET module, which is the goal of the tests presented here, consists essentially of measuring the amplitude and time-over-threshold of signals from cosmic rays. Hardware checks and the development of DAQ control and off-line analysis software are ongoing; the module currently appears to work roughly according to expectations. Data presented in this note are still preliminary.

  4. C-9 and Other Microgravity Simulations

    NASA Technical Reports Server (NTRS)

    Hecht, Sharon (Editor); Reeves, Jacqueline M. (Editor); Spector, Elisabeth (Editor)

    2009-01-01

    This document represents a summary of medical and scientific evaluations conducted aboard the C-9 and other NASA-sponsored aircraft from June 2008 to June 2009. Included is a general overview of investigations manifested and coordinated by the Human Adaptation and Countermeasures Division. A collection of brief reports that describe tests conducted aboard the NASA-sponsored aircraft follows the overview. Principal investigators and test engineers contributed significantly to the content of the report, describing their particular experiment or hardware evaluation. Although this document follows general guidelines, each report format may vary to accommodate differences in experiment design and procedures. This document concludes with an appendix that provides background information concerning the Reduced Gravity Program.

  5. Design of coin sorter counter based on MCU

    NASA Astrophysics Data System (ADS)

    Yang, Yahan; Si, Xu

    2018-04-01

    With the spread of unmanned ticketing and vending machines, the circulation of coins has greatly increased; bus companies and the financial sector in particular must classify and count large numbers of coins every day, a huge workload. This design uses a microcontroller as the control center, combined with sensor technology and a corresponding mechanical structure, to separate, sort, and package coins while monitoring and displaying the type and number of coins in real time. This article details the system hardware and software design, and test adjustment shows that the system achieves the functions of separating and sorting coins and of monitoring and displaying coin type and quantity.

  6. Summary of materials and hardware performance on LDEF

    NASA Technical Reports Server (NTRS)

    Dursch, Harry; Pippin, Gary; Teichman, Lou

    1993-01-01

    A wide variety of materials and experiment support hardware was flown on the Long Duration Exposure Facility (LDEF). Postflight testing has determined the effects of almost 6 years of low-earth orbit (LEO) exposure on this hardware. An overview of the results is presented. Hardware discussed includes adhesives, fasteners, lubricants, data storage systems, solar cells, seals, and the LDEF structure. Lessons learned from the testing and analysis of LDEF hardware are also presented.

  7. Comparative Modal Analysis of Sieve Hardware Designs

    NASA Technical Reports Server (NTRS)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
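
    The Rayleigh-Ritz/classical-plate-theory comparison above can be illustrated with the textbook closed form for a simply supported rectangular Kirchhoff plate, for which the Rayleigh-Ritz result is exact. The material and geometry below are illustrative placeholders, not the CHIMRA sieve's actual properties:

```python
import math

def plate_modes(E, nu, rho, h, a, b, m_max=2, n_max=2):
    """Natural frequencies (Hz) of a simply supported rectangular plate
    under classical (Kirchhoff) plate theory; rho is density in kg/m^3."""
    D = E * h**3 / (12 * (1 - nu**2))   # flexural rigidity
    freqs = {}
    for m in range(1, m_max + 1):
        for n in range(1, n_max + 1):
            # omega_mn = pi^2 * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho*h))
            w = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))
            freqs[(m, n)] = w / (2 * math.pi)
    return freqs

# Illustrative 1 mm aluminum plate (NOT the flight or testbed sieve geometry)
f = plate_modes(E=70e9, nu=0.33, rho=2700.0, h=1e-3, a=0.2, b=0.15)
print(sorted(f.items()))
```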

  8. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    NASA Astrophysics Data System (ADS)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  9. Redox-stratification controlled biofilm (ReSCoBi) for completely autotrophic nitrogen removal: the effect of co- versus counter-diffusion on reactor performance.

    PubMed

    Terada, Akihiko; Lackner, Susanne; Tsuneda, Satoshi; Smets, Barth F

    2007-05-01

    A multi-population biofilm model for completely autotrophic nitrogen removal was developed and implemented in the simulation program AQUASIM to corroborate the concept of a redox-stratification controlled biofilm (ReSCoBi). The model considers both counter- and co-diffusion biofilm geometries. In the counter-diffusion biofilm, oxygen is supplied through a gas-permeable membrane that supports the biofilm while ammonia (NH4+) is supplied from the bulk liquid. In contrast, in the co-diffusion biofilm, both oxygen and NH4+ are supplied from the bulk liquid. Results of the model revealed a clear stratification of microbial activities in both biofilms, the resulting chemical profiles, and the pronounced effect of the relative surface loadings of oxygen and NH4+ (J(O2)/J(NH4+)) on reactor performance. Steady-state biofilm thickness had a significant but different effect on T-N removal for co- and counter-diffusion biofilms: the removal efficiency in the counter-diffusion geometry was superior to that of the co-diffusion counterpart within the range of 450-1,400 µm; however, the efficiency deteriorated with a further increase in biofilm thickness, probably because of diffusion limitation of NH4+. Under conditions of oxygen excess (J(O2)/J(NH4+) > 3.98), almost all NH4+ was consumed by aerobic ammonia oxidation in the co-diffusion biofilm, leading to poor performance, while in the counter-diffusion biofilm, T-N removal efficiency was maintained because of the physical location of anaerobic ammonium oxidizers near the bulk liquid. These results clearly reveal that counter-diffusion biofilms have a wider application range for autotrophic T-N removal than co-diffusion biofilms. (c) 2006 Wiley Periodicals, Inc.

  10. Scintillation Detectors of the CDF II Setup in Heavy Quark Physics Experiments at the Tevatron (in Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokheli, Davit

    2007-01-01

    The author presents the following: 1) development and creation from scratch of the scintillation detector system for the CDF II muon trigger, using more than 1140 scintillation counters of different types and sizes, together with development of the control and monitoring software/hardware systems; 2) development and creation of the upgraded preshower detector CPR II for the CDF II electromagnetic calorimeter, with segmentation in pseudorapidity 10 times finer than the previous version, to allow data collection at increased Tevatron luminosity; 3) an aging study of the scintillation counters used at CDF II and an estimate of their long-term efficiency; and 4) a study of the possibility of using the proposed new muon trigger in the 1.0 ≤ η ≤ 1.25 pseudorapidity region by adding layers of muon scintillation detectors.

  11. UNH Project SMART 2017: Space Science for High School Students

    NASA Astrophysics Data System (ADS)

    Smith, C. W.; Broad, L.; Goelzer, S.; Levergood, R.; Lugaz, N.; Moebius, E.

    2017-12-01

    Every summer for the past 26 years the University of New Hampshire (UNH) has run a month-long, residential outreach program for high school students considering careers in mathematics, science, or engineering. Space science is one of the modules. Students work directly with UNH faculty performing original work with real spacecraft data and hardware and present the results of that effort at the end of the program. This year the student research projects used data from the Messenger, STEREO, and Triana missions. In addition, the students build and fly a high-altitude balloon payload with instruments of their own construction. Students learn circuit design and construction, microcontroller programming, and core atmospheric and space science along with fundamental concepts in space physics and engineering. Our payload design has evolved significantly since the first flight of a simple rectangular box and now involves a stable descent vehicle that does not require a parachute. Our flight hardware includes an on-board flight control computer, in-flight autonomous control and data acquisition of multiple student-built instruments, and real-time camera images sent to ground. This year we developed, built, and flew a successful line cutter based on GPS location information that prevents our payload from falling into the ocean while also separating the payload from the balloon remains for a cleaner descent. We will describe that new line cutter design and implementation along with the shielded Geiger counters that we flew as part of our cosmic ray air shower experiment. This is a program that can be used as a model for other schools to follow and that high schools can initiate. More information can be found online.

  12. Direct synthesis of platelet graphitic-nanofibres as a highly porous counter-electrode in dye-sensitized solar cells.

    PubMed

    Hsieh, Chien-Kuo; Tsai, Ming-Chi; Yen, Ming-Yu; Su, Ching-Yuan; Chen, Kuei-Fu; Ma, Chen-Chi M; Chen, Fu-Rong; Tsai, Chuen-Horng

    2012-03-28

    We synthesized platelet graphitic-nanofibres (GNFs) directly onto FTO glass and applied this forest of platelet GNFs as a highly porous structural counter-electrode in dye-sensitized solar cells (DSSCs). We investigated the electrochemical properties of counter-electrodes made from the highly porous structural GNFs and the photoconversion performance of the cells made with these electrodes.

  13. Qualification Testing of Engineering Camera and Platinum Resistance Thermometer (PRT) Sensors for Mars Science Laboratory (MSL) Project under Extreme Temperatures to Assess Reliability and to Enhance Mission Assurance

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni; Maki, Justin N.; Cucullu, Gordon C.

    2008-01-01

    Package Qualification and Verification (PQV) of advanced electronic packaging and interconnect technologies and various other types of qualification hardware for the Mars Exploration Rover/Mars Science Laboratory flight projects has been performed to enhance the mission assurance. The qualification of hardware (Engineering Camera and Platinum Resistance Thermometer, PRT) under extreme cold temperatures has been performed with reference to various project requirements. The flight-like packages, sensors, and subassemblies have been selected for the study to survive three times (3x) the total number of expected temperature cycles resulting from all environmental and operational exposures occurring over the life of the flight hardware including all relevant manufacturing, ground operations and mission phases. Qualification has been performed by subjecting above flight-like qual hardware to the environmental temperature extremes and assessing any structural failures or degradation in electrical performance due to either overstress or thermal cycle fatigue. Experiments of flight like hardware qualification test results have been described in this paper.

  14. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G.; Salapura, Valentina

    2012-07-24

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each generating signals representing occurrences of events in that processor, and a single shared counter resource for performance monitoring. The performance monitoring unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and a plurality of input devices for receiving the event signals from one or more of the processor units, the input devices programmable to select event signals for receipt by one or more of the performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
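
    A toy software model of the claimed arrangement, programmable input devices (muxes) routing per-core event signals into a shared counter bank, might look like the following. The class and method names are illustrative, not the patent's actual hardware interface:

```python
# Toy model of a shared PMU: programmable input muxes route event signals
# from several cores to one shared bank of performance counters.
class SharedPMU:
    def __init__(self, n_counters):
        self.counters = [0] * n_counters
        self.select = {}  # counter index -> (core_id, event_name)

    def program(self, counter, core_id, event):
        """Program an input mux: `counter` will count `event` from `core_id`."""
        self.select[counter] = (core_id, event)

    def on_event(self, core_id, event):
        """A core raised an event signal; bump every counter muxed to it."""
        for idx, sel in self.select.items():
            if sel == (core_id, event):
                self.counters[idx] += 1

pmu = SharedPMU(n_counters=4)
pmu.program(0, core_id=0, event="L2_miss")
pmu.program(1, core_id=1, event="L2_miss")
for core in (0, 0, 1):
    pmu.on_event(core, "L2_miss")
print(pmu.counters)  # counter 0 saw two events, counter 1 saw one
```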

  15. Semiannual Technical Summary, 1 April-30 September 1993

    DTIC Science & Technology

    1993-12-01

    Hardware failure  11 Jul 2200 - 12 Jul 0531
    Hardware failure  12 Jul 0744 - 1307
    Hardware service  10 Aug 0821 - 1514
    Line failure      29 Aug 1000 - 30 Aug 1211
    Line failure      08 Sep 1518 - 09 Sep 0428
    Line failure      10 Sep 0821 - 1030
    Hardware failure  18 Sep 0817 - ...
    ... repair. Between 8 September 1306 hrs and 9 September 0428 hrs all communications systems were affected (13.5 hrs). Reduced 01B performance started 10 ...

  16. The response of a scintillation counter below an emulsion chamber to heavy nucleus interactions in the chamber

    NASA Technical Reports Server (NTRS)

    Burnett, T. H.; Dake, S.; Derrickson, J. H.; Fountain, W. F.; Fuki, M.; Gregory, J. C.; Hayashi, T.; Hayashi, T.; Holynski, R.; Iwai, J.; hide

    1985-01-01

    In 1982 a hybrid electronic counter-emulsion chamber experiment was flown on a balloon to study heavy nucleus interactions in the 20 to approximately 100 GeV/AMU energy range. A gas Cerenkov counter, two solid Cerenkov counters, and a proportional counter hodoscope gave the primary energy, the primary charge, and the trajectory of the particles, respectively. Using the trajectory information, cosmic ray nuclei of Z ≥ 10 were found reliably and efficiently, and interaction characteristics of the Fe group nuclei were measured in the chamber. A plastic scintillator below the emulsion chamber responded to showers resulting from interactions in the chamber and to noninteracting nuclei. Data on the response of the counter have been compared with simulations of hadronic-electromagnetic cascades to derive the average neutral energy fraction released by the heavy interactions, and to predict the performance of this kind of counter at higher energies. For the interacting events with the highest produced-particle multiplicity, comparisons between various simulations and the shower counter signal have been made.

  17. Systems Performance Laboratory | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Small Commercial Power Hardware in the Loop: the small commercial power-hardware-in-the-loop (PHIL) test bay is dedicated to small-scale power-hardware-in-the-loop studies of inverters and other equipment. Multi-Inverter Power Hardware in the Loop: the multi-inverter test bay is dedicated to ...

  18. EVA Training and Development Facilities

    NASA Technical Reports Server (NTRS)

    Cupples, Scott

    2016-01-01

    Overview: The vast majority of US EVA (ExtraVehicular Activity) training and EVA hardware development occurs at JSC. EVA training facilities are used to develop and refine procedures and improve skills; EVA hardware development facilities test hardware to evaluate performance and certify requirement compliance; environmental chambers enable testing of hardware, from items as large as suits to as small as individual components, in thermal vacuum conditions.

  19. Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis on Over 10,000 Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Rice, Mark J.

    Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of contingency analysis are used to ensure grid reliability and, in power market operation, for the feasibility test of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which might result in overlooking critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3964 with 4096 cores and 7877 with 10,240 cores were obtained. This paper reports the performance of the load balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
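
    The single-counter scheme evaluated in the paper can be sketched in miniature: workers repeatedly grab the next case index from one shared counter, so faster workers naturally process more cases. The workload below is a trivial stand-in for an actual N-2 contingency solve:

```python
# Single-counter dynamic load balancing: a shared, lock-protected counter
# hands out case indices to whichever worker asks next.
import threading

def run_contingencies(n_cases, n_workers, analyze):
    counter = [0]
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                  # atomically claim the next case index
                case = counter[0]
                if case >= n_cases:
                    return
                counter[0] += 1
            results[case] = analyze(case)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Trivial stand-in for a contingency solve
out = run_contingencies(100, n_workers=8, analyze=lambda c: c * c)
print(out[10])  # 100
```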

  20. Exercise Countermeasure Hardware Evolution on ISS: The First Decade.

    PubMed

    Korth, Deborah W

    2015-12-01

    The hardware systems necessary to support exercise countermeasures to the deconditioning associated with microgravity exposure have evolved and improved significantly during the first decade of the International Space Station (ISS), resulting in both new types of hardware and enhanced performance capabilities for initial hardware items. The original suite of countermeasure hardware supported the first crews to arrive on the ISS and the improved countermeasure system delivered in later missions continues to serve the astronauts today with increased efficacy. Due to aggressive hardware development schedules and constrained budgets, the initial approach was to identify existing spaceflight-certified exercise countermeasure equipment, when available, and modify it for use on the ISS. Program management encouraged the use of commercial-off-the-shelf (COTS) hardware, or hardware previously developed (heritage hardware) for the Space Shuttle Program. However, in many cases the resultant hardware did not meet the additional requirements necessary to support crew health maintenance during long-duration missions (3 to 12 mo) and anticipated future utilization activities in support of biomedical research. Hardware development was further complicated by performance requirements that were not fully defined at the outset and tended to evolve over the course of design and fabrication. Modifications, ranging from simple to extensive, were necessary to meet these evolving requirements in each case where heritage hardware was proposed. Heritage hardware was anticipated to be inherently reliable without the need for extensive ground testing, due to its prior positive history during operational spaceflight utilization. As a result, developmental budgets were typically insufficient and schedules were too constrained to permit long-term evaluation of dedicated ground-test units ("fleet leader" type testing) to identify reliability issues when applied to long-duration use. 
In most cases, the exercise unit with the most operational history was the unit installed on the ISS.

  1. MSFC Skylab corollary experiment systems mission evaluation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Evaluations are presented of the performances of corollary experiment hardware developed by the George C. Marshall Space Flight Center and operated during the three manned Skylab missions. Also presented are assessments of the functional adequacy of the experiment hardware and its supporting systems, and indications are given as to the degrees by which experiment constraints and interfaces were met. It is shown that most of the corollary experiment hardware performed satisfactorily and within design specifications.

  2. Institute for Sustained Performance, Energy, and Resilience (SuPER)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jagode, Heike; Bosilca, George; Danalis, Anthony

    The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical.
    Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures, such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long-term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications that take advantage of specific mathematical properties for data redundancy, to extensions of widely used programming paradigms that empower application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies and to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and the applications running on them will need to be optimized to run under power caps and/or to minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies. Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors.
    Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.
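
    PAPI's C API follows a create-event-set, start, stop-and-read discipline. The following pure-Python mock illustrates only that workflow; it is not a real PAPI binding, and the "counts" it returns are faked from wall-clock time so the sketch stays self-contained:

```python
# Mock of the PAPI usage pattern: build an event set, start it around an
# instrumented region, then stop and read. Names mirror PAPI's flow but
# are illustrative, not actual PAPI bindings.
import time

class EventSet:
    def __init__(self, events):
        self.events = events
        self._start = None

    def start(self):
        self._start = time.perf_counter_ns()

    def stop(self):
        elapsed = time.perf_counter_ns() - self._start
        # A real PMU would report per-event hardware counts; we fake them
        # from elapsed nanoseconds for illustration only.
        return {e: elapsed for e in self.events}

es = EventSet(["PAPI_TOT_CYC", "PAPI_TOT_INS"])
es.start()
total = sum(i * i for i in range(100_000))   # instrumented region
counts = es.stop()
print(sorted(counts))  # ['PAPI_TOT_CYC', 'PAPI_TOT_INS']
```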

  3. Dynamic Monitoring of Cleanroom Fallout Using an Air Particle Counter

    NASA Technical Reports Server (NTRS)

    Perry, Radford

    2011-01-01

    The particle fallout limitations and periodic allocations for the James Webb Space Telescope are very stringent. Standard prediction methods are complicated by non-linearity and monitoring methods that are insufficiently responsive. A method for dynamically predicting the particle fallout in a cleanroom using air particle counter data was determined by numerical correlation. This method provides a simple linear correlation to both time and air quality, which can be monitored in real time. The summation of effects provides the program better understanding of the cleanliness and assists in the planning of future activities. Definition of fallout rates within a cleanroom during assembly and integration of contamination-sensitive hardware, such as the James Webb Space Telescope, is essential for budgeting purposes. Balancing the activity levels for assembly and test with the particle accumulation rate is paramount. The current approach to predicting particle fallout in a cleanroom assumes a constant air quality based on the rated class of a cleanroom, with adjustments for projected work or exposure times. Actual cleanroom class can also depend on the number of personnel present and the type of activities. A linear correlation of air quality and normalized particle fallout was determined numerically. An air particle counter (standard cleanroom equipment) can be used to monitor the air quality on a real-time basis and determine the "class" of the cleanroom (per FED-STD-209 or ISO-14644). The correlation function provides an area coverage coefficient per class-hour of exposure. The prediction of particle accumulations provides scheduling inputs for activity levels and cleanroom class requirements.
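
    The correlation described above, an area-coverage coefficient per class-hour of exposure, reduces to a simple sum over a real-time activity log. The coefficient and the log entries in this sketch are invented placeholders, not the paper's fitted values:

```python
# Fallout prediction as a linear function of monitored air quality (cleanroom
# class) integrated over exposure time.
def predicted_fallout(class_hours_log, coeff_per_class_hour):
    """Sum area-coverage contributions over a log of (iso_class, hours) entries."""
    return sum(coeff_per_class_hour * iso_class * hours
               for iso_class, hours in class_hours_log)

# Two work shifts at ISO 7 and a quiet night shift at ISO 6 (illustrative)
log = [(7, 8.0), (7, 8.0), (6, 8.0)]
coverage = predicted_fallout(log, coeff_per_class_hour=0.05)
print(round(coverage, 2))
```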

  4. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.

    PubMed

    Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut

    2015-10-05

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement is equally reflected in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
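
    The lifetime-cost argument is easy to check with back-of-the-envelope arithmetic; every input below (node price, power draw, PUE, electricity price, lifetime) is an illustrative assumption, not a benchmark result from the paper:

```python
# Compare a node's purchase price with its lifetime power-and-cooling bill.
def lifetime_cost(hw_price, draw_kw, pue, eur_per_kwh, years):
    # PUE (power usage effectiveness) folds in the cooling overhead
    energy_kwh = draw_kw * pue * 24 * 365 * years
    return hw_price, energy_kwh * eur_per_kwh

hw, power = lifetime_cost(hw_price=3500.0, draw_kw=0.5, pue=1.5,
                          eur_per_kwh=0.25, years=5)
print(power > hw)  # with these assumptions the energy bill exceeds the node price
```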

  5. SILAR deposition of nickel sulfide counter electrode for application in quantum dot sensitized solar cell

    NASA Astrophysics Data System (ADS)

    Singh, Navjot; Siwatch, Poonam; Arora, Anmol; Sharma, Jadab; Tripathi, S. K.

    2018-05-01

    Quantum dot sensitized solar cells (QDSSCs) are a likely replacement for silicon-based solar cells. Counter electrodes are a fundamental aspect of QDSSC performance. NiS, being a less expensive material, is a decent choice for the purpose. In this paper, we discuss the synthesis of NiS by Successive Ionic Layer Adsorption and Reaction (SILAR). Optical, crystallographic, and electrical studies are presented. Electrical studies of the device with a NiS counter electrode are compared with the characteristics of a device with CNTs as the counter electrode. The SILAR method is easier and less time-consuming than chemical bath deposition or other methods. Results show the success of NiS synthesized by the SILAR method as a counter electrode.

  6. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Media processors using a new microsystem architecture designed for the Internet era

    NASA Astrophysics Data System (ADS)

    Wyland, David C.

    1999-12-01

    The demands of digital image processing, communications and multimedia applications are growing more rapidly than traditional design methods can fulfill them. Previously, only custom hardware designs could provide the performance required to meet the demands of these applications. However, hardware design has reached a crisis point. Hardware design can no longer deliver a product with the required performance and cost in a reasonable time for a reasonable risk. Software based designs running on conventional processors can deliver working designs in a reasonable time and with low risk but cannot meet the performance requirements. What is needed is a media processing approach that combines very high performance, a simple programming model, complete programmability, short time to market and scalability. The Universal Micro System (UMS) is a solution to these problems. The UMS is a completely programmable (including I/O) system on a chip that combines hardware performance with the fast time to market, low cost and low risk of software designs.

  8. 50 CFR 660.15 - Equipment requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... receivers, computer hardware for electronic fish ticket software and computer hardware for electronic logbook software. (b) Performance and technical requirements for scales used to weigh catch at sea... ticket software provided by Pacific States Marine Fish Commission are required to meet the hardware and...

  9. Counter traction makes endoscopic submucosal dissection easier.

    PubMed

    Oyama, Tsuneo

    2012-11-01

    Poor counter traction and a poor field of vision make endoscopic submucosal dissection (ESD) difficult. Good counter traction allows dissections to be performed more quickly and safely. Position change, which utilizes gravity, is the simplest method of creating a clear field of vision and is especially useful for esophageal and colon ESD. The next easiest method is the clip-with-line method: counter traction created by a clip with line provides both a clear field of vision and suitable counter traction, making ESD more efficient and safe. The author published this method in 2002, before the name ESD was established; it was then called cutting endoscopic mucosal resection (EMR) or EMR with hook knife. Other traction methods, such as external grasping forceps, internal traction, the double-channel scope, and the double-scope method, are also introduced in this paper. A good strategy for creating counter traction makes ESD easier.

  10. Value of PCR in sonication fluid for the diagnosis of orthopedic hardware-associated infections: Has the molecular era arrived?

    PubMed

    Renz, Nora; Cabric, Sabrina; Morgenstern, Christian; Schuetz, Michael A; Trampuz, Andrej

    2018-04-01

    Bone healing disturbance following fracture fixation represents a continuing challenge. We evaluated a novel, fully automated polymerase chain reaction (PCR) assay using sonication fluid from retrieved orthopedic hardware to diagnose infection. In this prospective diagnostic cohort study, explanted orthopedic hardware from consecutive patients was investigated by sonication, and the resulting sonication fluid was analyzed by culture (standard procedure) and multiplex PCR (investigational procedure). Hardware-associated infection was defined as visible purulence, presence of a sinus tract, implant on view, inflammation in peri-implant tissue, or positive culture. McNemar's chi-squared test was used to compare the performance of the diagnostic tests. For clinical performance all pathogens were considered, whereas for analytical performance only microorganisms for which primers are included in the PCR assay were considered. Among 51 patients, hardware-associated infection was diagnosed in 38 cases (75%) and non-infectious causes in 13 patients (25%). The sensitivity for diagnosing infection was 66% for peri-implant tissue culture, 84% for sonication fluid culture, and 71% (clinical performance) and 77% (analytical performance) for sonication fluid PCR; the specificity of all tests was >90%. The analytical sensitivity of PCR was higher for gram-negative bacilli (100%), coagulase-negative staphylococci (89%) and Staphylococcus aureus (75%) than for Cutibacterium (formerly Propionibacterium) acnes (57%), enterococci (50%) and Candida spp. (25%). The performance of sonication fluid PCR for the diagnosis of orthopedic hardware-associated infection was comparable to that of culture tests. Additional advantages of PCR were its short processing time (<5 h) and fully automated procedure. With further improvement of its performance, PCR has the potential to complement conventional cultures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)

    NASA Technical Reports Server (NTRS)

    Niewoehner, Kevin R.; Carter, John (Technical Monitor)

    2001-01-01

    The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.

  12. Intrinsic Hardware Evolution for the Design and Reconfiguration of Analog Speed Controllers for a DC Motor

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Ferguson, Michael I.

    2003-01-01

    Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a second generation Field Programmable Transistor Array (FPTA2). The performance of an evolved controller is compared to that of a conventional proportional-integral (PI) controller. It is shown that hardware evolution is able to create a compact design that provides good performance while using considerably fewer functional electronic components than the conventional design. Additionally, the use of hardware evolution to provide fault tolerance by reconfiguring the design is explored. Experimental results are presented showing that significant recovery of capability can be made in the face of damaging induced faults.

  13. Using a Personal Device to Strengthen Password Authentication from an Untrusted Computer

    NASA Astrophysics Data System (ADS)

    Mannan, Mohammad; van Oorschot, P. C.

    Keylogging and phishing attacks can extract user identity and sensitive account information for unauthorized access to users' financial accounts. Most existing or proposed solutions are vulnerable to session hijacking attacks. We propose a simple approach to counter these attacks, which cryptographically separates a user's long-term secret input from (typically untrusted) client PCs; a client PC performs most computations but has access only to temporary secrets. The user's long-term secret (typically short and low-entropy) is input through an independent personal trusted device such as a cellphone. The personal device provides a user's long-term secrets to a client PC only after encrypting the secrets using a pre-installed, "correct" public key of a remote service (the intended recipient of the secrets). The proposed protocol (MP-Auth) realizes such an approach, and is intended to safeguard passwords from keyloggers, other malware (including rootkits), phishing attacks and pharming, as well as to provide transaction security to foil session hijacking. We report on a prototype implementation of MP-Auth, and provide a comparison of web authentication techniques that use an additional factor of authentication (e.g. a cellphone, PDA or hardware token).

  14. CRionScan: A stand-alone real time controller designed to perform ion beam imaging, dose controlled irradiation and proton beam writing

    NASA Astrophysics Data System (ADS)

    Daudin, L.; Barberet, Ph.; Serani, L.; Moretto, Ph.

    2013-07-01

    High resolution ion microbeams, usually used to perform elemental mapping, low dose targeted irradiation, or ion beam lithography, need a very flexible beam control system. For this purpose, we have developed a dedicated system, called "CRionScan", on the AIFIRA facility (Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine). It is a stand-alone real-time scanning and imaging instrument based on a Compact Reconfigurable Input/Output (Compact RIO) device from National Instruments™, comprising a real-time controller, a Field Programmable Gate Array (FPGA), input/output modules, and Ethernet connectivity. We have implemented a fast and deterministic beam scanning system interfaced with our commercial data acquisition system without any hardware development. CRionScan is built under LabVIEW™ and has been used on AIFIRA's nanobeam line since 2009 (Barberet et al., 2009, 2011) [1,2]. A Graphical User Interface (GUI), embedded in the Compact RIO as a web page, is used to control the scanning parameters. In addition, a fast electrostatic beam blanking trigger has been included in the FPGA, and high speed counters (15 MHz) have been implemented to perform dose controlled irradiation and display on-line images on the GUI. Analog-to-digital converters are used for beam current measurement and, in the near future, for secondary electron imaging. Other functionalities have been integrated in this controller, such as LED lighting using pulse width modulation and a "NIM Wilkinson ADC" data acquisition.
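    As a rough illustration of the dose-controlled irradiation described above, the sketch below polls an ion counter and re-blanks the beam once a target count is reached. The functions read_counter and set_blanker are hypothetical stand-ins for the FPGA counter and the electrostatic blanking trigger; in the real instrument this logic runs in deterministic FPGA fabric, not in Python.

```python
def irradiate_point(read_counter, set_blanker, target_counts):
    """Unblank the beam, poll the ion counter, and re-blank once the
    requested dose (in detected ions) has been delivered."""
    set_blanker(False)              # open the beam
    delivered = read_counter()
    while delivered < target_counts:
        delivered = read_counter()  # hardware counter; 15 MHz in CRionScan
    set_blanker(True)               # blank the beam again
    return delivered
```

    In hardware the counter comparison and the blanking trigger sit in the same FPGA clock domain, so the dose overshoot is bounded by a single counting interval rather than by software polling latency.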

  15. Edge Stability and Performance of the ELM-Free Quiescent H-Mode and the Quiescent Double Barrier Mode on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, W P; Burrell, K H; Casper, T A

    2004-12-03

    The quiescent H (QH) mode, an edge localized mode (ELM)-free, high-confinement mode, combines well with an internal transport barrier to form quiescent double barrier (QDB) stationary state, high performance plasmas. The QH-mode edge pedestal pressure is similar to that seen in ELMing phases of the same discharge, with similar global energy confinement. The pedestal density in early ELMing phases of strongly pumped counter injection discharges drops and a transition to QH-mode occurs, leading to lower calculated edge bootstrap current. Plasma current ramp experiments and ELITE code modeling of edge stability suggest that QH-modes lie near an edge current stability boundary. At high triangularity, QH-mode discharges operate at higher pedestal density and pressure, and have achieved ITER-level values of β_PED and ν*. The QDB achieves performance of β_N H_89 ≈ 7 in quasi-stationary conditions for a duration of 10 τ_E, limited by hardware. Recently we demonstrated stationary state QDB discharges with little change in kinetic and q profiles (q_0 > 1) for 2 s, comparable to ELMing "hybrid scenarios", yet without the debilitating effects of ELMs. Plasma profile control tools, including electron cyclotron heating and current drive and neutral beam heating, have been demonstrated to simultaneously control the q profile development, density peaking, impurity accumulation, and plasma beta.

  16. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    PubMed

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.

  17. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation

    PubMed Central

    Eastman, Peter; Friedrichs, Mark S.; Chodera, John D.; Radmer, Randall J.; Bruns, Christopher M.; Ku, Joy P.; Beauchamp, Kyle A.; Lane, Thomas J.; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R.; Pande, Vijay S.

    2012-01-01

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added. PMID:23316124

  18. Development of Generic Aircrew Measures of Performance for Distributed Mission Training

    DTIC Science & Technology

    2003-03-31

    Collecting useful data on human performance presents considerable challenges. Moreover, no two distributed mission training trials will be exactly... weapons. Countered bandits' offensive. Very sound energy management and aircraft manoeuvering. Quickly capitalized on offensive position... Effectively countered bandits' offensive to a neutral position. Ideal aircraft energy management and manoeuvering. Expeditiously capitalized on

  19. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

    A high fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. Development proceeds in an iterative loop: refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO)-based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, along with a case study highlighting the tool's effectiveness.
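    The parameter-tuning idea can be illustrated with a minimal particle swarm optimizer: particles explore the parameter space and are pulled toward their own and the swarm's best match against reference data. This is a generic textbook PSO sketch, not the Morpheus tool; the toy "model" and reference data are invented for illustration, and the inertia/attraction coefficients are common default choices.

```python
import random

random.seed(0)  # deterministic for the illustration

def pso(cost, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost(params) over a box; returns (best_params, best_cost)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy example: recover two hypothetical model parameters from reference data.
true_params = [2.0, -1.0]
def model(p, t):
    return p[0] * t + p[1] * t * t
data = [(t, model(true_params, t)) for t in [0.1 * k for k in range(10)]]
def cost(p):
    return sum((model(p, t) - y) ** 2 for t, y in data)
best, err = pso(cost, dim=2, bounds=(-5.0, 5.0))
```

    The cost function is the squared mismatch between simulated output and reference data, which is why scale and discontinuities matter so little: PSO only ever compares cost values, never gradients.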

  20. Distributed Control Architecture for Gas Turbine Engine. Chapter 4

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Garg, Sanjay

    2009-01-01

    The transformation of engine control systems from centralized to distributed architecture is both necessary and enabling for future aeropropulsion applications. The continued growth of adaptive control applications and the trend toward smaller, lightweight cores exert a counter-influence on the weight and volume of control system hardware. A distributed engine control system using high temperature electronics and open systems communications will reverse the growing trend in the ratio of control system weight to total engine weight and will also be a major factor in decreasing the overall cost of ownership of aeropropulsion systems. The implementation of distributed engine control is not without significant challenges: the need for high temperature electronics, the development of simple, robust communications, and power supply for the on-board electronics.

  1. Ethernet based data logger for gaseous detectors

    NASA Astrophysics Data System (ADS)

    Swain, S.; Sahu, P. K.; Sahu, S. K.

    2018-05-01

    A data logger has been designed to monitor and record ambient parameters such as temperature, pressure, and relative humidity, along with gas flow rate, as a function of time. These parameters are required for understanding the characteristics of gas-filled detectors such as the Gas Electron Multiplier (GEM) and the Multi-Wire Proportional Counter (MWPC). The data logger uses several microcontrollers and has been interfaced to an Ethernet port, with a local LCD unit for displaying all measured parameters. In this article, we present the design of the data logger, the hardware and software of the master microcontroller and the DAQ system, and the LabVIEW client program used as the interface. We have operated this device with a GEM detector and present a few preliminary results as a function of the above parameters.

  2. Data collection system for a wide range of gas-discharge proportional neutron counters

    NASA Astrophysics Data System (ADS)

    Oskomov, V.; Sedov, A.; Saduyev, N.; Kalikulov, O.; Kenzhina, I.; Tautaev, E.; Mukhamejanov, Y.; Dyachkov, V.; Utey, Sh

    2017-12-01

    This article describes the development and creation of a universal data collection system for measuring the intensity of pulsed signals. Following careful analysis of the timing and operating conditions of the software and hardware complex, circuit solutions were selected that meet the required specifications: the frequency response is optimized to obtain the maximum signal-to-noise ratio; methods and operating modes of the microcontroller were worked out to continuously measure the signal amplitude at the amplifier output and send the data to a computer; and control of the high-voltage source was implemented. A preliminary program, which works on a particular algorithm, has been developed for the microcontroller in its simplest form.

  3. Hardware and software status of QCDOC

    NASA Astrophysics Data System (ADS)

    Boyle, P. A.; Chen, D.; Christ, N. H.; Clark, M.; Cohen, S. D.; Cristian, C.; Dong, Z.; Gara, A.; Joó, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R. D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-03-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several tens of thousands of nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enables QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained on real hardware as well as in simulation.

  4. Performance Qualification Test of the ISS Water Processor Assembly (WPA) Expendables

    NASA Technical Reports Server (NTRS)

    Carter, Layne; Tabb, David; Tatara, James D.; Mason, Richard K.

    2005-01-01

    The Water Processor Assembly (WPA) for use on the International Space Station (ISS) includes various technologies for the treatment of waste water: filtration, ion exchange, adsorption, catalytic oxidation, and iodination. The WPA hardware implementing portions of these technologies, including the Particulate Filter, Multifiltration Bed, Ion Exchange Bed, and Microbial Check Valve, was recently qualified for chemical performance at the Marshall Space Flight Center. Waste water representing the quality of that produced on the ISS was generated by test subjects and processed by the WPA. Water quality analyses and instrumentation data were acquired throughout the test to monitor hardware performance. This paper documents the operation of the test and the assessment of hardware performance.

  5. Study and Fabrication of Super Low-Cost Solar Cell (SLC-SC) Based on Counter Electrode from Animal’s Bone

    NASA Astrophysics Data System (ADS)

    Fadlilah, D. R.; Fajar, M. N.; Aini, A. N.; Haqqiqi, R. I.; Wirawan, P. R.; Endarko

    2018-04-01

    Carbon synthesized from the bones of chicken, cow, and fish, calcined at 450 and 600 °C, has been successfully used to fabricate counter electrodes for a Super Low-Cost Solar Cell (SLC-SC) based on the structure of dye-sensitized solar cells (DSSCs). The main aims of this study were to fabricate the SLC-SC and to investigate the influence of the bone-derived carbon counter electrode on its photovoltaic performance. X-ray diffraction and UV-Vis spectroscopy were used to characterize the phase and optical properties of the TiO2 photoanode, while the morphology and particle size distribution of the synthesized carbon in the counter electrodes were investigated by scanning electron microscopy (SEM) and a particle size analyzer (PSA). The results showed that the TiO2 has an anatase phase with absorption at wavelengths of 300 to 550 nm. The calcination temperature affects the morphology and particle size distribution of the carbon: increasing the temperature yields a denser morphology and a larger particle size in the counter electrode. These changes in turn affect device performance, with denser morphology and larger particle size decreasing the performance of the SLC-SC.

  6. FPGA Based Reconfigurable ATM Switch Test Bed

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Jones, Robert E.

    1998-01-01

    Various issues associated with the "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: advances in FPGAs make hardware emulation feasible for performance evaluation; hardware emulation can provide several orders of magnitude speed-up over software simulation; and, due to the complexity of the hardware synthesis process, development for emulation is much more difficult than for simulation and requires knowledge of both networks and digital design.

  7. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
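    The counter-based scheme can be sketched as follows: instead of statically assigning each worker an equal share of contingency cases, a shared counter hands out the next case index to whichever worker finishes first. This is a generic illustration of the idea using Python threads, not the paper's implementation; `analyze` stands in for a single contingency-case power flow.

```python
import itertools
import threading

def run_contingencies(cases, analyze, n_workers=4):
    """Counter-based dynamic load balancing: workers draw the next
    contingency case from a shared counter until all cases are done."""
    counter = itertools.count()          # shared task counter
    lock = threading.Lock()              # guards counter increments
    results = [None] * len(cases)

    def worker():
        while True:
            with lock:
                i = next(counter)        # atomically claim the next case
            if i >= len(cases):
                return
            results[i] = analyze(cases[i])  # uneven runtimes balance out

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    Because workers pull work on demand, a few unusually expensive contingencies no longer stall a statically assigned partition; the price is the (small) synchronization cost of the shared counter.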

  8. Shielding concepts for low-background proportional counter arrays in surface laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aalseth, Craig E.; Humble, Paul H.; Mace, Emily K.

    2016-02-01

    Development of ultra low background gas proportional counters has made the contribution from naturally occurring radioactive isotopes, primarily activity in the uranium and thorium decay chains, inconsequential to instrumental sensitivity levels when measurements are performed in above ground surface laboratories. Simple lead shielding is enough to mitigate gamma rays, as gas proportional counters are already relatively insensitive to naturally occurring gamma radiation. The dominant background in these surface laboratory measurements using ultra low background gas proportional counters is due to cosmic ray generated muons, neutrons, and protons. Studies of measurements with ultra low background gas proportional counters in surface and underground laboratories, as well as radiation transport Monte Carlo simulations, suggest a preferred conceptual design to achieve the highest possible sensitivity from an array of low background gas proportional counters operated in a surface laboratory. The basis for a low background gas proportional counter array and the preferred shielding configuration are reported, especially in relation to measurements of radioactive gases having low energy decays, such as 37Ar.

  9. Safe to Fly: Certifying COTS Hardware for Spaceflight

    NASA Technical Reports Server (NTRS)

    Fichuk, Jessica L.

    2011-01-01

    Providing hardware for astronauts to use on board the Space Shuttle or International Space Station (ISS) involves a certification process that entails evaluating hardware safety, weighing risks, providing mitigation, and verifying requirements. Upon completion of this certification process, the hardware is deemed safe to fly. The process can be completed in as little as one week or can take several years, depending on the complexity of the hardware and whether the item is a unique custom design. One area of cost and schedule savings that NASA exploits is buying Commercial Off the Shelf (COTS) hardware and certifying it for human spaceflight as safe to fly. By utilizing commercial hardware, NASA saves the time of developing, designing, and building the hardware from scratch, as well as time in the certification process: for COTS hardware, the detailed certification process can be simplified, which results in schedule savings. Cost savings is another important benefit of flying COTS hardware, since procuring COTS hardware for space use can be more economical than custom building it. This paper investigates the cost savings associated with certifying COTS hardware to NASA's standards rather than performing a custom build.

  10. Standard Hardware Acquisition and Reliability Program's (SHARP's) efforts in incorporating fiber optic interconnects into standard electronic module (SEM) connectors

    NASA Astrophysics Data System (ADS)

    Riggs, William R.

    1994-05-01

    SHARP is a Navy-wide logistics technology development effort aimed at reducing the acquisition costs, support costs, and risks of military electronic weapon systems while increasing the performance capability, reliability, maintainability, and readiness of these systems. Lower life cycle costs for electronic hardware are achieved through technology transition, standardization, and reliability enhancement to improve system affordability and availability as well as to enhance fleet modernization. Advanced technology is transferred into the fleet through hardware specifications for weapon system building blocks: standard electronic modules, standard power systems, and standard electronic systems. The product lines are all defined with respect to their size, weight, I/O, environmental performance, and operational performance. This method of defining the standard is very conducive to inserting new technologies into systems using the standard hardware, and it is the approach taken thus far in inserting photonic technologies into SHARP hardware. All of the efforts have been related to module packaging, i.e., interconnects, component packaging, and module developments. Fiber optic interconnects are discussed in this paper.

  11. Transistor Level Circuit Experiments using Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Keymeulen, D.; Ferguson, M. I.; Daud, Taher; Thakoor, A.

    2005-01-01

    The Jet Propulsion Laboratory (JPL) performs research in fault-tolerant, long-life, and space-survivable electronics for the National Aeronautics and Space Administration (NASA). With that focus, JPL has been involved in Evolvable Hardware (EHW) technology research for the past several years. We have advanced the technology not only through simulation and evolution experiments, but also by designing, fabricating, and evolving a variety of transistor-based analog and digital circuits at the chip level. EHW refers to self-configuration of electronic hardware by evolutionary/genetic search mechanisms, thereby maintaining existing functionality in the presence of degradation due to aging, temperature, and radiation. In addition, EHW has the capability to reconfigure itself for new functionality when required by mission changes or encountered opportunities. Evolution experiments are performed using a genetic algorithm running on a DSP as the reconfiguration mechanism, controlling the evolvable hardware mounted on a self-contained circuit board. Rapid reconfiguration allows convergence to circuit solutions on the order of seconds. The paper illustrates hardware evolution results for electronic circuits and their ability to perform at 230 °C as well as under radiation doses of up to 250 krad.

  12. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementation of the Smith-Waterman algorithm on each hardware architecture, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution, and up to a 2.58-fold performance gain when compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
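
    For context, the core of the Smith-Waterman local-alignment recurrence that such implementations accelerate can be sketched in a few lines. This is a minimal, illustrative version with a linear gap penalty; the scoring parameters are assumptions for illustration, not those of HHeterSW or any cited tool:

    ```python
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Minimal Smith-Waterman local-alignment score (linear gap penalty)."""
        # H[i][j] holds the best local alignment score ending at a[i-1], b[j-1]
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                # Local alignment: scores are clamped at zero
                H[i][j] = max(0, H[i - 1][j - 1] + s,
                              H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best
    ```

    Because each database sequence is scored independently and only the maxima matter, splitting the database across devices and merging by taking the best scores per part preserves the overall result, which is what makes the divide-and-unify strategy described above possible.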

  13. Environmental qualification testing of payload G-534, the Pool Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Sexton, J. Andrew

    1992-01-01

    Payload G-534, the prototype Pool Boiling Experiment (PBE), is scheduled to fly on the STS-47 mission in September 1992. This paper describes the purpose of the experiment and the environmental qualification testing program that was used to prove the integrity of the hardware. Component and box level vibration and thermal cycling tests were performed to give an early level of confidence in the hardware designs. At the system level, vibration, thermal extreme soaks, and thermal vacuum cycling tests were performed to qualify the complete design for the expected shuttle environment. The system level vibration testing included three axis sine sweeps and random inputs. The system level hot and cold soak tests demonstrated the hardware's capability to operate over a wide range of temperatures and gave wider latitude in determining which shuttle thermal attitudes were compatible with the experiment. The system level thermal vacuum cycling tests demonstrated the hardware's capability to operate in a convection free environment. A unique environmental chamber was designed and fabricated by the PBE team and allowed most of the environmental testing to be performed within the hardware build laboratory. The completion of the test program gave the project team high confidence in the hardware's ability to function as designed during flight.

  14. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance and secure microprocessor and OS system. We invite cyber security, information technology (IT), and SCADA control professionals to review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from level 1 (L1) cache while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware: by extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
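
    The idea of extending Unix-style permission bits to individual memory banks can be illustrated with a tiny sketch. The constant names and the checking function below are hypothetical illustrations of the concept, not the OSFA's actual interface:

    ```python
    # Unix-style permission bits, here hypothetically attached to a cache
    # bank or memory page and checked on every access.
    R, W, X = 0b100, 0b010, 0b001

    def access_allowed(page_perms, requested):
        """True if every requested permission bit (any OR-combination of
        R, W, X) is set in the page's permission bits."""
        return (page_perms & requested) == requested
    ```

    In the OSFA scheme described above, such a check would be performed in hardware rather than software, so a write to a read-only bank is refused without OS involvement.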

  15. Picosecond-precision multichannel autonomous time and frequency counter

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kwiatkowski, P.; RóŻyc, K.; Jachna, Z.; Sondej, T.

    2017-12-01

    This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supply, and (4) supplying a stable reference clock (optional rubidium standard). The time interval measurement, fundamental to the counter, is based on time stamping combined with period counting and in-period two-stage time interpolation, which together achieve a wide measurement range (above 1 h), high precision (better than 4.5 ps), and high measurement speed (up to 91.2 × 10⁶ timestamps/s). Frequency is measured up to 3.0 GHz using the reciprocal method. The counter's functionality also includes evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high-performance ARM Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad and/or color touch panel, or remotely, via USB, Ethernet, RS232C, or RS485 interfaces.
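
    The Allan deviation mentioned above has a standard textbook definition; a minimal sketch of the non-overlapping estimator from consecutive fractional-frequency averages (an illustration of the statistic, not the counter's firmware) is:

    ```python
    import math

    def allan_deviation(y):
        """Non-overlapping Allan deviation from consecutive fractional-
        frequency averages y[0..M-1], each taken over the same interval tau:
        sqrt( (1 / (2*(M-1))) * sum_i (y[i+1] - y[i])**2 )."""
        diffs = [(y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1)]
        return math.sqrt(sum(diffs) / (2 * len(diffs)))
    ```

    A perfectly stable oscillator (constant y) yields zero, while alternating frequency averages give the maximal deviation for a given amplitude, which is why the statistic separates noise types by how it scales with the averaging interval tau.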

  16. Picosecond-precision multichannel autonomous time and frequency counter.

    PubMed

    Szplet, R; Kwiatkowski, P; Różyc, K; Jachna, Z; Sondej, T

    2017-12-01

    This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power suppling, and (4) supplying a stable reference clock (optional rubidium standard). A fundamental for the counter, the time interval measurement is based on time stamping combined with a period counting and in-period two-stage time interpolation that allows us to achieve wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10 6 timestamps/s). The frequency is measured up to 3.0 GHz with the use of the reciprocal method. Wide functionality of the counter includes also the evaluation of frequency stability of clocks and oscillators (Allan deviation) and phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high performance ARM-Cortex core. An efficient and user-friendly control of the counter is provided either locally, through the built-in keypad or/and color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.

  17. Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties

    NASA Astrophysics Data System (ADS)

    Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.

    2014-10-01

    We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment, or the delay of a time transfer link, with respect to reference equipment. It is based on the circulation of travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first, a station calibration by computation of the hardware delays of the local GPS equipment; second, computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. The latter technique is called a ‘link’ calibration, in contrast to the former, a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets; we discuss this and related issues. We report on a calibration campaign organized during autumn 2013 between Observatoire de Paris (OP), Paris, France; Observatoire de la Côte d'Azur (OCA), Calern, France; and the NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.

  18. Analysis of counter flow of corona wind for heat transfer enhancement

    NASA Astrophysics Data System (ADS)

    Shin, Dong Ho; Baek, Soo Hong; Ko, Han Seo

    2018-03-01

    A heat sink for cooling devices using the counter flow of a corona wind was developed in this study. Detailed information about the numerical investigation of forced convection using the corona wind is presented. The fins of the heat sink using the counter flow of a corona wind were also investigated. A corona wind generator with a wire-to-plate electrode arrangement was used to generate the counter flow toward the fins. The compact and simple geometric characteristics of the corona wind generator facilitate the application of the counter-flow heat sink, demonstrating that it is effective for cooling electronic devices. Parametric studies were performed to analyze the effect of the counter flow on the fins. The velocity and temperature were also measured experimentally for a test mock-up of the heat sink with the corona wind generator to verify the numerical results. From the numerical study, the type of fin and its optimal height, length, and pitch were suggested for various heat fluxes. In addition, correlations were derived to calculate the mass of the developed heat sink and its cooling performance in terms of the heat transfer coefficient. Finally, the cooling efficiencies corresponding to the mass, applied power, total size, and noise of the devices were compared with existing commercial central processing unit (CPU) cooling devices with rotor fans. As a result, it was confirmed that the heat sink using the counter flow of the corona wind achieved suitable cooling efficiencies for electronic devices and is a suitable replacement for existing cooling devices for high-power electronics.

  19. A laboratory comparison of clockwise and counter-clockwise rapidly rotating shift schedules, part II : performance : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    INTRODUCTION. Many Air Traffic Control Specialists (ATCSs) work a relatively unique counter-clockwise, rapidly rotating shift schedule. Although arguments against these kinds of schedules are prevalent in the literature, few studies have examined rot...

  20. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
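
    The backup-grid scheme described above can be sketched as follows. The block-average coarsening, the deviation tolerance, and the constant-upsampling restore step are illustrative assumptions, not the paper's exact implementation:

    ```python
    import numpy as np

    def coarsen(field, f):
        """Coarse backup copy: average the fine field over f-by-f blocks."""
        n0, n1 = field.shape[0] // f, field.shape[1] // f
        return field.reshape(n0, f, n1, f).mean(axis=(1, 3))

    def check_and_restore(field, backup, f, tol):
        """Detect cells whose coarsened value deviates from the backup by
        more than tol (a possible hardware fault) and overwrite the affected
        fine cells from the backup grid."""
        bad = np.abs(coarsen(field, f) - backup) > tol   # faulted coarse cells
        restored = field.copy()
        fine_backup = np.kron(backup, np.ones((f, f)))   # constant upsampling
        mask = np.kron(bad, np.ones((f, f))).astype(bool)
        restored[mask] = fine_backup[mask]
        return restored, bad.any()
    ```

    A silent bit flip that corrupts one fine-grid value shows up as a large deviation in the corresponding coarse cell, so the simulation can continue from the restored field instead of crashing, at the storage cost of only the coarse copy.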

  1. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.

  2. Electromagnetic Counter-Counter Measure (ECCM) Techniques of the Digital Microwave Radio.

    DTIC Science & Technology

    1982-05-01

    Frequency hopping requires special synthesizers and filter banks. Large bandwidth expansion in a microwave radio relay application can best be achieved with... processing gain performance as a function of jammer modulation type, pulse jammer performance, emission bandwidth and spectral shaping... spectral efficiency, implementation complexity, and suitability for ECCM techniques will be considered. A summary of the requirements and characteristics of

  3. Age Life Evaluation of Space Shuttle Crew Escape System Pyrotechnic Components Loaded with Hexanitrostilbene (HNS)

    NASA Technical Reports Server (NTRS)

    Hoffman, William C., III

    1996-01-01

    Determining the deterioration characteristics of the Space Shuttle crew escape system pyrotechnic components loaded with hexanitrostilbene would enable us to establish a hardware life-limit for these items, so we could better plan equipment use and possibly extend the useful life of the hardware. We subjected components to accelerated-age environments to determine degradation characteristics and to establish a hardware life-limit based upon observed and calculated trends. We extracted samples from manufacturing lots currently installed in the Space Shuttle crew escape system and from other NASA programs. Hardware included in the study consisted of various forms and ages of mild detonating fuse, linear shaped charge, and flexible confined detonating cord. The hardware types were segregated into five groups. One was subjected to detonation velocity testing for a baseline. Two were first subjected to prolonged 155 °F heat exposure, and the other two were first subjected to 255 °F, before undergoing detonation velocity testing and/or chromatography analysis. Test results showed no measurable changes in performance that would allow prediction of an end of life, given the storage and elevated-temperature environments the hardware experiences. Given the lack of a definitive performance trend, coupled with previous tests on post-flight Space Shuttle hardware showing no significant changes in chemical purity or detonation velocity, we recommend a safe increase in the useful life of the hardware to 20 years, from the current maximum limits of 10 and 15 years, depending on the hardware.

  4. Recent developments and comprehensive evaluations of a GPU-based Monte Carlo package for proton therapy

    PubMed Central

    Qin, Nan; Botas, Pablo; Giantsoudi, Drosoula; Schuemann, Jan; Tian, Zhen; Jiang, Steve B.; Paganetti, Harald; Jia, Xun

    2016-01-01

    Monte Carlo (MC) simulation is commonly considered the most accurate dose calculation method for proton therapy. Aiming at fast MC dose calculations for clinical applications, we previously developed a GPU-based MC tool, gPMC. In this paper, we report our recent updates to gPMC in terms of its accuracy, portability, and functionality, as well as comprehensive tests of this tool. The new version, gPMC v2.0, was developed under the OpenCL environment to enable portability across different computational platforms. Physics models of nuclear interactions were refined to improve calculation accuracy. Scoring functions of gPMC were expanded to enable tallying particle fluence, dose deposited by different particle types, and dose-averaged linear energy transfer (LETd). A multiple-counter approach was employed to improve efficiency by reducing the frequency of memory-writing conflicts at scoring. For dose calculation, accuracy improvements over gPMC v1.0 were observed in both water phantom cases and a patient case. For a prostate cancer case planned using high-energy proton beams, dose discrepancies in the beam entrance and target region seen in gPMC v1.0 with respect to the gold standard tool for proton Monte Carlo simulations (TOPAS) were substantially reduced, and the gamma test passing rate (1%/1 mm) was improved from 82.7% to 93.1%. The average relative difference in LETd between gPMC and TOPAS was 1.7%. Average relative differences in dose deposited by primary, secondary, and other heavier particles were within 2.3%, 0.4%, and 0.2%, respectively. Depending on source proton energy and phantom complexity, it took 8 to 17 seconds on an AMD Radeon R9 290x GPU to simulate 10⁷ source protons, achieving less than 1% average statistical uncertainty. As beam size was reduced from 10×10 cm² to 1×1 cm², scoring time increased by only 4.8% with eight counters, in contrast to a 40% increase using only one counter. With the OpenCL environment, the portability of gPMC v2.0 was enhanced. It was successfully executed on different CPUs and GPUs, and its performance on different devices varied depending on processing power and hardware structure. PMID:27694712
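
    The multiple-counter scoring idea mentioned above can be sketched serially. The round-robin counter assignment below is an illustrative stand-in for the GPU's concurrent writers; the point is that spreading deposits over several independent accumulators, merged afterwards, reduces how often writers target the same memory location:

    ```python
    import numpy as np

    def score_with_counters(voxel_ids, deposits, n_voxels, n_counters=8):
        """Accumulate energy deposits into several independent per-voxel
        counters, then sum them into the final tally. With concurrent
        writers, fewer of them hit the same counter at once."""
        counters = np.zeros((n_counters, n_voxels))
        for k, (v, d) in enumerate(zip(voxel_ids, deposits)):
            counters[k % n_counters, v] += d   # round-robin counter choice
        return counters.sum(axis=0)            # merge into the final tally
    ```

    The final tally is identical regardless of how many counters are used; only the contention pattern during accumulation changes, which is why the scoring-time penalty for a small beam shrank from 40% with one counter to 4.8% with eight.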

  5. Managing Risk for Thermal Vacuum Testing of the International Space Station Radiators

    NASA Technical Reports Server (NTRS)

    Carek, Jerry A.; Beach, Duane E.; Remp, Kerry L.

    2000-01-01

    The International Space Station (ISS) is designed with large deployable radiator panels that are used to reject waste heat from the habitation modules. Qualification testing of the Heat Rejection System (HRS) radiators was performed using qualification hardware only. As a result of those tests, over 30 design changes were made to the actual flight hardware. Consequently, a system level test of the flight hardware was needed to validate its performance in the final configuration. A full thermal vacuum test was performed on the flight hardware in order to demonstrate its ability to deploy on-orbit. Since there is an increased level of risk associated with testing flight hardware, because of cost and schedule limitations, special risk mitigation procedures were developed and implemented for the test program, This paper introduces the Continuous Risk Management process that was utilized for the ISS HRS test program. Testing was performed in the Space Power Facility at the NASA Glenn Research Center, Plum Brook Station located in Sandusky, Ohio. The radiator system was installed in the 100-foot diameter by 122-foot tall vacuum chamber on a special deployment track. Radiator deployments were performed at several thermal conditions similar to those expected on-orbit using both the primary deployment mechanism and the back-up deployment mechanism. The tests were highly successful and were completed without incident.

  6. The 3 DLE instrument on ATS-5. [plasma electron counter

    NASA Technical Reports Server (NTRS)

    Deforest, S. E.

    1973-01-01

    The performance and operation of the DLE plasma electron counter on board the ATS 5 are described. Two methods of data presentation, microfilm line plots and spectrograms, are discussed along with plasma dynamics, plasma flow velocity, electrostatic charging, and wave-particle interactions.

  7. Test Hardware Design for Flightlike Operation of Advanced Stirling Convertors (ASC-E3)

    NASA Technical Reports Server (NTRS)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  8. Test Hardware Design for Flight-Like Operation of Advanced Stirling Convertors

    NASA Technical Reports Server (NTRS)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  9. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; this has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.

  10. No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent

    NASA Astrophysics Data System (ADS)

    Zaghloul, A. R. M.; Zaghloul, Y. A.

    2014-06-01

    We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream reconfigurable: a single hardware architecture functions as any selected device among all possible ones with the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows the attack to take place, rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks, since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point or point-to-multipoint link, a whole network, and/or a single entity in the cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other National Patents worldwide.)
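
    The bitstream-selected two-input device described above behaves like a 4-entry lookup table: the 4 configuration bits are simply the truth-table column, so all 16 possible two-input functions share one structure. A minimal sketch (the function and names are illustrative, not the patented optical design):

    ```python
    def make_gate(bitstream):
        """Build a two-input gate from a 4-bit configuration bitstream.
        The bit at index (a << 1) | b is the output for inputs (a, b),
        so the bitstream is the gate's truth-table column."""
        assert len(bitstream) == 4
        return lambda a, b: bitstream[(a << 1) | b]

    # Truth-table rows ordered (0,0), (0,1), (1,0), (1,1)
    AND = make_gate([0, 0, 0, 1])
    XOR = make_gate([0, 1, 1, 0])
    ```

    Reconfiguring the hardware is then just loading a different 4-bit stream, which is why the module presents no fixed hardware signature to an attacker.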

  11. 3D graphene from CO2 and K as an excellent counter electrode for dye-sensitized solar cells

    DOE PAGES

    Wei, Wei; Stacchiola, Dario J.; Hu, Yun Hang

    2017-07-19

    3D graphene, which was synthesized directly from CO2 via its exothermic reaction with liquid K, exhibited excellent performance as a counter electrode for a dye-sensitized solar cell (DSSC). The DSSC achieved a high power conversion efficiency of 8.25%, more than 10 times that (0.74%) of a DSSC with a counter electrode of regular graphene synthesized via chemical exfoliation of graphite. The efficiency is even higher than that (7.73%) of a dye-sensitized solar cell with an expensive standard Pt counter electrode. This work provides a novel approach to using a greenhouse gas for DSSCs.

  12. Three-dimensional nitrogen doped holey reduced graphene oxide framework as metal-free counter electrodes for high performance dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Yu, Mei; Zhang, Jindan; Li, Songmei; Meng, Yanbing; Liu, Jianhua

    2016-03-01

    A three-dimensional nitrogen-doped holey reduced graphene oxide framework (NHGF) with a hierarchically porous structure was developed as a high-performance metal-free counter electrode (CE) for dye-sensitized solar cells (DSSCs). With plenty of exposed active sites, efficient electron and ion transport pathways, and high surface hydrophilicity, the NHGF-CE exhibits good electrocatalytic performance for the I−/I3− redox couple and a low charge transfer resistance (Rct). The Rct of the NHGF-CE is 1.46 Ω cm², much lower than that of a Pt-CE (4.02 Ω cm²). The DSSC with the NHGF-CE reaches a power conversion efficiency of 5.56% and a fill factor of 65.5%, while those of the DSSC with a Pt-CE are only 5.45% and 62.3%, respectively. This highly efficient 3D structure presents a potential route to fabricating low-cost, metal-free counter electrodes with excellent performance.

  13. Test Program for Stirling Radioisotope Generator Hardware at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Bolotin, Gary S.; Oriti, Salvatore M.

    2015-01-01

    Stirling-based energy conversion technology has demonstrated the potential of high-efficiency, low-mass power systems for future space missions. This capability is beneficial, if not essential, to making certain deep space missions possible. Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG), a 140-W radioisotope power system. A variety of flight-like hardware, including Stirling convertors, controllers, and housings, was designed and built under the ASRG flight development project. To support future Stirling-based power system development, NASA has proposals that, if funded, will allow this hardware to go on test at the NASA Glenn Research Center. While future flight hardware may not be identical to the hardware developed under the ASRG flight development project, many components will likely be similar, and system architectures may have heritage to ASRG. Thus, the importance of testing the ASRG hardware to the development of future Stirling-based power systems cannot be overstated. This proposed testing will include performance testing, extended operation to establish an extensive reliability database, and characterization testing to quantify subsystem and system performance and better understand system interfaces. This paper details this proposed test program for Stirling radioisotope generator hardware at NASA Glenn. It explains the rationale behind the proposed tests and how these tests will meet the stated objectives.

  14. Test Program for Stirling Radioisotope Generator Hardware at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Bolotin, Gary S.; Oriti, Salvatore M.

    2014-01-01

    Stirling-based energy conversion technology has demonstrated the potential of high-efficiency, low-mass power systems for future space missions. This capability is beneficial, if not essential, to making certain deep space missions possible. Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG), a 140-watt radioisotope power system. A variety of flight-like hardware, including Stirling convertors, controllers, and housings, was designed and built under the ASRG flight development project. To support future Stirling-based power system development, NASA has proposals that, if funded, will allow this hardware to go on test at the NASA Glenn Research Center (GRC). While future flight hardware may not be identical to the hardware developed under the ASRG flight development project, many components will likely be similar, and system architectures may have heritage to ASRG. Thus the importance of testing the ASRG hardware to the development of future Stirling-based power systems cannot be overstated. This proposed testing will include performance testing, extended operation to establish an extensive reliability database, and characterization testing to quantify subsystem and system performance and better understand system interfaces. This paper details this proposed test program for Stirling radioisotope generator hardware at NASA GRC. It explains the rationale behind the proposed tests and how these tests will meet the stated objectives.

  15. Final postflight hardware evaluation report RSRM-28 (STS-53)

    NASA Technical Reports Server (NTRS)

    Starrett, William David, Jr.

    1993-01-01

    The final report for the Clearfield disassembly evaluation and a continuation of the KSC postflight assessment for the RSRM-28 (STS-53) RSRM flight set is presented. All observed hardware conditions were documented on PFOR's and are included in Appendices A through C. Appendices D and E contain the measurements and safety factor data for the nozzle and insulation components. This report, along with the KSC Ten-Day Postflight Hardware Evaluation Report (TWR-64215), represents a summary of the RSRM-28 hardware evaluation. The as-flown hardware configuration is documented in TWR-63638. Disassembly evaluation photograph numbers are logged in TWA-1989. The RSRM-28 flight set disassembly evaluations described were performed at the RSRM Refurbishment Facility in Clearfield, Utah. The final factory joint demate occurred on July 15, 1993. Additional time was required to perform the evaluation of the stiffener rings per special issue 4.1.5.2 because of the washout schedule. The release of this report was after completion of all special issues per program management direction. Detailed evaluations were performed in accordance with the Clearfield PEEP, TWR-50051, Revision A. All observations were compared against limits that are also defined in the PEEP. These limits outline the criteria for categorizing the observations as acceptable, reportable, or critical. Hardware conditions that were unexpected and/or determined to be reportable or critical were evaluated by the applicable team and tracked through the PFAR system.

  16. Combinative application of pH-zone-refining and conventional high-speed counter-current chromatography for preparative separation of caged polyprenylated xanthones from gamboge.

    PubMed

    Xu, Min; Fu, Wenwei; Zhang, Baojun; Tan, Hongsheng; Xiu, Yanfeng; Xu, Hongxi

    2016-02-01

    An efficient method for the preparative separation of four structurally similar caged xanthones from the crude extracts of gamboge was established, which involves the combination of pH-zone-refining counter-current chromatography and conventional high-speed counter-current chromatography for the first time. pH-zone-refining counter-current chromatography was performed with a solvent system composed of n-hexane/ethyl acetate/methanol/water (7:3:8:2, v/v/v/v), where 0.1% trifluoroacetic acid was added to the upper organic stationary phase as a retainer and 0.03% triethylamine was added to the aqueous mobile phase as an eluter. From 3.157 g of the crude extract, 1.134 g of gambogic acid, 180.5 mg of gambogenic acid, and 572.9 mg of a mixture of two other caged polyprenylated xanthones were obtained. The mixture was further separated by conventional high-speed counter-current chromatography with two solvent systems composed of n-hexane/ethyl acetate/methanol/water (5:5:10:5, v/v/v/v) and n-hexane/methyl tert-butyl ether/acetonitrile/water (8:2:6:4, v/v/v/v), yielding 11.6 mg of isogambogenic acid and 10.4 mg of β-morellic acid from 218.0 mg of the mixture, respectively. The purities of all four compounds were over 95%, as determined by high-performance liquid chromatography, and the chemical structures of the four compounds were confirmed by electrospray ionization mass spectrometry and NMR spectroscopy. The combinative application of pH-zone-refining counter-current chromatography and conventional high-speed counter-current chromatography shows great advantages in isolating and enriching caged polyprenylated xanthones. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Monte Carlo Simulations Comparing the Response of a Novel Hemispherical TEPC to Existing Spherical and Cylindrical TEPCs for Neutron Monitoring and Dosimetry.

    PubMed

    Broughton, David P; Waker, Anthony J

    2017-05-01

    Neutron dosimetry in reactor fields is currently mainly conducted with unwieldy flux monitors. Tissue Equivalent Proportional Counters (TEPCs) have been shown to have the potential to improve the accuracy of neutron dosimetry in these fields, and Multi-Element Tissue Equivalent Proportional Counters (METEPCs) could reduce the size of instrumentation required to do so. Complexity of current METEPC designs has inhibited their use beyond research. This work proposes a novel hemispherical counter with a wireless anode ball in place of the traditional anode wire as a possible solution for simplifying manufacturing. The hemispherical METEPC element was analyzed as a single TEPC to first demonstrate the potential of this new design by evaluating its performance relative to the reference spherical TEPC design and a single element from a cylindrical METEPC. Energy deposition simulations were conducted using the Monte Carlo code PHITS for both monoenergetic 2.5 MeV neutrons and the neutron energy spectrum of Cf-D2O moderated. In these neutron fields, the hemispherical counter appears to be a good alternative to the reference spherical geometry, performing slightly better than the cylindrical counter, which tends to underrespond to H*(10) for the lower neutron energies of the Cf-D2O moderated field. These computational results are promising, and if follow-up experimental work demonstrates the hemispherical counter works as anticipated, it will be ready to be incorporated into an METEPC design.

  18. Study of efficient video compression algorithms for space shuttle applications

    NASA Technical Reports Server (NTRS)

    Poo, Z.

    1975-01-01

    Results are presented of a study on video data compression techniques applicable to space flight communication. The study is directed toward monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight application are picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirement is summarized. Simulations for the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.

  19. An evaluation of Skylab habitability hardware

    NASA Technical Reports Server (NTRS)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware comprised the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Items not specifically defined as habitability hardware but which served that function were the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  20. Hardware Testing and System Evaluation: Procedures to Evaluate Commodity Hardware for Production Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goebel, J

    2004-02-27

    Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At the Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen each time we purchase systems, and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations. This article outlines SCS's computer testing methods and our system acceptance criteria. We expanded the basic ideas to other evaluations such as storage, and we think the methods outlined in this article have helped us choose hardware that is much more stable and supportable than our previous purchases. We have found that commodity hardware ranges in quality, so a systematic method and tools for hardware evaluation were necessary. This article is based on one instance of a hardware purchase, but the guidelines apply to the general problem of purchasing commodity computer systems for production computational work.

  1. Informational and symbolic content of over-the-counter drug advertising on television.

    PubMed

    Tsao, J C

    1997-01-01

    The informational and symbolic content of 150 over-the-counter drug commercials on television is empirically analyzed in this study. Results on the informational content suggest that over-the-counter drug ads tend to focus on what the drug will do for the consumer rather than on the reasons why the drug should be ingested. Accordingly, advertising strategy centers on consumer awareness of the product as the primary goal. Educational commitment, however, did not seem to be blended into the promotional efforts for over-the-counter drugs. Findings on the symbolic content of over-the-counter drug ads reveal that drug images have been distorted: the performance of most drugs is portrayed as a simple resolution that relieves the symptom. Moreover, a casual attitude toward drug usage is encouraged in the commercials, while the time lapse of drug effects is overlooked.

  2. Economically synthesized NiCo2S4/reduced graphene oxide composite as efficient counter electrode in dye-sensitized solar cell

    NASA Astrophysics Data System (ADS)

    Nan, Hui; Han, Jianhua; Luo, Qiang; Yin, Xuewen; Zhou, Yu; Yao, Zhibo; Zhao, Xiaochong; Li, Xin; Lin, Hong

    2018-04-01

    Exploiting efficient Pt-free counter-electrode materials with low cost and high catalytic activity is a hot topic in the field of dye-sensitized solar cells (DSCs). Here, NiCo2S4/reduced graphene oxide (RGO) was prepared via an economical synthesis route, and the as-prepared composite, used as the counter electrode, exhibited electrocatalytic properties comparable to those of the conventional Pt electrode. Notably, the introduction of RGO into the NiCo2S4 counter electrode yields a significantly higher electrocatalytic rate toward triiodide reduction than pristine NiCo2S4 by increasing the surface area of the composite electrode, as revealed by electrochemical impedance spectroscopy and Tafel polarization measurements. The easy synthesis, low cost, and excellent electrochemical performance of the NiCo2S4/RGO composites make them promising counter-electrode candidates for efficient DSCs.

  3. Gold nanoparticle decorated multi-walled carbon nanotubes as counter electrode for dye sensitized solar cells.

    PubMed

    Kaniyoor, Adarsh; Ramaprabhu, Sundara

    2012-11-01

    A novel counter electrode material for dye-sensitized solar cells (DSSCs), composed of nanostructured Au particles decorated on functionalized multi-walled carbon nanotubes (f-MWNTs), is demonstrated for the first time. MWNTs synthesized by the catalytic chemical vapor deposition technique are purified and functionalized by treating with concentrated acids. Au nanoparticles are decorated on f-MWNTs by a rapid and facile microwave-assisted polyol reduction method. The materials are characterized by X-ray diffractometry, Fourier transform infrared spectroscopy, and electron microscopy. The DSSC fabricated with the Au/f-MWNTs-based counter electrode shows an enhanced power conversion efficiency (η) of 4.9% under AM 1.5G simulated solar radiation. In comparison, the reference DSSCs fabricated with f-MWNT and Pt counter electrodes show η of 2.1% and 4.5%, respectively. This high performance of the Au/f-MWNTs counter electrode is investigated using electrochemical impedance spectroscopy and cyclic voltammetry studies.

  4. X-ray astronomy instrumentation studies. [design of a proportional counter and measurements of fluorescent radiation

    NASA Technical Reports Server (NTRS)

    Gregory, J. C.

    1981-01-01

    Preliminary designs were made for a multiplane, multiwire position sensitive proportional counter for X-ray use. Anode spacing was 2 mm and cathode spacing 1 mm. Assistance was provided in setting up and operating two multiwire proportional counters, one with 5 mm anode spacing, and the other with 2 mm spacing. Argon-based counter gases were used for preliminary work in assembling a working experimental system to measure xenon fluorescence yields. The design and specification of a high purity gas filling system capable of supplying mixtures of xenon and other gases to proportional counters was also performed. The system is mounted on a cart, is fully operational, and is flexible enough to be easily used as a pumping station for other clean applications. When needed, assistance was given to put into operation various computer-related pieces of equipment.

  5. Efficient architecture for spike sorting in reconfigurable hardware.

    PubMed

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and the fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computations of different weight vectors share the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field-programmable gate array (FPGA) and embedded in a system-on-chip (SoC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design that attains a high classification correct rate and high-speed computation.
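
    The clustering stage can be sketched in software. The following is a minimal textbook fuzzy C-means in Python, not the paper's hardware design (which fuses the membership and centroid updates into a single pass); function and parameter names are illustrative.

    ```python
    import numpy as np

    def fcm(X, c, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy C-means: returns (centers, membership matrix U).

        X: (n_samples, n_features) array of feature vectors (e.g. spike features).
        c: number of clusters; m: fuzziness exponent (m > 1)."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, len(X)))
        U /= U.sum(axis=0)                     # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            # distances from each center to each sample, floored to avoid /0
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            U = d ** (-2.0 / (m - 1.0))        # standard FCM membership update
            U /= U.sum(axis=0)
        return centers, U
    ```

    Hard cluster labels are then `U.argmax(axis=0)`; the hardware contribution described above is performing these updates (plus GHA feature extraction) with shared circuitry and without storing intermediate membership matrices.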

  6. Environmental qualification testing of the prototype pool boiling experiment

    NASA Technical Reports Server (NTRS)

    Sexton, J. Andrew

    1992-01-01

    The prototype Pool Boiling Experiment (PBE) flew on the STS-47 mission in September 1992. This report describes the purpose of the experiment and the environmental qualification testing program that was used to prove the integrity of the prototype hardware. Component- and box-level vibration and thermal cycling tests were performed to give an early level of confidence in the hardware designs. At the system level, vibration, thermal extreme soak, and thermal vacuum cycling tests were performed to qualify the complete design for the expected shuttle environment. The system-level vibration testing included three-axis sine sweeps and random inputs. The system-level hot and cold soak tests demonstrated the hardware's capability to operate over a wide range of temperatures and gave the project team wider latitude in determining which shuttle thermal attitudes were compatible with the experiment. The system-level thermal vacuum cycling tests demonstrated the hardware's capability to operate in a convection-free environment. A unique environmental chamber was designed and fabricated by the PBE team and allowed most of the environmental testing to be performed within the project's laboratory. The completion of the test program gave the project team high confidence in the hardware's ability to function as designed during flight.

  7. 77 FR 18970 - Airworthiness Directives; Bell Helicopter Textron Canada Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-29

    ... Model 407 Maintenance Manual and applied during manufacturing was incorrect and exceeded the torque... hardware (attachment hardware), and perform initial and recurring determinations of...

  8. Potential Uses of Deep Space Cooling for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Sweterlitsch, Jeff; Swickrath, Michael J.

    2012-01-01

    Nearly all exploration missions envisioned by NASA provide the capability to view deep space and thus to reject heat to a very low temperature environment. Environmental sink temperatures approach as low as 4 K, providing a natural capability to support separation and heat-rejection processes that would otherwise be power- and hardware-intensive in terrestrial applications. For example, radiative heat transfer can be harnessed to cryogenically remove atmospheric contaminants such as carbon dioxide (CO2). Long-duration temperature differences between the sunlit and shadowed sides of the vehicle could be used to drive thermoelectric power generation. Rejection of heat from cryogenic propellant could counter temperature increases, avoiding the need to vent propellants. These potential uses of deep space cooling are addressed in this paper, along with the benefits and practical considerations of such approaches.

  9. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation of movement decoding for a brain-computer interface. The proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture using dual look-up tables to perform the DCT without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals on a Xilinx Zynq-7000 FPGA board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
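
    The reduced-resolution DCT idea can be sketched as follows: only the first k DCT-II coefficients of each signal window are computed as features. This Python sketch shows the math only; the dual look-up-table architecture described above is precisely a way to avoid the explicit multiplications used here.

    ```python
    import numpy as np

    def dct_features(x, k=4):
        """First k DCT-II coefficients of a 1-D signal x, as a feature vector.

        Computing only k << len(x) coefficients is what makes the transform
        'reduced-resolution'; a hardware version can tabulate the cosine
        factors in look-up tables instead of multiplying explicitly."""
        n = len(x)
        t = np.arange(n)
        # rows = unnormalized DCT-II basis vectors for frequencies 0 .. k-1
        basis = np.cos(np.pi * np.arange(k)[:, None] * (t + 0.5)[None, :] / n)
        return basis @ np.asarray(x, dtype=float)
    ```

    For a constant signal only the DC coefficient is nonzero, which is a convenient sanity check on the basis construction.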

  10. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry-standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. It should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  11. Computerized atmospheric trace contaminant control simulation for manned spacecraft

    NASA Technical Reports Server (NTRS)

    Perry, J. L.

    1993-01-01

    Buildup of atmospheric trace contaminants in enclosed volumes such as a spacecraft may lead to potentially serious health problems for the crew members. For this reason, active control methods must be implemented to minimize the concentration of atmospheric contaminants to levels that are considered safe for prolonged, continuous exposure. Designing hardware to accomplish this has traditionally required extensive testing to characterize and select appropriate control technologies. Data collected since the Apollo project can now be used in a computerized performance simulation to predict the performance and life of contamination control hardware to allow for initial technology screening, performance prediction, and operations and contingency studies to determine the most suitable hardware approach before specific design and testing activities begin. The program, written in FORTRAN 77, provides contaminant removal rate, total mass removed, and per pass efficiency for each control device for discrete time intervals. In addition, projected cabin concentration is provided. Input and output data are manipulated using commercial spreadsheet and data graphing software. These results can then be used in analyzing hardware design parameters such as sizing and flow rate, overall process performance and program economics. Test performance may also be predicted to aid test design.
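
    The quantities such a program reports (removal rate, per-pass efficiency, projected cabin concentration) can be illustrated with a textbook well-mixed-cabin mass balance, V dC/dt = G − ηQC. The sketch below is an illustrative stand-in, not the FORTRAN 77 program described above; all parameter names are assumptions.

    ```python
    def cabin_concentration(c0, gen_rate, flow, eff, volume, dt, steps):
        """Explicit-Euler integration of V dC/dt = G - eta*Q*C (well-mixed cabin).

        c0: initial concentration (mg/m^3); gen_rate G: generation (mg/h);
        flow Q: device flow (m^3/h); eff eta: per-pass removal efficiency (0..1);
        volume V: cabin free volume (m^3); dt: time step (h)."""
        c = c0
        for _ in range(steps):
            removed = eff * flow * c            # mg/h removed by the control device
            c += dt * (gen_rate - removed) / volume
        return c
    ```

    In this model the concentration approaches the steady state G/(ηQ), so per-pass efficiency and flow rate together set both the equilibrium level and the time constant V/(ηQ), which is the kind of trade a sizing study exercises.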

  12. Viking 75 project: Viking lander system primary mission performance report

    NASA Technical Reports Server (NTRS)

    Cooley, C. G.

    1977-01-01

    Viking Lander hardware performance is described for launch, interplanetary cruise, Mars orbit insertion, preseparation, separation through landing, and the primary landed mission, with primary emphasis on Lander engineering and science hardware operations. The as-flown mission is described with respect to Lander system performance and anomalies during the various mission phases. The extended mission and predicted Lander performance are discussed, along with a summary of Viking goals and mission plans and a description of the Lander and its subsystem definitions.

  13. Preparative isolation and purification of astaxanthin from the microalga Chlorococcum sp. by high-speed counter-current chromatography.

    PubMed

    Li, H B; Chen, F

    2001-08-03

    High-speed counter-current chromatography was applied to the isolation and purification of astaxanthin from microalgae. The crude astaxanthin was obtained by extraction with organic solvents after the astaxanthin esters were saponified. Preparative high-speed counter-current chromatography with a two-phase solvent system composed of n-hexane-ethyl acetate-ethanol-water (5:5:6.5:3, v/v) was successfully performed yielding astaxanthin at 97% purity from 250 mg of the crude extract in a one-step separation.

  14. Technology and benefits of aircraft counter rotation propellers

    NASA Technical Reports Server (NTRS)

    Strack, W. C.; Knip, G.; Weisbrich, A. L.; Godston, J.; Bradley, E.

    1981-01-01

    Results are reported of a NASA sponsored analytical investigation into the merits of advanced counter rotation propellers for Mach 0.80 commercial transport application. Propeller and gearbox performance, acoustics, vibration characteristics, weight, cost and maintenance requirements for a variety of design parameters and special features were considered. Fuel savings in the neighborhood of 8 percent relative to single rotation configurations are feasible through swirl recovery and lighter gearboxes. This is the net gain which includes a 5 percent acoustic treatment weight penalty to offset the broader frequency spectrum noise produced by counter rotation blading.

  15. Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep

    2011-05-01

    This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field-programmable gate array (FPGA) device in a modular approach, which simplifies and eases hardware updates and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full-rank matrix into a simpler form and thereby eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
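
    The idea of replacing explicit inversion with a matrix factorization can be sketched for a generic LS channel estimate Y ≈ XH. The Python sketch below uses a QR factorization; the abstract does not name the specific factorization used, so this is an illustrative choice, not the paper's design.

    ```python
    import numpy as np

    def ls_channel_estimate(X, Y):
        """LS channel estimate H for the pilot model Y ≈ X @ H, via QR.

        Instead of forming (X^H X)^{-1} X^H Y explicitly, factor X = QR and
        solve the triangular system R H = Q^H Y, which avoids the explicit
        matrix inverse and is better conditioned."""
        Q, R = np.linalg.qr(X)                  # reduced QR of the pilot matrix
        return np.linalg.solve(R, Q.conj().T @ Y)
    ```

    In hardware, the triangular solve reduces to back-substitution, which is the operation-count saving the abstract alludes to when it speaks of reducing a full-rank matrix to a simpler form.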

  16. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on an FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.

  17. 25 ns software correlator for photon and fluorescence correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Magatti, Davide; Ferri, Fabio

    2003-02-01

    A multi-tau software correlator with 25 ns time resolution, developed in LabVIEW and based on a standard photon counting unit, a fast timer/counter board (National Instruments 6602-PCI), and a personal computer (1.5 GHz Pentium 4), is presented and quantitatively discussed. The correlator processes the stream of incoming data in parallel according to two different algorithms: for large lag times (τ ⩾ 100 μs), a classical time-mode (TM) scheme, based on measuring the number of pulses per time interval, is used; for τ ⩽ 100 μs, a photon-mode (PM) scheme is adopted, in which the time sequence of the arrival times of the photon pulses is measured. By combining the two methods, we developed a system capable of working out correlation functions online, in full real time for the TM correlator and partially in batch processing for the PM correlator. For the latter, the duty cycle depends on the count rate of the incoming pulses, being ~100% for count rates ⩽ 3×10⁴ Hz, ~15% at 10⁵ Hz, and ~1% at 10⁶ Hz. Because of limitations imposed by the fairly small first-in, first-out (FIFO) buffer available on the counter board, the maximum count rate permissible for proper functioning of the PM correlator is limited to ~10⁵ Hz; this limit can be removed by using a board with a deeper FIFO. Similarly, the 25 ns time resolution is limited only by the maximum clock frequency available on the 6602-PCI and can easily be improved by using a faster clock. When tested on dilute solutions of calibrated latex spheres, the overall performance of the correlator appears comparable with that of commercial hardware correlators, but with several nontrivial advantages related to its flexibility, low cost, and easy adaptability to future developments of PC and data acquisition technology.
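
    The time-mode scheme described above can be sketched as follows: given photon counts per sampling interval, the normalized intensity autocorrelation is formed directly from lagged products. This is an illustrative single-tau version without the multi-tau binning of the actual correlator.

    ```python
    import numpy as np

    def tm_autocorrelation(counts, max_lag):
        """Time-mode correlator sketch: normalized autocorrelation g2(tau) of
        photon counts per sampling interval, for lags 1 .. max_lag."""
        counts = np.asarray(counts, dtype=float)
        n = len(counts)
        mean = counts.mean()
        g2 = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            # average of lagged products, normalized by the squared mean rate
            g2[lag - 1] = np.mean(counts[:n - lag] * counts[lag:]) / mean**2
        return g2
    ```

    For an uncorrelated (constant-rate) source, g2(τ) is flat at 1; decaying structure above 1 at short lags is what the photon-mode scheme resolves below 100 μs.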

  18. Development of Hardware-in-the-Loop Simulation Based on Gazebo and Pixhawk for Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Nguyen, Khoa Dang; Ha, Cheolkeun

    2018-04-01

    Hardware-in-the-loop simulation (HILS) is well known as an effective approach in the design of unmanned aerial vehicle (UAV) systems, enabling engineers to test the control algorithm on a hardware board with a UAV model in software. The performance of HILS is determined by the performance of the control algorithm, the developed model, and the signal transfer between the hardware and software. The result of HILS is degraded if any signal cannot be transferred to the correct destination. Therefore, this paper aims to develop middleware software to ensure reliable communication in a HILS system for testing the operation of a quad-rotor UAV. In our HILS, the Gazebo software is used to generate a nonlinear six-degrees-of-freedom (6DOF) model, sensor models, and 3D visualization for the quad-rotor UAV. Meanwhile, the flight control algorithm is designed and implemented on the Pixhawk hardware. New middleware software, referred to as the control application software (CAS), is proposed to ensure the connection and data transfer between Gazebo and Pixhawk using a multithread structure in Qt Creator. The CAS provides a graphical user interface (GUI) that allows the user to monitor the status of packet transfer, issue flight control commands, and tune parameters of the quad-rotor UAV in real time. Numerical implementations have been performed to prove the effectiveness of the middleware software CAS suggested in this paper.

  19. Data to hardware binding with physical unclonable functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Jason

    The various technologies presented herein relate to binding data (e.g., software) to hardware, wherein the hardware is to utilize the data. The generated binding can be utilized to detect whether at least one of the hardware or the data has been modified between an initial moment (enrollment) and a later moment (authentication). During enrollment, an enrollment value is generated that includes a signature of the data, a first response from a PUF located on the hardware, and a code word. During authentication, a second response from the PUF is utilized to authenticate any of the content in the enrollment value, and based upon the authentication, a determination can be made regarding whether the hardware and/or the data have been modified. If modification is detected, then a mitigating operation can be performed, e.g., the hardware is prevented from utilizing the data. If no modification is detected, the data can be utilized.
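
    A rough sketch of the enrollment/authentication flow described above, using a fuzzy-commitment construction as one plausible realization; a repetition code stands in for a real error-correcting code, and all names are illustrative rather than taken from the patent.

```python
# Hypothetical fuzzy-commitment realization of data-to-hardware binding:
# the enrollment value holds a hash of the data plus a helper string
# (PUF response XOR code word). At authentication, a noisy re-measured
# PUF response recovers the key bits through error correction.
import hashlib, secrets

def repeat_encode(bits, n=3):
    return [b for b in bits for _ in range(n)]

def repeat_decode(bits, n=3):
    # majority vote over each group of n repeated bits
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

def enroll(data: bytes, puf_response):
    key_bits = [secrets.randbits(1) for _ in range(len(puf_response) // 3)]
    code_word = repeat_encode(key_bits)
    helper = [r ^ c for r, c in zip(puf_response, code_word)]
    signature = hashlib.sha256(data + bytes(key_bits)).hexdigest()
    return {"signature": signature, "helper": helper}

def authenticate(data: bytes, puf_response, enrollment):
    noisy_code = [r ^ h for r, h in zip(puf_response, enrollment["helper"])]
    key_bits = repeat_decode(noisy_code)
    return hashlib.sha256(data + bytes(key_bits)).hexdigest() == enrollment["signature"]

# A re-measured PUF response with one flipped bit still authenticates;
# modified data does not.
resp = [secrets.randbits(1) for _ in range(30)]
record = enroll(b"firmware-image", resp)
noisy = resp[:]
noisy[0] ^= 1
assert authenticate(b"firmware-image", noisy, record)
assert not authenticate(b"tampered-image", noisy, record)
```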

  20. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    NASA Astrophysics Data System (ADS)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit its ability to deploy large amounts of capacity in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces the cost and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
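
    The lookup that FPS-RAM (like TCAM) must support is IP longest-prefix matching. A minimal software model with a binary trie, illustrative only and unrelated to the proposed RAM architecture:

```python
# Binary trie for longest-prefix matching: the most specific stored
# prefix along the address's bit path wins, as in IP forwarding.
class PrefixTrie:
    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits: str, next_hop: str):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits: str):
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:       # remember the longest match so far
                best = node["hop"]
            if b not in node:
                break
            node = node[b]
        else:
            if "hop" in node:
                best = node["hop"]
        return best

t = PrefixTrie()
t.insert("10", "A")       # short prefix -> next hop A
t.insert("1011", "B")     # longer, more specific prefix -> next hop B
assert t.lookup("10110000") == "B"   # most specific prefix wins
assert t.lookup("10010000") == "A"
```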

  1. Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-380

    NASA Technical Reports Server (NTRS)

    Kramer, W. H.; Smith, J. J.

    1978-01-01

    The performance testing and analytical teardown of CF6-6D engine serial number 451-380, which was recently removed from a DC-10 aircraft, are summarized. The investigative test program was conducted inbound, prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning of the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Aspects discussed include the analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both specific fuel consumption and exhaust gas temperature.

  2. Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-479

    NASA Technical Reports Server (NTRS)

    Kramer, W. H.; Smith, J. J.

    1978-01-01

    The performance testing and analytical teardown of a CF6-6D engine are summarized. This engine had completed its initial installation on a DC-10 aircraft. The investigative test program was conducted inbound, prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning of the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Included in this report are a detailed analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both SFC (specific fuel consumption) and EGT (exhaust gas temperature).

  3. ATLAS offline software performance monitoring and optimization

    NASA Astrophysics Data System (ADS)

    Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration

    2014-06-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm-level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics to PAPI, but advantageously without requiring recompilation of the code. Fine-grained routine- and instruction-level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries, a testbed has been set up allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte Carlo jobs. This framework analyses the CPU and memory performance of algorithms, and an overview of the results is presented on a web page. These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and allowing improvements to be quantified in detail.
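
    The per-algorithm start/stop counting pattern described above can be sketched as follows. Real PAPI would supply hardware counts (cycles, instructions, cache accesses); here a wall-clock timer stands in so the pattern is runnable anywhere, and all names are illustrative.

```python
# Per-algorithm instrumentation pattern: start a counter when an algorithm
# begins processing an event, stop it when the algorithm returns, and
# accumulate totals per algorithm name.
import time
from collections import defaultdict
from contextlib import contextmanager

totals = defaultdict(lambda: {"calls": 0, "elapsed_ns": 0})

@contextmanager
def counted(algorithm_name):
    start = time.perf_counter_ns()        # stand-in for a PAPI start call
    try:
        yield
    finally:
        stop = time.perf_counter_ns()     # stand-in for a PAPI stop call
        totals[algorithm_name]["calls"] += 1
        totals[algorithm_name]["elapsed_ns"] += stop - start

for event in range(3):                    # "for each processed event"
    with counted("TrackFinder"):
        sum(i * i for i in range(10_000))   # placeholder workload

assert totals["TrackFinder"]["calls"] == 3
```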

  4. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce; et al.

    2004-01-01

    We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne system. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  5. Zero gravity tissue-culture laboratory

    NASA Technical Reports Server (NTRS)

    Cook, J. E.; Montgomery, P. O., Jr.; Paul, J. S.

    1972-01-01

    Hardware was developed for performing experiments to detect the effects that zero gravity may have on living human cells. The hardware is composed of a timelapse camera that photographs the activity of cell specimens and an experiment module in which a variety of living-cell experiments can be performed using interchangeable modules. The experiment is scheduled for the first manned Skylab mission.

  6. Similarity constraints in testing of cooled engine parts

    NASA Technical Reports Server (NTRS)

    Colladay, R. S.; Stepka, F. S.

    1974-01-01

    A study is made of the effect of testing cooled parts of current and advanced gas turbine engines at the reduced temperature and pressure conditions that maintain similarity with the engine environment. Some of the problems facing the experimentalist in evaluating heat transfer and aerodynamic performance when hardware is tested at conditions other than the actual engine environment are considered. Low temperature and pressure test environments can simulate the performance of actual-size prototype engine hardware within the tolerance of experimental accuracy if appropriate similarity conditions are satisfied. Failure to adhere to these similarity constraints, because of test facility limitations or other reasons, can result in a number of serious errors in projecting the performance of test hardware to engine conditions.

  7. Purification of Proteins From Cell-Culture Medium or Cell-Lysate by High-Speed Counter-Current Chromatography Using Cross-Axis Coil Planet Centrifuge

    PubMed Central

    Shibusawa, Yoichi; Ito, Yoichiro

    2014-01-01

    This review describes protein purifications from cell culture medium or cell-lysate by high speed counter-current chromatography using the cross-axis coil planet centrifuge. Purifications were performed using aqueous two phase systems composed of polyethylene glycols and dextrans. PMID:25360182

  8. Evaluation of Electronic Counter-Countermeasures Training Using Microcomputer-Based Technology: Phase I. Basic Jamming Recognition.

    ERIC Educational Resources Information Center

    Gardner, Susan G.; Ellis, Burl D.

    Seven microcomputer-based training systems with videotape players/monitors were installed to provide electronic counter-countermeasures (ECCM) simulation training, drill and practice, and performance testing for three courses at a fleet combat training center. Narrated videotape presentations of simulated and live jamming followed by a drill and…

  9. ALTERNATIVES TO HELIUM-3 FOR NEUTRON MULTIPLICITY DETECTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ely, James H.; Siciliano, Edward R.; Swinhoe, Martyn T.

    Collaboration between the Pacific Northwest National Laboratory (PNNL) and the Los Alamos National Laboratory (LANL) is underway to evaluate neutron detection technologies that might replace the high-pressure helium (3He) tubes currently used in neutron multiplicity counters for safeguards applications. The current stockpile of 3He is diminishing and alternatives are needed for a variety of neutron detection applications, including multiplicity counters. The first phase of this investigation uses a series of Monte Carlo calculations to simulate the performance of an existing neutron multiplicity counter configuration by replacing the 3He tubes in a model for that counter with candidate alternative technologies. These alternative technologies are initially placed in approximately the same configuration as the 3He tubes to establish a reference level of performance against the 3He-based system. After these reference-level results are established, the configurations of the alternative models will be further modified for performance optimization. The 3He model for these simulations is the one used by LANL to develop and benchmark the Epithermal Neutron Multiplicity Counter (ENMC) detector, as documented by H.O. Menlove, et al. in the 2004 LANL report LA-14088. The alternative technologies being evaluated are boron trifluoride-filled proportional tubes, boron-lined tubes, and lithium-coated materials previously tested as possible replacements in portal monitor screening applications, as documented by R.T. Kouzes, et al. in the 2010 PNNL report PNNL-72544 and NIM A 623 (2010) 1035–1045. The models and methods used for these comparative calculations will be described and preliminary results shown.

  10. The acetone bandpass detector for inverse photoemission: operation in proportional and Geiger–Müller modes

    NASA Astrophysics Data System (ADS)

    Thiede, Christian; Niehues, Iris; Schmidt, Anke B.; Donath, Markus

    2018-06-01

    Inverse photoemission is the most versatile experimental tool to study the unoccupied electronic structure at surfaces of solids. Typically, the experiments are performed in the isochromat mode with bandpass photon detectors. For gas-filled counters, the bandpass behavior is realized by combining the photoionization threshold of the counting gas as the high-pass filter with the ultraviolet transmission cutoff of an alkaline earth fluoride entrance window as the low-pass filter. The transmission characteristics of the entrance window determine the optical bandpass. The performance of the counter depends on the composition of the detection gas and the fill-gas pressure, the readout electronics and the counter geometry. For the well-known combination of acetone and CaF2, the detector can be operated in proportional and Geiger–Müller modes. In this work, we review aspects concerning the working principles, the counter construction and the readout electronics. We identify optimum working parameters and provide a step-by-step recipe for how to build, install and operate the device.
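
    The bandpass arises from the overlap of the counting gas's photoionization threshold (high-pass) and the window's transmission cutoff (low-pass). The sketch below uses approximate literature values assumed for illustration, not numbers taken from this paper.

```python
# Bandpass window of a gas-filled photon counter: only photons above the
# gas's ionization threshold and below the window's cutoff are detected.
ACETONE_THRESHOLD_EV = 9.70   # assumed photoionization threshold (high-pass)
CAF2_CUTOFF_EV = 10.0         # assumed CaF2 transmission cutoff (low-pass)

def bandpass(threshold_ev, cutoff_ev):
    center = 0.5 * (threshold_ev + cutoff_ev)
    width = cutoff_ev - threshold_ev
    return center, width

center, width = bandpass(ACETONE_THRESHOLD_EV, CAF2_CUTOFF_EV)
# Detection band roughly centered near 9.9 eV with a few-tenths-eV width.
```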

  11. A threshold gas Cerenkov detector for the spin asymmetries of the nucleon experiment

    DOE PAGES

    Armstrong, Whitney R.; Choi, Seonho; Kaczanowicz, Ed; ...

    2015-09-26

    In this study, we report on the design, construction, commissioning, and performance of a threshold gas Cerenkov counter in an open configuration, which operates in a high luminosity environment and produces a high photo-electron yield. Part of a unique open geometry detector package known as the Big Electron Telescope Array, this Cerenkov counter served to identify scattered electrons and reject produced pions in an inclusive scattering experiment known as the Spin Asymmetries of the Nucleon Experiment E07-003 at the Thomas Jefferson National Accelerator Facility (TJNAF), also known as Jefferson Lab. The experiment consisted of a measurement of double spin asymmetries A∥ and A⊥ of a polarized electron beam impinging on a polarized ammonia target. The Cerenkov counter's performance is characterised by a yield of about 20 photoelectrons per electron or positron track. Thanks to this large number of photoelectrons per track, the Cerenkov counter had enough resolution to identify electron-positron pairs from the conversion of photons resulting mainly from π⁰ decays.
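
    The physics behind the pion rejection is the Cerenkov threshold: a particle radiates only above momentum p_th = m·c/sqrt(n² − 1). The refractive index below is an assumed gas value for illustration, not necessarily that of this detector.

```python
# Threshold Cerenkov particle identification: with a gas radiator, the
# light electron radiates at modest momenta while the much heavier pion
# stays below threshold up to several GeV/c.
import math

def threshold_momentum_mev(mass_mev, n):
    return mass_mev / math.sqrt(n * n - 1.0)

N_GAS = 1.0005   # assumed refractive index of the radiator gas
p_th_electron = threshold_momentum_mev(0.511, N_GAS)    # ~16 MeV/c
p_th_pion = threshold_momentum_mev(139.57, N_GAS)       # ~4.4 GeV/c

# A 2 GeV/c track fires the counter if it is an electron but not a pion.
assert p_th_electron < 2000.0 < p_th_pion
```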

  12. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
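
    A minimal sketch of the profiling idea in the abstract: combine a per-operation hardware power consumption profile with an application's operation mix to obtain the application's power consumption profile. All numbers and names are invented for illustration.

```python
# Deriving an application power profile from a hardware power profile:
# weight the time spent in each operation type by that operation's power
# draw on the compute node.
hardware_profile_watts = {   # power draw during each operation type (assumed)
    "flop": 2.0,
    "memory_access": 1.2,
    "idle": 0.4,
}

def application_power_profile(op_durations_s, hardware_profile):
    """op_durations_s: seconds the application spends in each operation type."""
    energy = {op: hardware_profile[op] * t for op, t in op_durations_s.items()}
    total_time = sum(op_durations_s.values())
    return {
        "energy_joules": sum(energy.values()),
        "average_watts": sum(energy.values()) / total_time,
        "per_operation_joules": energy,
    }

profile = application_power_profile(
    {"flop": 6.0, "memory_access": 3.0, "idle": 1.0}, hardware_profile_watts)
```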

  13. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations

    PubMed Central

    Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut

    2015-01-01

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484
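
    The paper's cost argument can be sketched as a back-of-the-envelope model: trajectory produced per unit of total lifetime cost (hardware plus energy and cooling) is the figure of merit. All numbers below are invented for illustration, not benchmark results from the paper.

```python
# Lifetime cost-efficiency of a simulation node: energy and cooling over
# several years can rival the purchase price, so both enter the
# trajectory-per-euro figure of merit.
def ns_per_euro(perf_ns_per_day, hardware_cost_eur, power_draw_w,
                lifetime_years=5.0, eur_per_kwh=0.25, cooling_overhead=1.5):
    hours = lifetime_years * 365.0 * 24.0
    energy_cost = power_draw_w / 1000.0 * hours * eur_per_kwh * cooling_overhead
    total_ns = perf_ns_per_day * lifetime_years * 365.0
    return total_ns / (hardware_cost_eur + energy_cost)

cpu_only = ns_per_euro(50.0, 2000.0, 300.0)    # hypothetical CPU-only node
with_gpu = ns_per_euro(120.0, 2500.0, 450.0)   # same node + consumer GPU

# Despite the higher purchase price and power draw, the GPU node yields
# more trajectory per euro over its lifetime.
assert with_gpu > cpu_only
```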

  14. Scaling of Counter-Current Imbibition Process in Low-Permeability Porous Media, TR-121

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kvoscek, A.R.; Zhou, D.; Jia, L.

    2001-01-17

    This project presents recent work on imaging imbibition in low-permeability porous media (diatomite) with X-ray computed tomography. The viscosity ratio between nonwetting and wetting fluids is varied over several orders of magnitude, yielding different levels of imbibition performance. A mathematical analysis of counter-current imbibition processes is also performed, along with the development of a modified scaling group incorporating the mobility ratio. This modified group is physically based and appears to improve the scaling accuracy of countercurrent imbibition significantly.

  15. System for detecting operating errors in a variable valve timing engine using pressure sensors

    DOEpatents

    Wiles, Matthew A.; Marriot, Craig D

    2013-07-02

    A method and control module includes a pressure sensor data comparison module that compares measured pressure volume signal segments to ideal pressure volume segments. A valve actuation hardware remedy module performs a hardware remedy in response to comparing the measured pressure volume signal segments to the ideal pressure volume segments when a valve actuation hardware failure is detected.

  16. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  17. Retrospective Analysis of Inflight Exercise Loading and Physiological Outcomes

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, L. L.; Buxton, R. E.; De Witt, J. K.; Guilliams, M. E.; Hanson, A. M.; Peters, B. T.; Pandorf, M. M. Scott; Sibonga, J. D.

    2014-01-01

    Astronauts perform exercise throughout their missions to counter the health declines that occur as a result of long-term exposure to weightlessness. Although all astronauts perform exercise during their missions, the specific prescriptions, and thus the mechanical loading, differs among individuals. For example, inflight ground reaction force data indicate that subject-specific differences exist in foot forces created when exercising on the second-generation treadmill (T2) [1]. The current exercise devices allow astronauts to complete prescriptions at higher intensities, resulting in greater benefits with increased efficiency. Although physiological outcomes have improved, the specific factors related to the increased benefits are unknown. In-flight exercise hardware collect data that allows for exploratory analyses to determine if specific performance factors relate to physiological outcomes. These analyses are vital for understanding which components of exercise are most critical for optimal human health and performance. The relationship between exercise performance variables and physiological changes during flight has yet to be fully investigated. Identifying the critical performance variables that relate to improved physiological outcomes is vital for creating current and future exercise prescriptions to optimize astronaut health. The specific aims of this project are: 1) To quantify the exercise-related mechanical loading experienced by crewmembers on T2 and ARED during their mission on ISS; 2) To explore relationships between exercise loading variables, bone, and muscle health changes during the mission; 3) To determine if specific mechanical loading variables are more critical than others in protecting physiology; 4) To develop methodology for operational use in monitoring accumulated training loads during crew exercise programs. 
This retrospective analysis, which is currently in progress, is being conducted using data from astronauts that have flown long-duration missions onboard the ISS and have had access to exercise on the T2 and the Advanced Resistive Exercise Device (ARED). The specific exercise prescriptions vary for each astronaut. General exercise summary metrics will be developed to quantify exercise intensities, volumes, and durations for each subject. Where available, ground reaction force data will be used to quantify mechanical loading experienced by each astronaut. These inflight exercise metrics will be investigated relative to changes in pre- to post-flight bone and muscle health to identify which specific variables are related with improved or degraded physiological outcomes. The information generated from this analysis will fill gaps related to typical bone loading characterization, exercise performance capability, exercise volume and efficiency, and importance of exercise hardware. In addition, methods for quantification of exercise loading for use in monitoring the exercise programs during future space missions will be explored with the intent to inform exercise scientists and trainers as to the critical aspects of inflight exercise prescriptions.

  18. Partial Data Traces: Efficient Generation and Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, F; De Supinski, B R; McKee, S A

    2001-08-20

    Binary manipulation techniques are increasing in popularity. They support program transformations tailored toward certain program inputs, and these transformations have been shown to yield performance gains beyond the scope of static code optimizations without profile-directed feedback. They even deliver moderate gains in the presence of profile-guided optimizations. In addition, transformations can be performed on the entire executable, including library routines. This work focuses on program instrumentation, yet another application of binary manipulation. This paper reports preliminary results on generating partial data traces through dynamic binary rewriting. The contributions are threefold. First, a portable method for extracting precise data traces for partial executions of arbitrary applications is developed. Second, a set of hierarchical structures for compactly representing these accesses is developed. Third, an efficient online algorithm to detect regular accesses is introduced. The authors utilize dynamic binary rewriting to selectively collect partial address traces of regions within a program. This allows partial tracing of hot paths for only a short time during program execution, in contrast to static rewriting techniques that lack hot path detection and also lack facilities to limit the duration of data collection. Preliminary results show reductions of three orders of magnitude for inline instrumentation over a dual-process approach involving context switching. They also report constant-size representations for regular access patterns in nested loops. These efforts are part of a larger project to counter the increasing gap between processor and main memory speeds by means of software optimization and hardware enhancements.
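
    The online detection of regular accesses can be illustrated by compressing an address trace into (base, stride, count) records whenever consecutive references keep a constant stride, as array walks in nested loops do. This is a simplified stand-in, not the authors' algorithm.

```python
# Compress an address trace into (base, stride, count) records: runs of
# constant-stride references (regular accesses) collapse into one record,
# giving the constant-size representation for nested-loop array walks.
def compress_trace(addresses):
    records, i = [], 0
    while i < len(addresses):
        base = addresses[i]
        if i + 1 == len(addresses):
            records.append((base, 0, 1))
            break
        stride = addresses[i + 1] - base
        count = 2
        while i + count < len(addresses) and \
                addresses[i + count] - addresses[i + count - 1] == stride:
            count += 1
        if count >= 3:                    # long enough to record as regular
            records.append((base, stride, count))
            i += count
        else:                             # irregular: emit a single reference
            records.append((base, 0, 1))
            i += 1
    return records

# A stride-4 array walk followed by one irregular reference:
trace = [0x1000, 0x1004, 0x1008, 0x100C, 0x2FF0]
```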

  19. Multi-user Droplet Combustion Apparatus (MDCA) Hardware Replacement

    NASA Image and Video Library

    2013-10-02

    ISS037-E-004956 (2 Oct. 2013) --- NASA astronaut Karen Nyberg, Expedition 37 flight engineer, performs the Multi-user Droplet Combustion Apparatus (MDCA) hardware replacement in the Harmony node of the International Space Station.

  20. Multi-user Droplet Combustion Apparatus (MDCA) Hardware Replacement

    NASA Image and Video Library

    2013-10-02

    ISS037-E-004959 (2 Oct. 2013) --- NASA astronaut Karen Nyberg, Expedition 37 flight engineer, performs the Multi-user Droplet Combustion Apparatus (MDCA) hardware replacement in the Harmony node of the International Space Station.

  1. VME rollback hardware for time warp multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Robb, Michael J.; Buzzell, Calvin A.

    1992-01-01

    The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.
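
    A minimal sketch of the Time Warp mechanism whose state-saving overhead the rollback hardware targets: a logical process snapshots its state before each event and restores a snapshot when a straggler (an event timestamped in its past) arrives. Anti-messages and re-execution of rolled-back events are omitted for brevity; illustrative only.

```python
# Time Warp logical process: optimistic execution with state saving, and
# rollback to a pre-straggler snapshot when an out-of-order event arrives.
class LogicalProcess:
    def __init__(self):
        self.lvt = 0        # local virtual time
        self.state = 0
        self.saved = []     # (lvt, state) snapshots: the overhead the
                            # rollback hardware aims to eliminate
        self.processed = []

    def handle(self, timestamp, value):
        if timestamp < self.lvt:    # straggler: roll back past it
            while self.saved and self.lvt >= timestamp:
                self.lvt, self.state = self.saved.pop()
            self.processed = [e for e in self.processed if e[0] < timestamp]
        self.saved.append((self.lvt, self.state))   # state saving
        self.lvt = timestamp
        self.state += value
        self.processed.append((timestamp, value))

lp = LogicalProcess()
for ev in [(10, 1), (20, 2), (30, 4)]:
    lp.handle(*ev)
lp.handle(15, 8)    # straggler: undoes the events at t=20 and t=30
assert (lp.lvt, lp.state) == (15, 9)
```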

  2. Thermally Deposited Palladium-Tungsten Carbide and Platinum-Tungsten Carbide Counter Electrodes for a High Performance Dye-Sensitized Solar Cell Based on Organic T-/T₂ Electrolyte.

    PubMed

    Towannang, Madsakorn; Thiangkaew, Anongnad; Maiaugree, Wasan; Ratchaphonsaenwong, Kunthaya; Jarernboon, Wirat; Pimanpang, Samuk; Amornkitbamrung, Vittaya

    2018-02-01

    Tungsten carbide (WC) particles (~1 μm) were dispersed in DI water and dropped onto conductive glass. The resulting WC films were used as dye-sensitized solar cell (DSSC) counter electrodes. The performance of the WC DSSC based on the organic thiolate/disulfide (T-/T2) electrolyte was ~0.78%. The cell efficiency was greatly improved after decorating palladium (Pd) or platinum (Pt) nanoparticles on WC particles with a promising efficiency of ~2.15% for Pd-WC DSSC and ~4.62% for Pt-WC DSSC. The efficiency improvement of the composited (Pd-WC and Pt-WC) cells is attributed to co-functioning catalysts, the large electrode interfacial area and a low charge-transfer resistance at the electrolyte/counter electrode interface.

  3. Real-time measurements of airborne biologic particles using fluorescent particle counter to evaluate microbial contamination: results of a comparative study in an operating theater.

    PubMed

    Dai, Chunyang; Zhang, Yan; Ma, Xiaoling; Yin, Meiling; Zheng, Haiyang; Gu, Xuejun; Xie, Shaoqing; Jia, Hengmin; Zhang, Liang; Zhang, Weijun

    2015-01-01

    Airborne bacterial contamination poses a risk for surgical site infection, and routine surveillance of airborne bacteria is important. Traditional methods for detecting airborne bacteria are time consuming and strenuous. Measurement of biologic particle concentrations using a fluorescent particle counter is a novel method for evaluating air quality. The aim of the current study was to determine whether the number of biologic particles detected by the fluorescent particle counter can be used to indicate airborne bacterial counts in operating rooms. The study was performed in an operating theater at a university hospital in Hefei, China. The number of airborne biologic particles every minute was quantified using a fluorescent particle counter. Microbiologic air sampling was performed every 30 minutes using an Andersen air sampler (Pusong Electronic Instruments, Changzhou, China). Correlations between the two methods were analyzed using Pearson correlation coefficients. A significant correlation was observed between biologic particle and bacterial counts (Pearson correlation coefficient = 0.76), and the counts from the two methods both increased substantially between operations, corresponding to human movements in the operating room. Fluorescent particle counters show potential as important tools for monitoring bacterial contamination in operating theaters. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
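
    The study's comparison reduces to a Pearson correlation between minute-resolved biologic-particle counts and cultured bacterial counts. A small self-contained sketch with made-up numbers (the paper's r = 0.76 refers to its real data):

```python
# Pearson correlation between two paired count series, as used to compare
# fluorescent particle counts against Andersen-sampler bacterial counts.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

particles = [120, 340, 210, 500, 280, 90]   # hypothetical counts per minute
bacteria = [4, 11, 8, 15, 9, 3]             # hypothetical CFU per plate
r = pearson(particles, bacteria)
```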

  4. Counter-current acid leaching process for copper azole treated wood waste.

    PubMed

    Janin, Amélie; Riche, Pauline; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Morris, Paul

    2012-09-01

    This study explores the performance of a counter-current leaching process (CCLP) for copper extraction from copper azole treated wood waste for recycling of wood and copper. The leaching process uses three acid leaching steps with 0.1 M H2SO4 at 75 °C and 15% slurry density, followed by three rinses with water. Copper is recovered from the leachate using electrodeposition at 5 amperes (A) for 75 min. Ten counter-current remediation cycles were completed, achieving ≥94% copper extraction from the wood during the 10 cycles; 80-90% of the copper was recovered from the extract solution by electrodeposition. The counter-current leaching process reduced acid consumption by 86%, and the effluent discharge volume was 12 times lower compared with the same process without counter-current leaching. However, the reuse of leachates from one leaching step to another released dissolved organic carbon and caused its build-up in the early cycles.

  5. Thermoelectric Generation Using Counter-Flows of Ideal Fluids

    NASA Astrophysics Data System (ADS)

    Meng, Xiangning; Lu, Baiyi; Zhu, Miaoyong; Suzuki, Ryosuke O.

    2017-08-01

    Thermoelectric (TE) performance of a three-dimensional (3-D) TE module is examined by exposing it between a pair of counter-flows of ideal fluids. The ideal fluids serve as the thermal sources of the TE module; they flow in opposite directions at the same flow rate and generate temperature differences across the hot and cold surfaces owing to their different temperatures at the channel inlet. TE performance for different inlet temperatures of the thermal fluids is numerically analyzed using the finite-volume method on 3-D meshed physical models and then compared with results obtained using a constant boundary temperature. The results show that the voltage and current of the TE module increase gradually from the initial moment to steady flow and reach stable values. The stable values increase with the inlet temperature of the hot fluid when the inlet temperature of the cold fluid is fixed. However, the time to reach the stable values is almost the same for all the temperature differences. Moreover, the trend of TE performance using a fluid-flow boundary is similar to that using a constant boundary temperature. Furthermore, 3-D contours of fluid pressure, temperature, enthalpy, electromotive force, current density, and heat flux are exhibited in order to clarify the influence of the counter-flows of ideal fluids on TE generation. The current density and heat flux are distributed homogeneously over the entire TE module, indicating that counter-flows of thermal fluids have high potential to bring about fine performance in TE modules.

  6. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and re-configurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly for the availability of low-cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations, before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident; it was addressed beginning with the methodology described in [1] and [2] and by implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications to bottleneck-problem solution using RC technologies. The issue of the level of abstraction of a science algorithm necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application, with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.

  7. A hybrid nanostructure of platinum-nanoparticles/graphitic-nanofibers as a three-dimensional counter electrode in dye-sensitized solar cells.

    PubMed

    Hsieh, Chien-Kuo; Tsai, Ming-Chi; Su, Ching-Yuan; Wei, Sung-Yen; Yen, Ming-Yu; Ma, Chen-Chi M; Chen, Fu-Rong; Tsai, Chuen-Horng

    2011-11-07

    We directly synthesized a platinum-nanoparticles/graphitic-nanofibers (PtNPs/GNFs) hybrid nanostructure on FTO glass. We applied this structure as a three-dimensional counter electrode in dye-sensitized solar cells (DSSCs), and investigated the cells' photoconversion performance. This journal is © The Royal Society of Chemistry 2011

  8. Cross counter-based adaptive assembly scheme in optical burst switching networks

    NASA Astrophysics Data System (ADS)

    Zhu, Zhi-jun; Dong, Wen; Le, Zi-chun; Chen, Wan-jun; Sun, Xingshu

    2009-11-01

    A novel adaptive assembly algorithm called Cross-counter Balance Adaptive Assembly Period (CBAAP) is proposed in this paper. The major difference between CBAAP and other adaptive assembly algorithms is that the threshold of CBAAP can be dynamically adjusted according to the cross counter and the step length value. In terms of assembly period and burst loss probability, we compare the performance of CBAAP with those of three typical algorithms: FAP (Fixed Assembly Period), FBL (Fixed Burst Length), and MBMAP (Min-Burst-Length-Max-Assembly-Period) in simulations. The simulation results demonstrate the effectiveness of our algorithm.
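Burst assembly with an adaptively adjusted period can be sketched as follows. The adaptation rule below (shrink the period after large bursts, grow it after small ones) is an illustrative stand-in, since the abstract does not give CBAAP's exact cross-counter update:

```python
def assemble_bursts(arrivals, period0, step, p_min, p_max):
    """Group sorted packet arrival times into bursts using an adaptive
    assembly period.

    period0 is the initial assembly period and step is the adjustment step
    length. The adaptation rule (shorter period after a large burst, longer
    after a small one) is illustrative only -- the actual CBAAP rule uses a
    cross counter not detailed in the abstract.
    """
    bursts, current, period = [], [], period0
    window_end = arrivals[0] + period if arrivals else 0.0
    for t in arrivals:
        if t >= window_end:
            bursts.append(current)
            # Illustrative adaptation, bounded by [p_min, p_max].
            if len(current) > 4:
                period = max(p_min, period - step)
            else:
                period = min(p_max, period + step)
            current, window_end = [], t + period
        current.append(t)
    if current:
        bursts.append(current)
    return bursts

# Twenty packets arriving every 0.1 time units.
bursts = assemble_bursts([0.1 * i for i in range(20)], period0=0.5,
                         step=0.1, p_min=0.2, p_max=1.0)
```

Every packet ends up in exactly one burst; the burst sizes vary as the period adapts.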

  9. The difference in age of the two counter-rotating stellar disks of the spiral galaxy NGC 4138

    NASA Astrophysics Data System (ADS)

    Pizzella, A.; Morelli, L.; Corsini, E. M.; Dalla Bontà, E.; Coccato, L.; Sanjana, G.

    2014-10-01

    Context. Galaxies accrete material from the environment through acquisitions and mergers. These processes contribute to the galaxy assembly and leave their fingerprints on the galactic morphology, internal kinematics of gas and stars, and stellar populations. Aims: The Sa spiral NGC 4138 is known to host two counter-rotating stellar disks, with the ionized gas co-rotating with one of them. We measured the kinematics and properties of the two counter-rotating stellar populations to constrain their formation scenario. Methods: A spectroscopic decomposition of the observed major-axis spectrum was performed to disentangle the relative contribution of the two counter-rotating stellar and one ionized-gas components. The line-strength indices of the two counter-rotating stellar components were measured and modeled with single stellar population models that account for the α/Fe overabundance. Results: The counter-rotating stellar population is younger, marginally more metal-poor, and more α-enhanced than the main stellar component. The younger stellar component is also associated with a star-forming ring. Conclusions: The different properties of the counter-rotating stellar components of NGC 4138 rule out the idea that they formed because of bar dissolution. Our findings support the results of numerical simulations in which the counter-rotating component assembled from gas accreted on retrograde orbits from the environment or from the retrograde merging with a gas-rich dwarf galaxy. Based on observations carried out at the Galileo 1.22 m telescope at Padua University.

  10. Independent Orbiter Assessment (IOA): Analysis of the electrical power generation/fuel cell powerplant subsystem

    NASA Technical Reports Server (NTRS)

    Brown, K. L.; Bertsch, P. J.

    1986-01-01

    Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Fuel Cell Powerplant (FCP) hardware. The EPG/FCP hardware is required for performing functions of electrical power generation and product water distribution in the Orbiter. Specifically, the EPG/FCP hardware consists of the following divisions: (1) Power Section Assembly (PSA); (2) Reactant Control Subsystem (RCS); (3) Thermal Control Subsystem (TCS); and (4) Water Removal Subsystem (WRS). The IOA analysis process utilized available EPG/FCP hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  11. Hardware Evolution of Analog Speed Controllers for a DC Motor

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Ferguson, Michael I.

    2003-01-01

    Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a Field Programmable Transistor Array (FPTA). The performance of these evolved controllers is compared to that of a conventional proportional-integral (PI) controller.
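The conventional baseline controller mentioned above can be sketched in discrete time; the gains, sample time, and setpoint below are arbitrary illustrative values, not those of the paper:

```python
def pi_controller(kp, ki, dt):
    """Discrete-time PI speed controller.

    Returns a stateful step function mapping (setpoint, measured) to the
    control output u = kp * error + ki * integral_of_error. Gains and sample
    time are illustrative, not taken from the paper.
    """
    integral = 0.0

    def step(setpoint, measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt          # rectangular integration of error
        return kp * error + ki * integral

    return step

ctrl = pi_controller(kp=2.0, ki=0.5, dt=0.01)
u = ctrl(1000.0, 900.0)  # rpm setpoint vs. measured shaft speed
```

The integral term is what lets a PI controller drive steady-state speed error to zero, which is the behavior the evolved analog controllers are compared against.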

  12. An iterative approach to region growing using associative memories

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Cowart, A.

    1983-01-01

    Region growing is a classical example of the recursive control structures used in image processing, which are often awkward to implement in hardware when the intent is segmentation of an image at raster scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier that signifies the region to which it belongs. Difficulties otherwise requiring recursion are handled by maintaining an equivalence table in hardware, transparent to the computer that reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
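The iterative labeling scheme described above can be sketched in software as a two-pass raster scan with an equivalence (union-find) table; this is an illustrative software analogue, not the paper's hardware design:

```python
def label_regions(image):
    """Two-pass raster-scan region labeling with an equivalence table.

    image is a 2-D list of 0/1 values. Pass one assigns provisional labels
    and records equivalences when two labels meet; pass two resolves each
    label to its representative. Illustrative software analogue of the
    hardware scheme, which maintains the equivalence table transparently.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # parent[i] is the representative of label i; 0 = background

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left and up:
                labels[y][x] = find(left)
                parent[find(up)] = find(left)   # record equivalence
            elif left or up:
                labels[y][x] = find(left or up)
            else:
                parent.append(next_label)       # new provisional label
                labels[y][x] = next_label
                next_label += 1
    # Second pass: replace provisional labels by their representatives.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

A U-shaped region is the classic case that defeats a single pass: its two arms get different provisional labels, and the equivalence table merges them.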

  13. Postflight hardware evaluation 360T026 (RSRM-26, STS-47)

    NASA Technical Reports Server (NTRS)

    Nielson, Greg

    1993-01-01

    The final report for the Clearfield disassembly evaluation and a continuation of the KSC postflight assessment for the 360T026 (STS-47) Redesigned Solid Rocket Motor (RSRM) flight set is provided. All observed hardware conditions were documented on PFOR's and are included in Appendices A, B, and C. Appendices D and E contain the measurements and safety factor data for the nozzle and insulation components. This report, along with the KSC Ten-Day Postflight Hardware Evaluation Report (TWR-64203), represents a summary of the 360T026 hardware evaluation. The as-flown hardware configuration is documented in TWR-60472. Disassembly evaluation photograph numbers are logged in TWA-1987. The 360T026 flight set disassembly evaluations described were performed at the RSRM Refurbishment Facility in Clearfield, Utah. The final factory joint demate occurred on 12 April 1993. Detailed evaluations were performed in accordance with the Clearfield Postflight Engineering Evaluation Plan (PEEP), TWR-50051, Revision A. All observations were compared against limits that are also defined in the PEEP. These limits outline the criteria for categorizing the observations as acceptable, reportable, or critical. Hardware conditions that were unexpected and/or determined to be reportable or critical were evaluated by the applicable CPT and tracked through the PFAR system.

  14. Final postflight hardware evaluation report RSRM-32 (STS-57)

    NASA Technical Reports Server (NTRS)

    Nielson, Greg

    1993-01-01

    This document is the final report for the postflight assessment of the RSRM-32 (STS-57) flight set. This report presents the disassembly evaluations performed at the Thiokol facilities in Utah and is a continuation of the evaluations performed at KSC (TWR-64239). The PEEP for this assessment is outlined in TWR-50051, Revision B. The PEEP defines the requirements for evaluating RSRM hardware. Special hardware issues pertaining to this flight set requiring additional or modified assessment are outlined in TWR-64237. All observed hardware conditions were documented on PFOR's which are included in Appendix A. Observations were compared against limits defined in the PEEP. Any observation that was categorized as reportable or had no defined limits was documented on a preliminary PFAR by the assessment engineers. Preliminary PFAR's were reviewed by the Thiokol SPAT Executive Board to determine if elevation to PFAR's was required.

  15. Performance of the Extravehicular Mobility Unit (EMU) Airlock Coolant Loop Remediation (A/L CLR) Hardware - Final

    NASA Technical Reports Server (NTRS)

    Steele, John W.; Rector, Tony; Gazda, Daniel; Lewis, John

    2011-01-01

    An EMU water processing kit (Airlock Coolant Loop Recovery -- A/L CLR) was developed as a corrective action to Extravehicular Mobility Unit (EMU) coolant flow disruptions experienced on the International Space Station (ISS) in May of 2004 and thereafter. A conservative duty cycle and set of use parameters for A/L CLR use and component life were initially developed and implemented based on prior analysis results and analytical modeling. Several initiatives were undertaken to optimize the duty cycle and use parameters of the hardware. Examination of post-flight samples and EMU Coolant Loop hardware provided invaluable information on the performance of the A/L CLR and has allowed for an optimization of the process. The intent of this paper is to detail the evolution of the A/L CLR hardware, efforts to optimize the duty cycle and use parameters, and the final recommendations for implementation in the post-Shuttle retirement era.

  16. Chemical calculations on Cray computers

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.

    1989-01-01

    The influence of recent developments in supercomputing on computational chemistry is discussed, with particular reference to Cray computers and their pipelined vector/limited parallel architectures. After a review of Cray hardware and software, the performance of different elementary program structures is examined, and effective methods for improving program performance are outlined. The computational strategies appropriate for obtaining optimum performance in applications to quantum chemistry and dynamics are discussed. Finally, some discussion is given of new developments and future hardware and software improvements.

  17. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  18. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.

  19. Innovative solar thermochemical water splitting.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Roy E. Jr.; Siegel, Nathan P.; Evans, Lindsey R.

    2008-02-01

    Sandia National Laboratories (SNL) is evaluating the potential of an innovative approach for splitting water into hydrogen and oxygen using two-step thermochemical cycles. Thermochemical cycles are heat engines that utilize high-temperature heat to produce chemical work. Like their mechanical work-producing counterparts, their efficiency depends on operating temperature and on the irreversibility of their internal processes. With this in mind, we have invented innovative design concepts for two-step solar-driven thermochemical heat engines based on iron oxide and iron oxide mixed with other metal oxides (ferrites). The design concepts utilize two sets of moving beds of ferrite reactant material in close proximity and moving in opposite directions to overcome a major impediment to achieving high efficiency--thermal recuperation between solids in efficient counter-current arrangements. They also provide inherent separation of the product hydrogen and oxygen and are an excellent match with high-concentration solar flux. However, they also impose unique requirements on the ferrite reactants and materials of construction as well as an understanding of the chemical and cycle thermodynamics. In this report the Counter-Rotating-Ring Receiver/Reactor/Recuperator (CR5) solar thermochemical heat engine and its basic operating principles are described. Preliminary thermal efficiency estimates are presented and discussed. Our ferrite reactant material development activities, thermodynamic studies, test results, and prototype hardware development are also presented.

  20. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
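A minimal sketch of the profiling idea described above, assuming a hypothetical profile format in which the hardware power consumption profile maps each operation type to watts attributed per million operations (the patent does not specify a format):

```python
def application_power_profile(op_counts, hw_profile):
    """Combine an application's operation counts with a per-operation
    hardware power profile into an application power profile.

    op_counts: millions of operations of each type performed by the app.
    hw_profile: watts attributed per million operations of each type.
    Both formats are hypothetical illustrations, not the patent's.
    """
    return {op: op_counts[op] * hw_profile.get(op, 0.0) for op in op_counts}

# Hypothetical hardware power consumption profile for one compute node.
hw_profile = {"flops": 0.8, "mem_loads": 1.2, "network_sends": 2.5}
# Hypothetical operation counts (millions) observed for one application.
op_counts = {"flops": 120.0, "mem_loads": 45.0, "network_sends": 3.0}

profile = application_power_profile(op_counts, hw_profile)
total = sum(profile.values())  # power attributed to the whole application
```

The reported per-operation breakdown is what makes the profile actionable: it shows which class of activity dominates the application's power draw.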

  1. DIFFICULTY IN THE FORMATION OF COUNTER-ORBITING HOT JUPITERS FROM NEAR-COPLANAR HIERARCHICAL TRIPLE SYSTEMS: A SUB-STELLAR PERTURBER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Yuxin; Suto, Yasushi, E-mail: yuxin@utap.phys.s.u-tokyo.ac.jp

    2016-03-20

    Among 100 transiting planets with a measured projected spin–orbit angle λ, several systems are suggested to be counter-orbiting. While these cases may be due to the projection effect, the mechanism that produces a counter-orbiting planet has not been established. A promising scenario for counter-orbiting planets is the extreme eccentricity evolution in near-coplanar hierarchical triple systems with eccentric inner and outer orbits. We examine this scenario in detail by performing a series of systematic numerical simulations, and consider the possibility of forming hot Jupiters (HJs), especially a counter-orbiting one, under this mechanism with a distant sub-stellar perturber. We incorporate quadrupole and octupole secular gravitational interaction between the two orbits, and also short-range forces (correction for general relativity, star and inner planetary tide, and rotational distortion) simultaneously. We find that most systems are tidally disrupted and that a small fraction of the surviving planets turn out to be prograde. The formation of counter-orbiting HJs in this scenario is possible only in a very restricted parameter region, and thus is very unlikely in practice.

  2. Extravehicular activity training and hardware design consideration

    NASA Technical Reports Server (NTRS)

    Thuot, P. J.; Harbaugh, G. J.

    1995-01-01

    Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.

  3. Extravehicular activity training and hardware design consideration.

    PubMed

    Thuot, P J; Harbaugh, G J

    1995-07-01

    Preparing astronauts to perform the many complex extravehicular activity (EVA) tasks required to assemble and maintain Space Station will be accomplished through training simulations in a variety of facilities. The adequacy of this training is dependent on a thorough understanding of the task to be performed, the environment in which the task will be performed, high-fidelity training hardware and an awareness of the limitations of each particular training facility. Designing hardware that can be successfully operated, or assembled, by EVA astronauts in an efficient manner, requires an acute understanding of human factors and the capabilities and limitations of the space-suited astronaut. Additionally, the significant effect the microgravity environment has on the crew members' capabilities has to be carefully considered not only for each particular task, but also for all the overhead related to the task and the general overhead associated with EVA. This paper will describe various training methods and facilities that will be used to train EVA astronauts for Space Station assembly and maintenance. User-friendly EVA hardware design considerations and recent EVA flight experience will also be presented.

  4. A Mini Multi-Gas Detection System Based on Infrared Principle

    NASA Astrophysics Data System (ADS)

    Zhijian, Xie; Qiulin, Tan

    2006-12-01

    To counter the problems of gas accidents in coal mines and of family safety hazards arising from household gas use, a new integrated, miniaturized infrared detection system has been developed. The infrared optical detection principle used in developing this system is analyzed in detail. Multi-gas detection is introduced and motivated by extending the analysis of single-gas detection. Through research on the design of the cell structure, an integrated, miniaturized gas cell has been devised. The data transmission scheme over a Controller Area Network (CAN) bus is explained. With a Single-Chip Microcomputer (SCM) providing the intelligent processing, the functional block diagram of the gas detection system is designed, and its hardware and software systems are analyzed and devised. The system meets the technical requirements of low power consumption, small volume, and large measurement range, and is able to realize multi-gas detection.

  5. Test program, helium II orbital resupply coupling

    NASA Technical Reports Server (NTRS)

    Hyatt, William S.

    1991-01-01

    The full scope of this program was to have included development tests, design and production of custom test equipment, and acceptance and qualification testing of prototype and protoflight coupling hardware. This program was performed by Ball Aerospace Systems Division, Boulder, Colorado, until its premature termination in May 1991. Development tests were performed on cryogenic face seals and flow control devices at superfluid helium (He II) conditions. Special equipment was developed to allow quantified leak detection at large leak rates up to 8.4 x 10^-4 SCCS. Two major fixtures were developed and characterized: The Cryogenic Test Fixture (CTF) and the Thermal Mismatch Fixture (Glovebox). The CTF allows the coupling hardware to be filled with liquid nitrogen (LN2), liquid helium (LHe) or sub-cooled liquid helium when hardware flow control valves are either open or closed. Heat leak measurements, internal and external helium leakage measurements, cryogenic proof pressure tests and external load applications are performed in this fixture. Special reusable MLI closures were developed to provide repeatable installations in the CTF. The Thermal Mismatch Fixture allows all design configurations of coupling hardware to be engaged and disengaged while measuring applied forces and torques. Any two hardware components may be individually thermally preconditioned within the range of 117 K to 350 K prior to engage/disengage cycling. This verifies dimensional compatibility and operation when thermally mismatched. A clean, dry GN2 atmosphere is maintained in the fixture at all times. The first shipset of hardware was received, inspected and cycled at room temperature just prior to program termination.

  6. Lossless data compression for improving the performance of a GPU-based beamformer.

    PubMed

    Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi

    2015-04-01

    The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g., Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables compression and decompression of data in parallel. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.
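The article's codec is not detailed here; as a generic illustration of a lossless transform that is friendly to blockwise parallel decoding, the sketch below delta-encodes integer RF samples and verifies the lossless round trip (illustrative only, not the authors' algorithm):

```python
def delta_encode(samples):
    """Delta-encode a sequence of integer RF samples.

    Neighboring RF samples are correlated, so the deltas are small and
    compress well in a downstream entropy coder. Applied per block, many
    blocks can be decoded independently and in parallel on a GPU.
    Illustrative sketch only, not the codec from the article.
    """
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation (exact, hence lossless)."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

rf = [100, 102, 105, 103, 101, 99]
assert delta_decode(delta_encode(rf)) == rf  # lossless round trip
```

The key property mirrored from the article is losslessness: unlike JPEG-style front-end compression, the decoded samples are bit-exact, so image quality is unaffected.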

  7. EHWPACK: An evolvable hardware environment using the SPICE simulator and the Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Keymeulen, D.; Klimeck, G.; Zebulum, R.; Stoica, A.; Jin, Y.; Lazaro, C.

    2000-01-01

    This paper describes the EHW development system, a tool that performs the evolutionary synthesis of electronic circuits, using the SPICE simulator and the Field Programmable Transistor Array hardware (FPTA) developed at JPL.

  8. Affordable Emerging Computer Hardware for Neuromorphic Computing Applications

    DTIC Science & Technology

    2011-09-01

    ...speedup over software [3, 4]. ... Table 1 shows a comparison of the computing performance, communication performance, power consumption ... time is probably 5 frames per second, corresponding to 5 saccades. III. RESULTS AND DISCUSSION: The use of IBM Cell-BE technology (Sony PlayStation ...

  9. The Art of Space Flight Exercise Hardware: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Beyene, Nahom M.

    2004-01-01

    The design of space flight exercise hardware depends on experience with crew health maintenance in a microgravity environment, history in development of flight-quality exercise hardware, and a foundation for certifying proper project management and design methodology. Developed over the past 40 years, the expertise in designing exercise countermeasures hardware at the Johnson Space Center stems from these three aspects of design. The medical community has steadily pursued an understanding of physiological changes in humans in a weightless environment and methods of counteracting negative effects on the cardiovascular and musculoskeletal systems. The effects of weightlessness extend to the pulmonary and neurovestibular systems as well, with conditions ranging from motion sickness to loss of bone density. Results have shown losses in water weight and muscle mass in anti-gravity muscle groups. With the support of university-based research groups and partner space agencies, NASA has identified exercise as the primary countermeasure for long-duration space flight. The history of exercise hardware began during the Apollo Era and leads directly to the present hardware on the International Space Station. Under the classifications of aerobic and resistive exercise, there is a clear line of development from the early devices to the countermeasures hardware used today. In support of all engineering projects, the engineering directorate has created a structured framework for project management. Engineers have identified standards and "best practices" to promote efficient and elegant design of space exercise hardware. The quality of space exercise hardware depends on how well hardware requirements are justified by exercise performance guidelines and crew health indicators. Because the device operates in microgravity, designers must consider the performance of the hardware separately from that of the combined human-in-hardware system.
Astronauts are the caretakers of the hardware while it is deployed and conduct all sanitization, calibration, and maintenance for the devices. Thus, hardware designs must account for these issues with a goal of minimizing crew time on orbit required to complete these tasks. In the future, humans will venture to Mars and exercise countermeasures will play a critical role in allowing us to continue in our spirit of exploration. NASA will benefit from further experimentation on Earth, through the International Space Station, and with advanced biomechanical models to quantify how each device counteracts specific symptoms of weightlessness. With the continued support of international space agencies and the academic research community, we will usher the next frontier in human space exploration.

  10. Consumers' perceptions of vape shops in Southern California: an analysis of online Yelp reviews.

    PubMed

    Sussman, Steve; Garcia, Robert; Cruz, Tess Boley; Baezconde-Garbanati, Lourdes; Pentz, Mary Ann; Unger, Jennifer B

    2014-01-01

    E-cigarettes are sold at many different types of retail establishments. A new type of shop has emerged, the vape shop, which specializes in sales of varied types of e-cigarettes. Vape shops allow users to sample several types. There are no empirical research articles on vape shops. Information is needed on consumers' beliefs and behaviors about these shops, the range of products sold, marketing practices, and variation in shop characteristics by ethnic community and potential counter-marketing messages. This study is the first to investigate marketing characteristics of vape shops located in different ethnic neighborhoods in Los Angeles, by conducting a Yelp electronic search and content analysis of consumer reports on vape shops they have visited. The primary measure was Yelp reviews (N = 103 vape shops in the Los Angeles, California area), which were retrieved and content coded. We compared the attributes of vape shops representing four ethnic communities: African American, Hispanic/Latino, Korean, and White. Vape shop attributes listed as most important were the selection of flavors or hardware (95%), fair prices (92%), and unique flavors or hardware (89%). Important staff marketing attributes included being friendly (99%), helpful/patient/respectful (97%), and knowledgeable/professional (95%). Over one-half of the shops were rated as clean (52%) and relaxed (61%). Relatively few of the reviews mentioned quitting smoking (32%) or safety of e-cigarettes (15%). The selection of flavors and hardware appeared relatively important in Korean ethnic location vape shops. Yelp reviews may influence potential consumers. As such, the present study's focus on Yelp reviews addressed at least eight of the FDA's Center for Tobacco Products' priorities pertaining to marketing influences on consumer beliefs and behaviors. 
The findings suggest that there were several vape shop and product attributes that consumers considered important to disseminate to others through postings on Yelp. Lack of health warnings about these products may misrepresent their potential risk. The main influence variables were product variety and price. There was only limited evidence of ethnic-neighborhood influence, for example regarding the importance of flavors and hardware. Shop observational studies are recommended to discern safety factors across different ethnic neighborhoods.

  11. Transitioning the second-generation antihistamines to over-the-counter status: a cost-effectiveness analysis.

    PubMed

    Sullivan, Patrick W; Follin, Sheryl L; Nichol, Michael B

    2003-12-01

    A U.S. Food and Drug Administration advisory committee deemed the second-generation antihistamines (SGA) safe for over-the-counter use against the preliminary opposition of the manufacturers. As a result, loratadine is now available over-the-counter. First-generation antihistamines (FGA) are associated with an increased risk of unintentional injuries, fatalities, and reduced productivity. Access to SGA over-the-counter could result in decreased use of FGA, thereby reducing deleterious outcomes. The societal impact of transitioning this class of medications from prescription to over-the-counter status has important policy implications. To examine the cost-effectiveness of transitioning SGA to over-the-counter status from a societal perspective. A simulation model of the decision to transition SGA to over-the-counter status was compared with retaining prescription-only status for a hypothetical cohort of individuals with allergic rhinitis in the United States. Estimates of costs and effectiveness were obtained from the medical literature and national surveys. Sensitivity analysis was performed using a second-order Monte Carlo simulation. Discounted, quality-adjusted life-years saved as a result of amelioration of allergic rhinitis symptoms and avoidance of motor vehicle, occupational, public and home injuries and fatalities; discounted direct and indirect costs. Availability of SGA over-the-counter was associated with annual savings of 4 billion dollars (2.4-5.3 billion dollars) or 100 dollars (64-137 dollars) per allergic rhinitis sufferer and 135,061 time-discounted quality-adjusted life years (84,913-191,802). The sensitivity analysis provides evidence in support of these results. Making SGA available over-the-counter is both cost-saving and more effective for society, largely as a result of reduced adverse outcomes associated with FGA-induced sedation. Further study is needed to determine the differential impact on specific vulnerable populations.

  12. Active control of counter-rotating open rotor interior noise in a Dornier 728 experimental aircraft

    NASA Astrophysics Data System (ADS)

    Haase, Thomas; Unruh, Oliver; Algermissen, Stephan; Pohl, Martin

    2016-08-01

    The fuel consumption of future civil aircraft needs to be reduced because of the CO2 restrictions declared by the European Union. A consequent lightweight design and a new engine concept called the counter-rotating open rotor are seen as key technologies in the attempt to reach these ambitious goals. Bearing in mind that counter-rotating open rotor engines emit very high sound pressures at low frequencies and that lightweight structures have a poor transmission loss in the lower frequency range, these key technologies raise new questions in regard to acoustic passenger comfort. One of the promising solutions for the reduction of sound pressure levels inside the aircraft cabin is the use of active sound and vibration systems. So far, active concepts have rarely been investigated for a counter-rotating open rotor pressure excitation on complex airframe structures. Hence, the state of the art is augmented by the preliminary study presented in this paper. The study shows how an active vibration control system can influence the sound transmission of counter-rotating open rotor noise through a complex airframe structure into the cabin. Furthermore, open questions on the way towards the realisation of an active control system are addressed. In this phase, an active feedforward control system is investigated in a fully equipped Dornier 728 experimental prototype aircraft. In particular, the sound transmission through the airframe, the coupling of classical actuators (inertial and piezoelectric patch actuators) into the structure, and the performance of the active vibration control system with different error sensors are investigated. It can be shown that the active control system achieves a reduction of up to 5 dB at several counter-rotating open rotor frequencies, but also that a better performance could be achieved through further optimisations.

  13. Human performance interfaces in air traffic control.

    PubMed

    Chang, Yu-Hern; Yeh, Chung-Hsing

    2010-01-01

    This paper examines how human performance factors in air traffic control (ATC) affect each other through their mutual interactions. The paper extends the conceptual SHEL model of ergonomics to describe the ATC system as human performance interfaces in which the air traffic controllers interact with other human performance factors including other controllers, software, hardware, environment, and organisation. New research hypotheses about the relationships between human performance interfaces of the system are developed and tested on data collected from air traffic controllers, using structural equation modelling. The research result suggests that organisation influences play a more significant role than individual differences or peer influences on how the controllers interact with the software, hardware, and environment of the ATC system. There are mutual influences between the controller-software, controller-hardware, controller-environment, and controller-organisation interfaces of the ATC system, with the exception of the controller-controller interface. Research findings of this study provide practical insights in managing human performance interfaces of the ATC system in the face of internal or external change, particularly in understanding its possible consequences in relation to the interactions between human performance factors.

  14. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  15. Gold leaf counter electrodes for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Shimada, Kazuhiro; Toyoda, Takeshi

    2018-03-01

    In this study, a gold leaf 100 nm thin film is used as the counter electrode in dye-sensitized solar cells. The traditional method of hammering gold foil to obtain a thin gold leaf, which requires only small amounts of gold, was employed. The gold leaf was then attached to the substrate using an adhesive to produce the gold electrode. The proposed approach for fabricating counter electrodes is demonstrated to be facile and cost-effective, as opposed to existing techniques. Compared with electrodes prepared with gold foil and sputtered gold, the gold leaf counter electrode demonstrates higher catalytic activity with a cobalt-complex electrolyte and higher cell efficiency. The origin of the improved performance was investigated by surface morphology examination (scanning electron microscopy), various electrochemical analyses (cyclic voltammetry, linear sweep voltammetry, and electrochemical impedance spectroscopy), and crystalline analysis (X-ray diffractometry).

  16. The HERSCHEL detector: high-rapidity shower counters for LHCb

    NASA Astrophysics Data System (ADS)

    Carvalho Akiba, K.; Alessio, F.; Bondar, N.; Byczynski, W.; Coco, V.; Collins, P.; Dumps, R.; Dzhelyadin, R.; Gandini, P.; Gruberg Cazon, B. R.; Jacobsson, R.; Johnson, D.; Manthey, J.; Mauricio, J.; McNulty, R.; Monteil, S.; Rachwal, B.; Ravonel Salzgeber, M.; Roy, L.; Schindler, H.; Stevenson, S.; Wilkinson, G.

    2018-04-01

    The HERSCHEL detector consists of a set of scintillating counters, designed to increase the coverage of the LHCb experiment in the high-rapidity regions on either side of the main spectrometer. The new detector improves the capabilities of LHCb for studies of diffractive interactions, most notably Central Exclusive Production. In this paper the construction, installation, commissioning, and performance of HERSCHEL are presented.

  17. Effect of a Hypocretin/Orexin Antagonist on Neurocognitive Performance

    DTIC Science & Technology

    2012-09-01

    NEY-1413 FINAL Version 8.05JAN2012 ... Current use of statins, ketoconazole, prescription or over-the-counter medications or herbal supplements containing psychoactive properties or ...

  18. Effect of a Hypocretin/Orexin Antagonist on Neurocognitive Performance

    DTIC Science & Technology

    2014-09-01

    NEY-1413 FINAL Version 10.30 JANUARY 2014 ... medical conditions; 12.) Current use of statins, ketoconazole, prescription or over-the-counter medications or herbal supplements containing psychoactive properties or stimulants in the judgment of the ...

  19. Educational Studies of Cosmic Rays with a Telescope of Geiger-Muller Counters

    ERIC Educational Resources Information Center

    Wibig, T.; Kolodziejczak, K.; Pierzynski, R.; Sobczak, R.

    2006-01-01

    A group of high school students (XII Liceum) in the framework of the Roland Maze Project has built a compact telescope of three Geiger-Muller counters. The connection between the telescope and a PC computer was also created and programmed by students involved in the Project. This has allowed students to use their equipment to perform serious…

  20. Impact of Turbine Modulation on Variable-Cycle Engine Performance. Phase 4. Additional Hardware Design and Fabrication, Engine Modification, and Altitude Test. Part 3 B

    DTIC Science & Technology

    1974-12-01

    ...turbofan engine performance. An AiResearch Model TFE731-2 Turbofan Engine was modified to incorporate production-type variable-geometry hardware ... reliability was shown for the variable-geometry components. The TFE731, modified to include variable geometry, proved to be an inexpensive ... at a Net Thrust of 3300 LBF ... Variable-Cycle Engine TFE731 Exhaust-Nozzle Performance ... Analytical Model Comparisons, Aerodynamic

  1. Use of Heritage Hardware on Orion MPCV Exploration Flight Test One

    NASA Technical Reports Server (NTRS)

    Rains, George Edward; Cross, Cynthia D.

    2012-01-01

    Due to an aggressive schedule for the first space flight of an unmanned Orion capsule, currently known as Exploration Flight Test One (EFT1), combined with severe programmatic funding constraints, an effort was made within the Orion Program to identify heritage hardware, i.e., already existing, flight-certified components from previous manned space programs, which might be available for use on EFT1. With the end of the Space Shuttle Program, no current means exists to launch Multi-Purpose Logistics Modules (MPLMs) to the International Space Station (ISS), and so many flight-certified Shuttle and MPLM components are available for other purposes. Two of these items are the MPLM cabin Positive Pressure Relief Assembly (PPRA) and the Shuttle Ground Support Equipment Heat Exchanger (GSE HX). In preparation for the utilization of these components by the Orion Program, analyses and testing of the hardware were performed. The PPRA had to be analyzed to determine its susceptibility to pyrotechnic shock, and vibration testing had to be performed, since those environments are predicted to be more severe during an Orion mission than those the hardware was originally designed to accommodate. The GSE HX had to be tested for performance with the Orion thermal working fluids, which are different from those used by the Space Shuttle. This paper summarizes the activities required in order to utilize heritage hardware for EFT1.

  2. Use of Heritage Hardware on MPCV Exploration Flight Test One

    NASA Technical Reports Server (NTRS)

    Rains, George Edward; Cross, Cynthia D.

    2011-01-01

    Due to an aggressive schedule for the first orbital test flight of an unmanned Orion capsule, known as Exploration Flight Test One (EFT1), combined with severe programmatic funding constraints, an effort was made to identify heritage hardware, i.e., already existing, flight-certified components from previous manned space programs, which might be available for use on EFT1. With the end of the Space Shuttle Program, no current means exists to launch Multi Purpose Logistics Modules (MPLMs) to the International Space Station (ISS), and so many flight-certified Shuttle and MPLM components are available for other purposes. Two of these items are the Shuttle Ground Support Equipment Heat Exchanger (GSE Hx) and the MPLM cabin Positive Pressure Relief Assembly (PPRA). In preparation for the utilization of these components by the Orion Program, analyses and testing of the hardware were performed. The PPRA had to be analyzed to determine its susceptibility to pyrotechnic shock, and vibration testing had to be performed, since those environments are predicted to be significantly more severe during an Orion mission than those the hardware was originally designed to accommodate. The GSE Hx had to be tested for performance with the Orion thermal working fluids, which are different from those used by the Space Shuttle. This paper summarizes the certification of the use of heritage hardware for EFT1.

  3. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

    An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description Language (VHDL), for a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
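A conservative time-synchronization protocol, as mentioned above, only processes an event once its timestamp falls below a bound that no other node can still undercut. As a rough illustration of that idea (not the report's VHDL coprocessor design; the horizon bound and event format are our own), a minimal event loop might look like:

```python
import heapq

def run_des(events, horizon):
    """Minimal discrete-event loop: pop events in timestamp order, but only
    while they lie below `horizon`, a lower bound on any timestamp a remote
    node could still send (the conservative-synchronization condition)."""
    queue = list(events)            # (timestamp, name) pairs
    heapq.heapify(queue)
    processed = []
    while queue and queue[0][0] <= horizon:
        t, name = heapq.heappop(queue)
        processed.append((t, name))
    return processed

events = [(5.0, "arrive"), (1.0, "start"), (9.0, "depart")]
processed = run_des(events, horizon=6.0)   # → [(1.0, 'start'), (5.0, 'arrive')]
```

In a distributed run, each node would advance `horizon` from the minimum timestamp promised by its neighbors; the accelerator described in the record offloads exactly this queue-and-message bookkeeping.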

  4. Performance of the Research Animal Holding Facility (RAHF) and General Purpose Work Station (GPWS) and other hardware in the microgravity environment

    NASA Technical Reports Server (NTRS)

    Hogan, Robert P.; Dalton, Bonnie P.

    1991-01-01

    This paper discusses the performance of the Research Animal Holding Facility (RAHF) and General Purpose Work Station (GPWS), plus other associated hardware, during the recent flight of Spacelab Life Sciences 1 (SLS-1). The RAHF was developed to provide proper housing (food, water, temperature control, lighting, and waste management) for up to 24 rodents during flights on the Spacelab. The GPWS was designed to contain particulates and toxic chemicals generated during plant and animal handling and dissection/fixation activities during space flights. The history of the hardware development, as well as the redesign activities prior to the actual flight, is discussed.

  5. Recent Technology Advances in Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis

    2017-01-01

    This presentation provides an overview of the work performed at NASA Glenn Research Center in distributed engine control technology. This is control system hardware technology that overcomes engine system constraints by modularizing control hardware and integrating the components over communication networks.

  6. Skylab mission report, second visit. [postflight analysis of engineering, experimentation, and medical aspects

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An evaluation is presented of the operational and engineering aspects of the second Skylab flight. Other areas described include: the performance of experimental hardware; the crew's evaluation of the flight; medical aspects; and hardware anomalies.

  7. Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator

    NASA Technical Reports Server (NTRS)

    Bents, D. J.

    1982-01-01

    A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs that occur. They are compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.

  8. Kinematic and stellar population properties of the counter-rotating components in the S0 galaxy NGC 1366

    NASA Astrophysics Data System (ADS)

    Morelli, L.; Pizzella, A.; Coccato, L.; Corsini, E. M.; Dalla Bontà, E.; Buson, L. M.; Ivanov, V. D.; Pagotto, I.; Pompei, E.; Rocco, M.

    2017-04-01

    Context. Many disk galaxies host two extended stellar components that rotate in opposite directions. The analysis of the stellar populations of the counter-rotating components provides constraints on the environmental and internal processes that drive their formation. Aims: The S0 NGC 1366 in the Fornax cluster is known to host a stellar component that is kinematically decoupled from the main body of the galaxy. Here we successfully separated the two counter-rotating stellar components to independently measure the kinematics and properties of their stellar populations. Methods: We performed a spectroscopic decomposition of the spectrum obtained along the galaxy major axis and separated the relative contribution of the two counter-rotating stellar components and of the ionized-gas component. We measured the line-strength indices of the two counter-rotating stellar components and modeled each of them with single stellar population models that account for the α/Fe overabundance. Results: We found that the counter-rotating stellar component is younger, has nearly the same metallicity, and is less α/Fe enhanced than the corotating component. Unlike most of the counter-rotating galaxies, the ionized gas detected in NGC 1366 is neither associated with the counter-rotating stellar component nor with the main galaxy body. On the contrary, it has a disordered distribution and a disturbed kinematics with multiple velocity components observed along the minor axis of the galaxy. Conclusions: The different properties of the counter-rotating stellar components and the kinematic peculiarities of the ionized gas suggest that NGC 1366 is at an intermediate stage of the acquisition process, building the counter-rotating components with some gas clouds still falling onto the galaxy. Based on observations made with ESO Telescopes at the La Silla-Paranal Observatory under programmes 075.B-0794 and 077.B-0767.

  9. Post-Shuttle EVA Operations on ISS

    NASA Technical Reports Server (NTRS)

    West, Bill; Witt, Vincent; Chullen, Cinda

    2010-01-01

    The EVA hardware used to assemble and maintain the ISS was designed with the assumption that it would be returned to Earth on the Space Shuttle for ground processing, refurbishment, or failure investigation (if necessary). With the retirement of the Space Shuttle, a new concept of operations was developed to enable EVA hardware (EMU, Airlock Systems, EVA tools, and associated support equipment and consumables) to perform ISS EVAs until 2016 and possibly beyond to 2020. Shortly after the decision to retire the Space Shuttle was announced, NASA and the One EVA contractor team jointly initiated the EVA 2010 Project. Challenges were addressed to extend the operating life and certification of EVA hardware, secure the capability to launch EVA hardware safely on alternate launch vehicles, and protect EMU hardware operability on orbit for long durations.

  10. Submillisecond X-ray photon correlation spectroscopy from a pixel array detector with fast dual gating and no readout dead-time

    DOE PAGES

    Zhang, Qingteng; Dufresne, Eric M.; Grybos, Pawel; ...

    2016-04-19

    Small-angle scattering X-ray photon correlation spectroscopy (XPCS) studies were performed using a novel photon-counting pixel array detector with dual counters for each pixel. Each counter can be read out independently from the other to ensure there is no readout dead-time between the neighboring frames. A maximum frame rate of 11.8 kHz was achieved. Results on test samples show good agreement with simple diffusion. Lastly, the potential of extending the time resolution of XPCS beyond the limit set by the detector frame rate using dual counters is also discussed.
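The dual-counter scheme eliminates dead-time by ping-ponging the two per-pixel counters: one integrates photons while the other is read out and cleared, and their roles swap at every frame boundary. A toy Python model of that behaviour (class and method names are our own, not the detector's firmware):

```python
class DualCounterPixel:
    """Toy model of a pixel with two gated counters: while one counter
    accumulates hits, the other is read out and cleared, so consecutive
    frames have no readout dead-time between them."""

    def __init__(self):
        self.counters = [0, 0]
        self.active = 0            # index of the counter currently integrating

    def hit(self):
        """Record a photon in the counter that is currently integrating."""
        self.counters[self.active] += 1

    def end_frame(self):
        """Swap counter roles and return the count from the finished frame."""
        done = self.active
        self.active ^= 1           # the other counter starts integrating now
        count, self.counters[done] = self.counters[done], 0
        return count
```

Because `end_frame` swaps roles before the old counter is read, photons arriving during readout are never lost, which is the property the record exploits to reach an 11.8 kHz frame rate.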

  11. Submillisecond X-ray photon correlation spectroscopy from a pixel array detector with fast dual gating and no readout dead-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Qingteng; Dufresne, Eric M.; Grybos, Pawel

    Small-angle scattering X-ray photon correlation spectroscopy (XPCS) studies were performed using a novel photon-counting pixel array detector with dual counters for each pixel. Each counter can be read out independently from the other to ensure there is no readout dead-time between the neighboring frames. A maximum frame rate of 11.8 kHz was achieved. Results on test samples show good agreement with simple diffusion. Lastly, the potential of extending the time resolution of XPCS beyond the limit set by the detector frame rate using dual counters is also discussed.

  12. Submillisecond X-ray photon correlation spectroscopy from a pixel array detector with fast dual gating and no readout dead-time.

    PubMed

    Zhang, Qingteng; Dufresne, Eric M; Grybos, Pawel; Kmon, Piotr; Maj, Piotr; Narayanan, Suresh; Deptuch, Grzegorz W; Szczygiel, Robert; Sandy, Alec

    2016-05-01

    Small-angle scattering X-ray photon correlation spectroscopy (XPCS) studies were performed using a novel photon-counting pixel array detector with dual counters for each pixel. Each counter can be read out independently from the other to ensure there is no readout dead-time between the neighboring frames. A maximum frame rate of 11.8 kHz was achieved. Results on test samples show good agreement with simple diffusion. The potential of extending the time resolution of XPCS beyond the limit set by the detector frame rate using dual counters is also discussed.

  13. Novel algorithm implementations in DARC: the Durham AO real-time controller

    NASA Astrophysics Data System (ADS)

    Basden, Alastair; Bitenc, Urban; Jenkins, David

    2016-07-01

    The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.
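Correlation wavefront sensing, one of the DARC algorithms mentioned above, locates a (possibly elongated) spot by the peak of its cross-correlation with a reference image. The following 1-D sketch is an illustration of the principle only; the real system operates on 2-D sub-aperture images with sub-pixel interpolation, and this simplification is our own:

```python
import numpy as np

def correlation_shift(image, reference):
    """Estimate the integer shift of `image` relative to `reference` from
    the peak of their cross-correlation (1-D toy version of correlation
    wavefront sensing on a sub-aperture)."""
    corr = np.correlate(image, reference, mode="full")
    # In 'full' mode, index len(reference)-1 corresponds to zero lag.
    return int(np.argmax(corr)) - (len(reference) - 1)

ref = np.array([0., 1., 3., 1., 0., 0., 0.])   # reference spot profile
img = np.array([0., 0., 0., 1., 3., 1., 0.])   # same spot, shifted by 2 pixels
shift = correlation_shift(img, ref)            # → 2
```

Correlation centroiding of this kind is more robust than simple centre-of-gravity estimation when laser guide star spots are elongated, which is why such options matter for ELT-scale systems.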

  14. HPC Programming on Intel Many-Integrated-Core Hardware with MAGMA Port to Xeon Phi

    DOE PAGES

    Dongarra, Jack; Gates, Mark; Haidar, Azzam; ...

    2015-01-01

    This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms for multicore with Intel Xeon Phi coprocessors. In particular, we consider algorithms for solving linear systems. Further, we give an overview of the MAGMA MIC library, an open source, high performance library, that incorporates the developments presented here and, more broadly, provides the DLA functionality equivalent to that of the popular LAPACK library while targeting heterogeneous architectures that feature a mix of multicore CPUs and coprocessors. The LAPACK-compliance simplifies the use of the MAGMA MIC library in applications, while providing them with portably performant DLA. High performance is obtained through the use of the high-performance BLAS, hardware-specific tuning, and a hybridization methodology whereby we split the algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware by minimizing data movements and mapping algorithmic requirements to the architectural strengths of the various heterogeneous hardware components. Our methodology and programming techniques are incorporated into the MAGMA MIC API, which abstracts the application developer from the specifics of the Xeon Phi architecture and is therefore applicable to algorithms beyond the scope of DLA.

  15. Synthetic hardware performance analysis in virtualized cloud environment for healthcare organization.

    PubMed

    Tan, Chee-Heng; Teh, Ying-Wah

    2013-08-01

    The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in a virtual machine, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system constructed from TPC-H queries. Second, a mechanism to construct stress-testing scenarios in the host is envisaged, using a single TPC-H query or a combination of them, so that the resource threshold point can be verified, i.e., whether the virtual machine is still capable of serving critical transactions at this constraining juncture. This threshold point uses server run queue size as its input parameter and serves two purposes. First, it provides the boundary threshold to the control system, so that periodic learning of the synthetic data sets for performance evaluation does not reach the host's constraint level. Second, when the host undergoes a hardware change, stress-testing scenarios are simulated in the host by loading it up to this resource threshold level, for subsequent response-time verification using real and critical transactions.
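The run-queue threshold logic described above can be sketched as a simple control loop (names and structure are hypothetical, not from the paper): synthetic load is stepped up until the observed run queue size reaches the boundary threshold, at which point loading stops.

```python
# Hedged sketch (hypothetical names): step up synthetic TPC-H-style load
# until the server run queue size reaches the boundary threshold, then
# stop so critical transactions can be verified at that juncture.
def steps_to_threshold(run_queue_per_step, threshold):
    """Return the number of load steps applied before the run queue
    size first reaches the threshold."""
    for steps, queue_size in enumerate(run_queue_per_step):
        if queue_size >= threshold:
            return steps        # host is at its resource boundary
    return len(run_queue_per_step)  # threshold never reached
```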

  16. Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.; Pietz, K. C.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.

  17. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique for rapid recovery from transient processor failures and has previously been implemented in hardware in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy, while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard-removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  18. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/electrical power generation subsystem

    NASA Technical Reports Server (NTRS)

    Patton, Jeff A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  19. Performance Measurement of Advanced Stirling Convertors (ASC-E3)

    NASA Technical Reports Server (NTRS)

    Oriti, Salvatore M.

    2013-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing data of the Advanced Stirling Convertor (ASC). The latest version of the ASC (ASC-E3, to represent the third cycle of engineering model test hardware) is of a design identical to the forthcoming flight convertors. For this generation of hardware, a joint Sunpower and GRC effort was initiated to improve and standardize the test support hardware. After this effort was completed, the first pair of ASC-E3 units was produced by Sunpower and then delivered to GRC in December 2012. GRC has begun operation of these units. This process included performance verification, which examined the data from various tests to validate the convertor performance to the product specification. Other tests included detailed performance mapping that encompassed the wide range of operating conditions that will exist during a mission. These convertors were then transferred to Lockheed Martin for controller checkout testing. The results of this latest convertor performance verification activity are summarized here.

  20. Demonstration Advanced Avionics System (DAAS) functional description. [Cessna 402B aircraft

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A comprehensive set of general aviation avionics were defined for integration into an advanced hardware mechanization for demonstration in a Cessna 402B aircraft. Block diagrams are shown and system and computer architecture as well as significant hardware elements are described. The multifunction integrated data control center and electronic horizontal situation indicator are discussed. The functions that the DAAS will perform are examined. This function definition is the basis for the DAAS hardware and software design.

  1. A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications

    DOE PAGES

    James, Conrad D.; Aimone, James B.; Miner, Nadine E.; ...

    2017-01-04

    In this study, biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. Additionally, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements were the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.

  2. A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.; Aimone, James B.; Miner, Nadine E.

    In this study, biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. Additionally, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements were the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.

  3. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  4. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  5. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  6. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  7. Oxygen Generation System Laptop Bus Controller Flight Software

    NASA Technical Reports Server (NTRS)

    Rowe, Chad; Panter, Donna

    2009-01-01

    The Oxygen Generation System Laptop Bus Controller Flight Software was developed to allow the International Space Station (ISS) program to activate specific components of the Oxygen Generation System (OGS) to perform a checkout of key hardware operation in a microgravity environment, as well as to perform preventative maintenance operations of system valves during a long period of what would otherwise be hardware dormancy. The software provides direct connectivity to the OGS Firmware Controller with pre-programmed tasks operated by on-orbit astronauts to exercise OGS valves and motors. The software is used to manipulate the pump, separator, and valves to alleviate the concerns of hardware problems due to long-term inactivity and to allow for operational verification of microgravity-sensitive components early enough so that, if problems are found, they can be addressed before the hardware is required for operation on-orbit. The decision was made to use existing on-orbit IBM ThinkPad A31p laptops and MIL-STD-1553B interface cards as the hardware configuration. The software at the time of this reporting was developed and tested for use under the Windows 2000 Professional operating system to ensure compatibility with the existing on-orbit computer systems.

  8. Development of a Methodology to Conduct Usability Evaluation for Hand Tools that May Reduce the Amount of Small Parts that are Dropped During Installation while Processing Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Miller, Darcy

    2000-01-01

    Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "The debris that is left in or around flight hardware, where it could cause damage to that flight hardware," (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is not a single solution to help reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible, in the form of specialty hand tools, which secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools, while performing tasks with risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task that is representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers / managers and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the difference in performance of the three participant groups.

  9. Superconducting analog-to-digital converter with a triple-junction reversible flip-flop bidirectional counter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, G.S.

    1993-07-13

    A high-performance superconducting analog-to-digital converter is described, comprising: a bidirectional binary counter having n stages of triple-junction reversible flip-flops connected together in a cascade arrangement from the least significant bit (LSB) to the most significant bit (MSB), where n is the number of bits of the digital output, each triple-junction reversible flip-flop including first, second and third shunted Josephson tunnel junctions and a superconducting inductor connected in a bridge circuit, the Josephson junctions and the inductor forming upper and lower portions of the flip-flop, each reversible flip-flop being a bistable logic circuit in which the direction of the circulating current determines the state of the circuit; and means for applying an analog input current to the bidirectional counter; wherein the bidirectional counter algebraically counts incremental changes in the analog input current, increasing the binary count for positive incremental changes in the analog current and decreasing the binary count for negative incremental changes in the current, and wherein the counter does not require a gate bias, thus minimizing power dissipation.
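The up/down counting principle can be mimicked in software (a conceptual model only; the actual device counts quantized increments of current with Josephson-junction flip-flops and needs no gate bias): the counter increments for each positive quantum of change in the input and decrements for each negative quantum, so the running count tracks the analog signal.

```python
# Conceptual software analogue of the bidirectional counter: count up for
# positive incremental changes in the input, down for negative ones.
def digitize(samples, quantum=1.0):
    """Return the running counter value after each analog sample."""
    count, prev, out = 0, 0.0, []
    for s in samples:
        count += round((s - prev) / quantum)  # algebraic count of increments
        prev = s
        out.append(count)
    return out
```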

  10. NASA Ames Research Center R and D Services Directorate Biomedical Systems Development

    NASA Technical Reports Server (NTRS)

    Pollitt, J.; Flynn, K.

    1999-01-01

    The Ames Research Center R&D Services Directorate teams with NASA, other government agencies and/or industry investigators for the development, design, fabrication, manufacturing and qualification testing of space-flight and ground-based experiment hardware for biomedical and general aerospace applications. In recent years, biomedical research hardware and software have been developed to support space-flight and ground-based experiment needs, including the E 132 Biotelemetry system for the Research Animal Holding Facility (RAHF), E 100 Neurolab neuro-vestibular investigation systems, the Autogenic Feedback Systems, and the Standard Interface Glove Box (SIGB) experiment workstation module. Centrifuges, motion simulators, habitat design, environmental control systems, and other unique experiment modules and fixtures have also been developed. A discussion of engineered systems and capabilities will be provided to promote understanding of possibilities for future system designs in biomedical applications. In addition, an overview of existing engineered products will be shown. Examples of hardware and literature that demonstrate the organization's capabilities will be displayed. The Ames Research Center R&D Services Directorate is available to support the development of new hardware and software systems or adaptation of existing systems to meet the needs of academic, commercial/industrial, and government research requirements. The Ames R&D Services Directorate can provide specialized support for: system concept definition and feasibility; mathematical modeling and simulation of system performance; prototype hardware development; hardware and software design; data acquisition systems; graphical user interface development; motion control design; hardware fabrication and high-fidelity machining; composite materials development and application design; electronic/electrical system design and fabrication; and system performance verification testing and qualification.

  11. An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics

    PubMed Central

    Gupta, Priti; Markan, C. M.

    2014-01-01

    The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real world applications. In this paper we use a novel time-staggered Winner Take All circuit, which exploits the adaptation dynamics of floating gate transistors, to model an adaptive cortical cell that demonstrates Orientation Selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling different oriented bars, becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, and response to spatial frequency and periodic patterns, reveal close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively, exhibiting cluster formation and making way for adaptively building orientation-selective maps in silicon. PMID:24765062
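A simplified software analogue of the competitive learning the cell performs (assumed abstraction, not the paper's floating-gate circuit) is a winner-take-all update: the unit whose weight vector best matches the input wins and moves its weights toward that input, gradually becoming selective to the patterns it sees most often.

```python
# Winner-take-all competitive learning step (software abstraction of the
# time-staggered WTA circuit; names and learning rule are assumptions).
def wta_step(weights, pattern, rate=0.5):
    """weights: list of weight vectors (mutated in place);
    pattern: input vector. Returns the index of the winning unit."""
    scores = [sum(w * x for w, x in zip(wv, pattern)) for wv in weights]
    winner = scores.index(max(scores))
    # only the winner adapts, moving toward the presented pattern
    weights[winner] = [w + rate * (x - w)
                       for w, x in zip(weights[winner], pattern)]
    return winner
```

Repeated presentations of similar oriented patterns pull one unit's weights toward that orientation, mirroring the orientation-selective tuning described above.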

  12. On the use of inexact, pruned hardware in atmospheric modelling

    PubMed Central

    Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.

    2014-01-01

    Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
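A minimal software emulation of this idea (an assumed simplification: modelling inexact arithmetic as significand truncation, which is cruder than the probabilistic pruning studied in the paper) can be applied to the Lorenz '96 tendency:

```python
import math

# Crude emulation of inexact hardware: truncate the significand of every
# arithmetic result to a reduced number of bits (assumed stand-in for
# probabilistic pruning of floating point units).
def truncate(x, bits=10):
    """Keep only `bits` significand bits of x."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.trunc(m * scale) / scale * 2.0 ** e

def lorenz96_tendency(x, forcing=8.0, bits=10):
    """One inexact evaluation of the Lorenz '96 tendency dX_i/dt with
    periodic boundary conditions (negative indices wrap in Python)."""
    n = len(x)
    return [truncate((x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing,
                     bits)
            for i in range(n)]
```

Integrating the model with and without the truncation, and comparing forecast errors, is the kind of experiment the software-emulation approach above enables.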

  13. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
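The combination of reliability and performance can be sketched as a simple expectation (an assumed textbook-style formulation, not necessarily the exact AIPS definition): weight each Markov state's performance level by the probability of being in that state.

```python
# Sketch of a performability measure (assumed formulation): the expected
# performance over the states of a Markov reliability model.
def performability(state_probs, state_perf):
    """state_probs: probability of each reliability state (sums to 1);
    state_perf: performance level delivered in each state."""
    assert abs(sum(state_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * r for p, r in zip(state_probs, state_perf))
```

Higher moments of the same distribution (e.g., the variance) give the additional measures of merit mentioned above.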

  14. Direct multiplexed measurement of gene expression with color-coded probe pairs.

    PubMed

    Geiss, Gary K; Bumgarner, Roger E; Birditt, Brian; Dahl, Timothy; Dowidar, Naeem; Dunaway, Dwayne L; Fell, H Perry; Ferree, Sean; George, Renee D; Grogan, Tammy; James, Jeffrey J; Maysuria, Malini; Mitton, Jeffrey D; Oliveri, Paola; Osborn, Jennifer L; Peng, Tao; Ratcliffe, Amber L; Webster, Philippa J; Davidson, Eric H; Hood, Leroy; Dimitrov, Krassen

    2008-03-01

    We describe a technology, the NanoString nCounter gene expression system, which captures and counts individual mRNA transcripts. Advantages over existing platforms include direct measurement of mRNA expression levels without enzymatic reactions or bias, sensitivity coupled with high multiplex capability, and digital readout. Experiments performed on 509 human genes yielded a replicate correlation coefficient of 0.999, a detection limit between 0.1 fM and 0.5 fM, and a linear dynamic range of over 500-fold. Comparison of the NanoString nCounter gene expression system with microarrays and TaqMan PCR demonstrated that the nCounter system is more sensitive than microarrays and similar in sensitivity to real-time PCR. Finally, a comparison of transcript levels for 21 genes across seven samples measured by the nCounter system and SYBR Green real-time PCR demonstrated similar patterns of gene expression at all transcript levels.

  15. The use of carbon black-TiO2 composite prepared using solid state method as counter electrode and E. conferta as sensitizer for dye-sensitized solar cell (DSSC) applications

    NASA Astrophysics Data System (ADS)

    Jaafar, Hidayani; Ahmad, Zainal Arifin; Ain, Mohd Fadzil

    2018-05-01

    In this paper, counter electrodes based on carbon black (CB)-TiO2 composite are proposed as a cost-effective alternative to the conventional Pt counter electrodes used in dye-sensitized solar cell (DSSC) applications. CB-TiO2 composite counter electrodes with different weight percentages of CB were prepared using the solid state method and coated onto fluorine-doped tin oxide (FTO) glass using the doctor blade method, while Eleiodoxa conferta (E. conferta) and Nb-doped TiO2 were used as sensitizer and photoanode, respectively, with an electrolyte containing the I-/I3- redox couple. The experimental results revealed that the CB-TiO2 composite influenced the photovoltaic performance by enhancing the electrocatalytic activity. As the amount of CB increased, the catalytic activity improved due to the increase in surface area, which then led to low charge-transfer resistance (RCT) at the electrolyte/CB electrode interface. Due to the use of the modified photoanode together with natural dye sensitizers, the counter electrode based on 15 wt% CB-TiO2 composite was able to produce the highest energy conversion efficiency (2.5%), making it a viable alternative counter electrode.

  16. Fundamental Research Applied To Enable Hardware Performance in Microgravity

    NASA Technical Reports Server (NTRS)

    Sheredy, William A.

    2005-01-01

    NASA sponsors microgravity research to generate knowledge in physical sciences. In some cases, that knowledge must be applied to enable future research. This article describes one such example. The Dust and Aerosol measurement Feasibility Test (DAFT) is a risk-mitigation experiment developed at the NASA Glenn Research Center by NASA and ZIN Technologies, Inc., in support of the Smoke Aerosol Measurement Experiment (SAME). SAME is an investigation that is being designed for operation in the Microgravity Science Glovebox aboard the International Space Station (ISS). The purpose of DAFT is to evaluate the performance of P-Trak (TSI Incorporated, Shoreview, MN)--a commercially available condensation nuclei counter and a key SAME diagnostic--in long-duration microgravity because of concerns about its ability to operate properly in that environment. If its microgravity performance is proven, this device will advance the state of the art in particle measurement capabilities for space vehicles and facilities, such as aboard the ISS. The P-Trak, a hand-held instrument, can count individual particles as small as 20 nm in diameter in an aerosol stream. Particles are drawn into the device by a built-in suction pump. Upon entering the instrument, these particles pass through a saturator tube where they mix with an alcohol vapor (see the following figure). This mixture then flows through a cooled condenser tube where some of the alcohol condenses onto the sample particles, and the droplets grow in a controlled fashion until they are large enough to be counted. These larger droplets pass through an internal nozzle and past a focused laser beam, producing flashes of light that are sensed by a photodetector and then counted to determine particle number concentration. The operation of the instrument depends on the proper internal flow and recycling of isopropyl alcohol in both the vapor and liquid phases.

  17. Assessment of the Vapor Phase Catalytic Ammonia Removal (VPCAR) Technology at the MSFC ECLS Test Facility

    NASA Technical Reports Server (NTRS)

    Tomes, Kristin; Long, David; Carter, Layne; Flynn, Michael

    2007-01-01

    The Vapor Phase Catalytic Ammonia Removal (VPCAR) technology has been previously discussed as a viable option for the Exploration Water Recovery System. This technology integrates a phase change process with catalytic oxidation in the vapor phase to produce potable water from exploration mission wastewaters. A developmental prototype VPCAR was designed, built and tested under funding provided by a National Research Announcement (NRA) project. The core technology, a Wiped Film Rotating Device (WFRD), was provided by Water Reuse Technologies under the NRA, whereas Hamilton Sundstrand Space Systems International performed the hardware integration and acceptance test of the system. Personnel at the Ames Research Center performed initial systems tests of the VPCAR using ersatz solutions. To assess the viability of this hardware for Exploration Life Support (ELS) applications, the hardware has been modified and tested at the MSFC ECLS Test Facility. This paper summarizes the hardware modifications and test results and provides an assessment of this technology for the ELS application.

  18. Hardware Acceleration of Adaptive Neural Algorithms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  19. Operations of cleanrooms during a forest fire including protocols and monitoring results

    NASA Astrophysics Data System (ADS)

    Matheson, Bruce A.; Egges, Joanne; Pirkey, Michael S.; Lobmeyer, Lynette D.

    2012-10-01

    Contamination-sensitive space flight hardware is typically built in cleanroom facilities in order to protect the hardware from particle contamination. Forest wildfires near the facilities greatly increase the number of particles and amount of vapors in the ambient outside air. Reasonable questions arise as to whether typical cleanroom facilities can adequately protect the hardware from these adverse environmental conditions. On Monday, September 6, 2010 (Labor Day holiday), a large wildfire ignited near the Boulder, Colorado campus of Ball Aerospace. The fire was approximately 6 miles from the Boulder city limits. Smoke levels from the fire stayed very high in Boulder for the majority of the week after the fire began. Cleanroom operations on contamination-sensitive hardware were halted temporarily until particulate and non-volatile residue (NVR) sampling could be performed. Immediate monitoring showed little, if any, effect on the cleanroom facilities, so programs were allowed to resume work while monitoring continued for several days, and beyond in some cases. Little, if any, effect was ever noticed in the monitoring performed.

  20. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance and potentially accuracy and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided the inexactness is restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
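    The low-precision arithmetic described above can be studied in software by rounding each value to a reduced number of significand bits. A minimal sketch of such an emulator (an illustration of the general technique, not the authors' actual implementation):

```python
import math

def round_to_precision(x: float, mantissa_bits: int) -> float:
    """Round x to a float with only `mantissa_bits` significand bits,
    emulating low-precision hardware in software."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)
```

    Applying such rounding only to the small-scale terms of a model, as the paper suggests, confines the induced error to scales where it is tolerable.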

  1. Extravehicular activity compatibility evaluation of developmental hardware for assembly and repair of precision reflectors

    NASA Technical Reports Server (NTRS)

    Heard, Walter L., Jr.; Lake, Mark S.; Bush, Harold G.; Jensen, J. Kermit; Phelps, James E.; Wallsom, Richard E.

    1992-01-01

    This report presents results of tests performed in neutral buoyancy by two pressure-suited test subjects to simulate Extravehicular Activity (EVA) tasks associated with the on-orbit construction and repair of a precision reflector spacecraft. Two complete neutral buoyancy assemblies of the test article (tetrahedral truss with three attached reflector panels) were performed. Truss joint hardware, two different panel attachment hardware concepts, and a panel replacement tool were evaluated. The test subjects found the operation and size of the truss joint hardware to be acceptable. Both panel attachment concepts were found to be EVA compatible, although one concept was judged by the test subjects to be considerably easier to operate. The average time to install a panel from a position within arm's reach of the test subjects was 1 min 14 sec. The panel replacement tool was used successfully to demonstrate the removal and replacement of a damaged reflector panel in 10 min 25 sec.

  2. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software--hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software--hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
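    The core idea, abstracting out the time-intensive elements and replacing them by the passage of time, can be illustrated with a toy scheduling sketch (hypothetical stage durations and worker counts, not TADSim itself):

```python
import heapq

def simulate(stages, workers=2):
    """Toy application simulator: each stage is (name, duration).  Instead of
    running the real computation, only a virtual clock advances; stages are
    dispatched to `workers` parallel units as each becomes free."""
    free_at = [0.0] * workers           # time at which each worker is next idle
    heapq.heapify(free_at)
    finish = 0.0
    for name, duration in stages:
        start = heapq.heappop(free_at)  # earliest-available worker
        end = start + duration
        heapq.heappush(free_at, end)
        finish = max(finish, end)
    return finish                       # predicted makespan
```

    Sweeping `workers` or the stage durations then explores design points far faster than running the real code, which is the point of an application simulator.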

  3. Citropin 1.1 Trifluoroacetate to Chloride Counter-Ion Exchange in HCl-Saturated Organic Solutions: An Alternative Approach.

    PubMed

    Sikora, Karol; Neubauer, Damian; Jaśkiewicz, Maciej; Kamysz, Wojciech

    2018-01-01

    In view of the increasing interest in peptides in various market sectors, a stronger emphasis on topics related to their production has been seen. Fmoc-based solid phase peptide synthesis, although fast and efficient, provides final products with significant amounts of trifluoroacetate ions in the form of either a counter-ion or an unbound impurity. Because of their proven toxicity towards cells and inhibition of peptide activity, exchange for a more biocompatible counter-ion is purposeful. Additionally, as most of the currently used counter-ion exchange techniques are time-consuming and carry a risk of reduced peptide yield, development of a new approach is a sensible solution. In this study, we examined the potential of peptide counter-ion exchange using non-aqueous organic solvents saturated with HCl. Counter-ion exchange of a model peptide, citropin 1.1 (GLFDVIKKVASVIGGL-NH2), for each solvent was conducted through incubation with subsequent evaporation under reduced pressure, dissolution in water and lyophilization. Each exchange was performed four times and compared to a reference method, lyophilization of the peptide from a 0.1 M HCl solution. The results showed superior counter-ion exchange efficiency for most of the organic solutions in relation to the reference method. Moreover, HCl-saturated acetonitrile and tert-butanol provided a satisfying exchange level after just one repetition. Thus, those two organic solvents can potentially be introduced into routine peptide counter-ion exchange.

  4. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  5. AFOSR BRI: Co-Design of Hardware/Software for Predicting MAV Aerodynamics

    DTIC Science & Technology

    2016-09-27

    While Moore's Law theoretically doubles processor performance every 24 months, much of the realizable performance remains [...] past efforts to develop such CFD codes on accelerated processors showed limited success; our hardware/software co-design approach created malleable [...]

  6. Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System (Discussion on Test Hardware and Computer Model for a Dual Brayton System)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2007-01-01

    NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.

  7. [Integrated Development of Full-automatic Fluorescence Analyzer].

    PubMed

    Zhang, Mei; Lin, Zhibo; Yuan, Peng; Yao, Zhifeng; Hu, Yueming

    2015-10-01

    In view of the fact that medical inspection equipment sold in the domestic market is mainly imported from abroad and very expensive, we developed a full-automatic fluorescence analyzer in our center, presented in this paper. The present paper introduces in detail the hardware architecture design of the FPGA/DSP motion control card + PC + STM32 embedded microprocessing unit, the software system based on C# multithreading, and the design and implementation of dual-unit communication. By simplifying the hardware structure, selecting hardware judiciously, and applying object-oriented technology in the control system software, we have improved the precision and velocity of the control system significantly. Finally, the performance test showed that the control system could meet the needs of an automated fluorescence analyzer in functionality, performance and cost.

  8. Apollo experience report: Television system

    NASA Technical Reports Server (NTRS)

    Coan, P. P.

    1973-01-01

    The progress of the Apollo television systems from the early definition of requirements through the development and inflight use of color television hardware is presented. Television systems that have been used during the Apollo Program are discussed, beginning with a description of the specifications for each system. The document describes the technical approach taken for the development of each system and discusses the prototype and engineering hardware built to test the system itself and to perform the testing to verify compatibility with the spacecraft systems. Problems that occurred during the design and development phase are described. Finally, the flight hardware, operational characteristics, and performance during several Apollo missions are described, and specific recommendations for the remaining Apollo flights and future space missions are made.

  9. Research on Modelling of Aviation Piston Engine for the Hardware-in-the-loop Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Bing; Shu, Wenjun; Bian, Wenchao

    2016-11-01

    In order to build an aero piston engine model that is real-time capable and accurate enough to reproduce the operating conditions of the real engine for hardware-in-the-loop simulation, the mean value model is studied. Firstly, the air-inlet model, the fuel model and the power-output model are established separately. Then, these sub-models are combined and verified in MATLAB/SIMULINK. The results show that the model can reflect the steady-state and dynamic performance of the aero engine; the errors between the simulation results and the bench test data are within the acceptable range. The model can be applied to verify the logic and control strategy of the controller in hardware-in-the-loop (HIL) simulation.

  10. Thruster-Specific Force Estimation and Trending of Cassini Hydrazine Thrusters at Saturn

    NASA Technical Reports Server (NTRS)

    Stupik, Joan; Burk, Thomas A.

    2016-01-01

    The Cassini spacecraft has been in orbit around Saturn since 2004 and has since been approved for both a first and second extended mission. As hardware reaches and exceeds its documented life expectancy, it becomes vital to closely monitor hardware performance. The performance of the 1-N hydrazine attitude control thrusters is especially important to study because the spacecraft is currently operating on the back-up thruster branch. Early identification of hardware degradation allows more time to develop mitigation strategies. There is no direct measure of an individual thruster's thrust magnitude, but these values can be estimated by post-processing spacecraft telemetry. This paper develops an algorithm to calculate the individual thrust magnitudes using Euler's equation. The algorithm correctly shows the known degradation in the first thruster branch, validating the approach. Results for the current thruster branch show nominal performance as of August 2015.
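    Once the thruster geometry is known, solving Euler's equation for per-thruster magnitudes reduces to a linear least-squares problem. A hedged sketch of that reduction (the inertia tensor, rates, and torque arms below are placeholders, not Cassini values or the paper's exact algorithm):

```python
import numpy as np

def estimate_thrusts(I, omega, omega_dot, arms):
    """Least-squares estimate of individual thruster magnitudes from rigid-body
    telemetry via Euler's equation  I*w_dot + w x (I*w) = sum_i f_i * arms[i],
    where arms[i] = r_i x d_i is the unit-thrust torque of thruster i
    (hypothetical geometry supplied by the caller).  Returns magnitudes f."""
    tau = I @ omega_dot + np.cross(omega, I @ omega)  # required total torque
    A = np.column_stack(arms)                         # 3 x n torque map
    f, *_ = np.linalg.lstsq(A, tau, rcond=None)
    return f
```

    In practice the angular acceleration is itself derived from rate telemetry, so filtering precedes the solve; the sketch shows only the algebra.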

  11. Status of the Boeing Dish Engine Critical Component Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brau, H.W.; Diver, R.B.; Nelving, H.

    1999-01-08

    The Boeing Company's Dish Engine Critical Component (DECC) project started in April of 1998. It is a continuation of a solar energy program started by McDonnell Douglas (now Boeing) and United Stirling of Sweden in the mid 1980s. The overall objectives, schedule, and status of this project are presented in this paper. The hardware test configuration, hardware background, operation, and test plans are also discussed. A summary is given of the test data, which includes the daily power performance, generated energy, working-gas usage, mirror reflectivity, solar insolation, on-sun track time, generating time, and system availability. The system performance based upon the present test data is compared to test data from the 1984/88 McDonnell Douglas/United Stirling AB/Southern California Edison test program. The test data shows that the present power, energy, and mirror performance is comparable to when the hardware was first manufactured 14 years ago.

  12. Status of the Boeing Dish Engine Critical Component project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, K.W.; Nelving, H.; Braun, H.W.

    1999-07-01

    The Boeing Company's Dish Engine Critical Component (DECC) project started in April of 1998. It is a continuation of a solar energy program started by McDonnell Douglas (now Boeing) and United Stirling of Sweden in the mid 1980s. The overall objectives, schedule, and status of this project are presented in this paper. The hardware test configuration, hardware background, operation, and test plans are also discussed. A summary is given of the test data, which includes the daily power performance, generated energy, working-gas usage, mirror reflectivity, solar insolation, on-sun track time, generating time, and system availability. The system performance based upon the present test data is compared to test data from the 1984/88 McDonnell Douglas/United Stirling AB/Southern California Edison test program. The test data shows that the present power, energy, and mirror performance is comparable to when the hardware was first manufactured 14 years ago.

  13. Development of low cost custom hybrid microcircuit technology

    NASA Technical Reports Server (NTRS)

    Perkins, K. L.; Licari, J. J.

    1981-01-01

    Selected potentially low cost, alternate packaging and interconnection techniques were developed and implemented in the manufacture of specific NASA/MSFC hardware, and the actual cost savings achieved by their use were determined. The hardware chosen as the test bed for this evaluation was the hybrids and modules manufactured by Rockwell International for the MSFC Flight Accelerometer Safety Cut-Off System (FASCOS). Three potentially low cost packaging and interconnection alternates were selected for evaluation. This study was performed in three phases: hardware fabrication and testing, cost comparison, and reliability evaluation.

  14. Space shuttle solid rocket booster cost-per-flight analysis technique

    NASA Technical Reports Server (NTRS)

    Forney, J. A.

    1979-01-01

    A cost-per-flight computer model is described which considers: traffic model, component attrition, hardware useful life, turnaround time for refurbishment, manufacturing rates, learning curves on the time to perform tasks, cost improvement curves on quantity hardware buys, inflation, spares philosophy, long-lead hardware funding requirements, and other logistics and scheduling constraints. Additional uses of the model include assessing the cost-per-flight impact of changing major space shuttle program parameters and searching for opportunities to make cost-effective management decisions.
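    One ingredient listed above, cost improvement curves on quantity hardware buys, is conventionally modeled with a Wright learning curve. A generic sketch of that ingredient (illustrative only, not the shuttle model's actual formulation):

```python
import math

def cumulative_cost(first_unit_cost, quantity, learning=0.9):
    """Cumulative cost of a hardware buy under a Wright learning curve:
    each doubling of cumulative quantity scales unit cost by `learning`
    (e.g. 0.9 for a 90% curve).  Unit n costs first_unit_cost * n**b
    with b = log2(learning)."""
    b = math.log2(learning)
    return sum(first_unit_cost * n ** b for n in range(1, quantity + 1))
```

    A full cost-per-flight model combines such curves with attrition, refurbishment turnaround, and inflation terms, as the abstract enumerates.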

  15. Area-delay trade-offs of texture decompressors for a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Novoa Súñer, Emilio; Ituero, Pablo; López-Vallejo, Marisa

    2011-05-01

    Graphics Processing Units have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study on the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of the algorithms for systems with limited hardware resources.

  16. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 3: Program correlation with full scale hardware tests

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.

    1980-01-01

    The results of a series of full scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data are compared over a range of speeds up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.

  17. ExaSAT: An exascale co-design tool for performance modeling

    DOE PAGES

    Unat, Didem; Chan, Cy; Zhang, Weiqun; ...

    2015-02-09

    One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework’s ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
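    An analytic model with data movement as the primary metric can be as simple as a roofline-style bound, and even that suffices for fast design-space sweeps. The sketch below uses illustrative parameters, not ExaSAT's own model:

```python
def predict_runtime(flops, bytes_moved, peak_flops, mem_bw):
    """Roofline-style estimate: runtime is bounded by whichever of compute
    throughput or memory traffic dominates."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

def best_design(kernel, designs):
    """Pick the hardware design point minimizing predicted runtime for a
    kernel characterized by (flops, bytes_moved)."""
    flops, bytes_moved = kernel
    return min(designs, key=lambda d: predict_runtime(
        flops, bytes_moved, d["peak_flops"], d["mem_bw"]))
```

    Because each evaluation is a closed-form expression, thousands of hardware design points can be ranked without cycle-accurate simulation, which is the feedback loop the abstract describes.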

  18. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer having 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by the on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty of the digitizer below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.
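    Interpolator nonlinearity is conventionally identified with a code-density (statistical) test, and the correction amounts to a lookup table mapping each raw code to the centre of its measured bin. A software sketch of building such a table (the generic method; the paper's on-chip code processor applies its own equivalent mapping in hardware):

```python
def build_correction_lut(histogram, full_scale):
    """Code-density calibration: given a histogram of raw interpolator codes
    gathered from events uniformly distributed over `full_scale`, map each
    code to the centre of its actual (unequal-width) time bin."""
    total = sum(histogram)
    lut, edge = [], 0.0
    for count in histogram:
        width = full_scale * count / total   # measured width of this code's bin
        lut.append(edge + width / 2.0)       # bin centre = corrected timestamp
        edge += width
    return lut
```

    With an ideal (linear) interpolator every bin has equal width and the table degenerates to uniform bin centres; unequal counts shift the centres to compensate for the nonlinearity.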

  19. Practical Applications of Cables and Ropes in the ISS Countermeasures System

    NASA Technical Reports Server (NTRS)

    Moore, Cherice; Svetlik, Randall; Williams, Antony

    2017-01-01

    As spaceflight durations have increased over the last four decades, the effects of weightlessness on the human body are far better understood, as are the countermeasures. A combination of aerobic and resistive exercise devices contribute to countering the losses in muscle strength, aerobic fitness, and bone strength of today's astronauts and cosmonauts that occur during their missions on the International Space Station. Creation of these systems has been a dynamically educational experience for designers and engineers. The ropes and cables in particular have experienced a wide range of challenges, providing a full set of lessons learned that have already enabled improvements in on-orbit reliability by initiating system design improvements. This paper examines the on-orbit experience of ropes and cables in several exercise devices and discusses the lessons learned from these hardware items, with the goal of informing future system design.

  20. DOE and JAEA Field Trial of the Single Chip Shift Register (SCSR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, Matthew R.

    2016-03-23

    Los Alamos National Laboratory (LANL) has recently developed a new data acquisition system for multiplicity analysis of neutron detector pulse streams. This new technology, the Single Chip Shift Register (SCSR), places the entire data acquisition system along with the communications hardware onto a single chip. This greatly simplifies the instrument and reduces its size. The SCSR is designed to be mounted in the neutron detector head alongside the instrument amplifiers. The user's computer connects via USB directly to the neutron detector, eliminating the external data acquisition electronics entirely. JAEA, through the INSEP program, asked LANL to demonstrate the functionality of the SCSR in Tokai using the JAEA Epithermal Neutron Multiplicity Counter (ENMC). In late September of 2015, LANL traveled to Tokai to install, demonstrate, and uninstall the SCSR in the ENMC. This report documents the results of that field trial.

  1. [Status of traditional Chinese medicine materials seed and seedling breeding bases].

    PubMed

    Li, Ying; Huang, Lu-Qi; Zhang, Xiao-Bo; Wang, Hui; Cheng, Meng; Zhang, Tian; Yang, Guang

    2017-11-01

    Seeds and seedlings are the material basis of traditional Chinese medicine materials production, and the construction of traditional Chinese medicine materials seed and seedling breeding bases is beneficial to the production of high-quality traditional Chinese medicine materials. The construction of such bases is one of the major topics of the Chinese materia medica resources census pilot. The targets and tasks of traditional Chinese medicine materials seed and seedling breeding bases based on the Chinese materia medica resources census pilot are expounded. Construction progress, including hardware construction, germplasm conservation and breeding, establishment of procedures and standards, and social services, is presented. Development countermeasures are proposed for the next step: perfect the standards and system, maintain and strengthen the breeding function, strengthen the cultivation of multi-level talents, explore market development models, and make joint efforts to deepen services and development. Copyright© by the Chinese Pharmaceutical Association.

  2. Reconfigurable fuzzy cell

    NASA Technical Reports Server (NTRS)

    Salazar, George A. (Inventor)

    1993-01-01

    This invention relates to a reconfigurable fuzzy cell comprising a digitally controlled programmable-gain operational amplifier, an analog-to-digital converter, an electrically erasable PROM, an 8-bit counter and comparator, and supporting logic configured to achieve, in real-time fuzzy systems, high-throughput grade-of-membership or membership-value conversion of multi-input sensor data. The invention provides a flexible multiplexing-capable configuration, implemented entirely in hardware, for effectuating S-, Z-, and PI-membership functions or combinations thereof, based upon fuzzy logic level-set theory. A membership value table storing 'knowledge data' for each of the S-, Z-, and PI-functions is contained within a nonvolatile memory storing bits of membership and parametric information in a plurality of address spaces. Based upon parametric and control signals, analog sensor data is digitized and converted into grade-of-membership data. In situ learn and recognition modes of operation are also provided.
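    The S-, Z-, and PI-membership functions named above have standard forms in fuzzy set theory (Zadeh's definitions); a software rendering of those standard functions, not of the patent's stored membership table:

```python
def s_mf(x, a, c):
    """Zadeh S-function: 0 below a, 1 above c, quadratic ramp between,
    with crossover value 0.5 at the midpoint b = (a + c) / 2."""
    b = (a + c) / 2.0
    if x <= a:
        return 0.0
    if x >= c:
        return 1.0
    if x <= b:
        return 2.0 * ((x - a) / (c - a)) ** 2
    return 1.0 - 2.0 * ((x - c) / (c - a)) ** 2

def z_mf(x, a, c):
    """Z-function: complement (mirror image) of the S-function."""
    return 1.0 - s_mf(x, a, c)

def pi_mf(x, b, c):
    """PI-function: S-function rising to 1 at centre c, Z-function falling
    beyond it; b is the half-width of the bell."""
    if x <= c:
        return s_mf(x, c - b, c)
    return z_mf(x, c, c + b)
```

    The hardware cell evaluates the same shapes from the table in its nonvolatile memory rather than computing them arithmetically.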

  3. Unprecedented syntonization and synchronization accuracy via simultaneous viewing with GPS receivers: Construction characteristics of an NBS/GPS receiver

    NASA Technical Reports Server (NTRS)

    Davis, D. D.; Weiss, M.; Clements, A.; Allan, D. W.

    1982-01-01

    The National Bureau of Standards/Global Positioning System (NBS/GPS) receiver is discussed. It is designed around the concept of obtaining high-accuracy, low-cost time and frequency comparisons between remote frequency standards and clocks, with the intent of aiding international time and frequency coordination. Preliminary tests of this comparison technique between Boulder, CO and Washington, D.C. indicate the ability to do accurate time transfer to better than 10 ns, and frequency measurements to better than 1 part in 10 to the 14th power. The hardware and software of the receiver are detailed. The receiver is fully automatic with a built-in 0.1 ns resolution time interval counter. A microprocessor does the data processing. Satellite signal stabilities are routinely at the 5 ns level for 15 s averages, and the internal receiver stabilities are at the 1 ns level.
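    The simultaneous-viewing (common-view) comparison behind these results can be stated in a few lines: both sites measure their local clock against the same satellite at the same scheduled epochs, and differencing the readings cancels the satellite clock error common to both. A simplified numeric sketch (idealized; the real receivers must also model path delays):

```python
def common_view_offset(readings_a, readings_b):
    """Common-view time transfer: each element of readings_a/readings_b is
    (local clock - GPS time) in seconds, measured by sites A and B against
    the same satellite at the same epoch.  Differencing paired readings
    cancels the satellite clock error; averaging suppresses receiver noise.
    Returns the estimated offset of clock A relative to clock B."""
    diffs = [a - b for a, b in zip(readings_a, readings_b)]
    return sum(diffs) / len(diffs)
```

    Because the satellite clock term drops out entirely, the comparison accuracy is set by the receivers and paths, not by the satellite, which is what enables the nanosecond-level transfer reported above.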

  4. The Chimera II Real-Time Operating System for advanced sensor-based control applications

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1992-01-01

    Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems, which results in a reduction of development time and cost.

  5. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimizing simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization, and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
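The abstract does not give the actual mixed integer programming formulation, so the stand-in below only illustrates the shape of the identify-then-optimize step: enumerate a small discrete decision space (hypothetical texture sizes, canvas resolutions, and cost/quality models) and pick the best settings within a measured capability budget. All names and weights are assumptions:

```python
from itertools import product

# Candidate discrete settings (hypothetical values).
TEXTURE_SIZES = [256, 512, 1024]                       # pixels
CANVAS_RES    = [(640, 480), (1280, 720), (1920, 1080)]

def cost(tex, res):
    """Stand-in load model: work grows with texture area and pixel count."""
    return tex * tex / 1e6 + res[0] * res[1] / 1e6

def quality(tex, res):
    """Stand-in visual-quality score: larger settings score higher."""
    return tex / 1024 + (res[0] * res[1]) / (1920 * 1080)

def optimize(budget):
    """Return the (texture, resolution) pair maximizing quality within the
    measured budget, via exhaustive search over the small integer space."""
    best = None
    for tex, res in product(TEXTURE_SIZES, CANVAS_RES):
        if cost(tex, res) <= budget and (best is None or
                                         quality(tex, res) > quality(*best)):
            best = (tex, res)
    return best
```

A real MIP solver becomes worthwhile once the decision space is too large to enumerate; for a handful of discrete settings, exhaustive search is the simplest correct approach.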

  6. On-Orbit Constraints Test - Performing Pre-Flight Tests with Flight Hardware, Astronauts and Ground Support Equipment to Assure On-Orbit Success

    NASA Technical Reports Server (NTRS)

    Haddad, Michael E.

    2008-01-01

    On-Orbit Constraints Testing (OOCT) refers to mating flight hardware together on the ground before it is mated on-orbit. The concept seems simple, but it can be difficult to perform such operations on the ground when the flight hardware is designed to be mated on-orbit in the zero-g and/or vacuum environment of space. In addition, some of the items are manufactured years apart, raising the question of how mating tasks can be performed when one piece is on-orbit before its mating piece has even been built. Both the Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) OOCTs performed at Kennedy Space Center will be presented in this paper. Details include how OOCTs should mimic on-orbit operational scenarios, a series of photographs taken during OOCTs performed on International Space Station (ISS) flight elements, and lessons learned as a result of the OOCTs. The paper concludes with possible applications to the Moon and Mars surface operations planned for the Constellation Program.

  7. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimizing simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization, and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  8. A compact gas-filled avalanche counter for DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C. Y.; Chyzh, A.; Kwan, E.

    2012-08-04

    A compact gas-filled avalanche counter for the detection of fission fragments was developed for a highly segmented 4π γ-ray calorimeter, namely the Detector for Advanced Neutron Capture Experiments located at the Lujan Center of the Los Alamos Neutron Science Center. It has been used successfully for experiments with 235U, 238Pu, 239Pu, and 241Pu isotopes to provide a unique signature to differentiate the fission from the competing neutron-capture reaction channel. We also used it to study the spontaneous fission in 252Cf. The design and performance of this avalanche counter for targets with extreme α-decay rates up to ~2.4×10^8/s are described.

  9. High-Energy-Density LCA-Coupled Structural Energetic Materials for Counter WMD Applications

    DTIC Science & Technology

    2014-04-01

    reactive (thermite) fillers as high-energy-density structural energetic materials. The specific objectives include performing fundamental studies to... (a) investigate mechanics of dynamic densification and reaction initiation in Ta+Fe2O3 and Ta+Bi2O3 thermite powder mixtures and to (b) design and... initiation in the thermite filler and allow controlled fragmentation. Linear Cellular A; counter WMDs; shock-compression and impact-initiated reactions

  10. Counter-rotating type tidal stream power unit boarded on pillar (performances and flow conditions of tandem propellers)

    NASA Astrophysics Data System (ADS)

    Usui, Yuta; Kanemoto, Toshiaki; Hiraki, Koju

    2013-12-01

    The authors have invented a unique counter-rotating type tidal stream power unit composed of tandem propellers and a double rotational armature type peculiar generator without the traditional stator. The front and the rear propellers counter-drive the inner and the outer armatures of the generator, respectively. The unit has the fruitful advantages that not only is the output sufficiently high without supplementary equipment such as a gearbox, but also the rotational moment hardly acts on the pillar, because the rotational torques of the propellers/armatures are counter-balanced in the unit. This paper discusses experimentally the performances of the power unit and the effects of the propeller rotation on the sea surface. The axial force acting on the pillar naturally increases with increases in both the stream velocity and the drag of the tandem propellers. Besides, a force perpendicular to the stream also acts on the pillar; it is induced by the Karman vortex street, and its dominant frequencies appear owing to the front and the rear propeller rotations. A propeller rotating close to the sea surface produces an abnormal wave, whose amplitude increases as the stream velocity becomes faster and/or the drag becomes stronger.

  11. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements; this was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) group. The testing environment also includes various test equipment used to reproduce typical signals that a radiometer may see, with and without RFI. The environment permits quick evaluation of RFI mitigation algorithms and shows that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis detector is implemented on SMAP at 24 MHz bandwidth; the complex signal kurtosis algorithm was implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis detectors are compared. Performance evaluations and comparisons, in both simulation and experimental hardware implementations, were done with the use of receiver operating characteristic (ROC) curves.
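The principle behind a complex signal kurtosis detector can be sketched briefly: for circular complex Gaussian noise, the normalized fourth moment E|x|⁴/(E|x|²)² tends to 2, and RFI such as a strong continuous-wave tone pulls it away from that value. The window length and threshold below are assumptions, not the SMAP/ROACH parameters:

```python
import numpy as np

def complex_kurtosis(x):
    """Normalized fourth moment E|x|^4 / (E|x|^2)^2 of a zero-mean complex
    sequence; tends to 2 for circular complex Gaussian noise."""
    p2 = np.mean(np.abs(x) ** 2)
    p4 = np.mean(np.abs(x) ** 4)
    return p4 / p2 ** 2

def rfi_flag(x, threshold=0.1):
    """Flag a window as RFI-contaminated when the kurtosis departs from the
    Gaussian value of 2 by more than `threshold` (an assumed setting)."""
    return abs(complex_kurtosis(x) - 2.0) > threshold

rng = np.random.default_rng(0)
n = 100_000
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # Gaussian only
cw_rfi = noise + 5.0 * np.exp(2j * np.pi * 0.1 * np.arange(n))  # strong CW tone
```

A constant-modulus interferer drives the kurtosis toward 1, well below the Gaussian value of 2, which is why the CW-contaminated window is flagged while pure noise is not.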

  12. The relationship between cell phone use and management of driver fatigue: It's complicated.

    PubMed

    Saxby, Dyani Juanita; Matthews, Gerald; Neubauer, Catherine

    2017-06-01

    Voice communication may enhance performance during monotonous, potentially fatiguing driving conditions (Atchley & Chan, 2011); however, it is unclear whether safety benefits of conversation are outweighed by costs. The present study tested whether personalized conversations intended to simulate hands-free cell phone conversation may counter objective and subjective fatigue effects elicited by vehicle automation. A passive fatigue state (Desmond & Hancock, 2001), characterized by disengagement from the task, was induced using full vehicle automation prior to drivers resuming full control over the driving simulator. A conversation was initiated shortly after reversion to manual control. During the conversation an emergency event occurred. The fatigue manipulation produced greater task disengagement and slower response to the emergency event, relative to a control condition. Conversation did not mitigate passive fatigue effects; rather, it added worry about matters unrelated to the driving task. Conversation moderately improved vehicle control, as measured by SDLP, but it failed to counter fatigue-induced slowing of braking in response to an emergency event. Finally, conversation appeared to have a hidden danger in that it reduced drivers' insights into performance impairments when in a state of passive fatigue. Automation induced passive fatigue, indicated by loss of task engagement; yet, simulated cell phone conversation did not counter the subjective automation-induced fatigue. Conversation also failed to counter objective loss of performance (slower braking speed) resulting from automation. Cell phone conversation in passive fatigue states may impair drivers' awareness of their performance deficits. Practical applications: Results suggest that conversation, even using a hands-free device, may not be a safe way to reduce fatigue and increase alertness during transitions from automated to manual vehicle control. 

  13. In situ preparation of NiS2/CoS2 composite electrocatalytic materials on conductive glass substrates with electronic modulation for high-performance counter electrodes of dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Li, Faxin; Wang, Jiali; Zheng, Li; Zhao, Yaqiang; Huang, Niu; Sun, Panpan; Fang, Liang; Wang, Lei; Sun, Xiaohua

    2018-04-01

    Electrocatalytic composite materials consisting of honeycomb-structured NiS2 nanosheets loaded with metallic CoS2 nanoparticles are prepared in situ on F-doped SnO2 (FTO) conductive glass substrates, used as counter electrodes (CEs) of DSSCs, through chemical bath deposition (CBD) and a sulfidizing process. The single-crystalline NiS2 honeycomb structure array lays a foundation for the large surface area of the NiS2/CoS2 composite CEs. The formed NiS2/CoS2 nanointerface modulates the electronic structure of the composite CEs through synergetic interactions between the CoS2 nanoparticles and NiS2 nanosheets, which dramatically improves the electrocatalytic activity of the NiS2/CoS2 composite CEs. The metallic CoS2 nanoparticles covering the NiS2 nanosheet electrodes adjust the electrodes' structure and thereby reduce the series resistance (Rs) and the Nernst diffusion resistance (Zw) of the counter electrodes. These improvements greatly enhance the electrocatalytic performance of the CEs and the short-circuit current density (Jsc) and fill factor (FF) of the DSSCs. Impressively, the DSSC based on the NiS2/CoS2-0.1 CE shows the best photovoltaic performance, with a photovoltaic conversion efficiency of 8.22%, which is 24.36% higher than that (6.61%) of the DSSC with a Pt CE. The NiS2/CoS2-0.1 CE also displays good stability in the iodine-based electrolyte. This work indicates that rational construction of composite electrocatalytic materials paves an avenue for high-performance counter electrodes of DSSCs.
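The quoted relative improvement can be checked directly from the two efficiencies given in the abstract:

```python
pt_eff   = 6.61   # DSSC efficiency with the Pt counter electrode, %
nis2_eff = 8.22   # DSSC efficiency with the NiS2/CoS2-0.1 counter electrode, %

relative_gain = (nis2_eff - pt_eff) / pt_eff * 100  # relative improvement, %
print(round(relative_gain, 2))  # → 24.36, matching the abstract
```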

  14. Bosch CO2 Reduction System Development

    NASA Technical Reports Server (NTRS)

    Holmes, R. F.; King, C. D.; Keller, E. E.

    1976-01-01

    Development of a Bosch process CO2 reduction unit was continued, and, by means of hardware modifications, the performance was substantially improved. Benefits of the hardware upgrading were demonstrated by extensive unit operation and data acquisition in the laboratory. This work was accomplished on a cold seal configuration of the Bosch unit.

  15. Design Report for Low Power Acoustic Detector

    DTIC Science & Technology

    2013-08-01

    high speed integrated circuit (VHSIC) hardware description language (VHDL) implementation of both the HED and DCD detectors. Figures 4 and 5 show the... the hardware design, target detection algorithm design in both MATLAB and VHDL, and typical performance results. SUBJECT TERMS: Acoustic, low...

  16. Hopkins works with the MDCA hardware replacement, and CIR maintenance

    NASA Image and Video Library

    2013-12-31

    ISS038-E-024145 (30 Dec. 2013) --- NASA astronaut Mike Hopkins, Expedition 38 flight engineer, performs in-flight maintenance on combustion research hardware in the Destiny laboratory of the International Space Station. Hopkins replaced a Multi-user Droplet Combustion Apparatus (MDCA) fuel reservoir inside the Combustion Integrated Rack (CIR).

  17. Radio astronomy Explorer-B postlaunch attitude operations analysis

    NASA Technical Reports Server (NTRS)

    Werking, R. D.; Berg, R.; Brokke, K.; Hattox, T.; Lerner, G.; Stewart, D.; Williams, R.

    1974-01-01

    The attitude support activities of the Radio Astronomy Explorer-B are reported. The performance of the spacecraft hardware and software is discussed, along with details of the mission events from launch through main boom deployment. Reproductions of displays used during support activities are presented. The interactive graphics proved valuable to the support function by providing the quality control necessary to ensure mission success in an environment where flight-simulated ground testing of spacecraft hardware cannot be performed.

  18. [Violence against women in the perspective of community health agents].

    PubMed

    Hesler, Lilian Zielke; da Costa, Marta Cocco; Resta, Darielli Gindri; Colomé, Isabel Cristina dos Santos

    2013-03-01

    The current study has the objective of learning and understanding how Community Health Agents conceptualize, develop and perform strategies to counter violence against women attending the Family Health Strategies in a northeastern municipality of Rio Grande do Sul. It is an exploratory study with a descriptive, qualitative approach, carried out with 35 Community Health Agents. Semi-structured interviews were performed to collect the data, which were analyzed using the thematic model. Conceptions of violence against women center on violence as a social construction based on gender inequalities and on violence as having a multifactorial construction. Regarding care practices and interventions to counter violence, the following tools are highlighted: construction of intervention strategies within the staff; forming bonds, listening, and dialogue with the women victims of violence; and directing victims to support services. We believe that this study contributes to the visibility of this theme as a need in health care, as well as to the construction of strategies to counter it.

  19. Performance analysis of a counter-rotating tubular type micro-turbine by experiment and CFD

    NASA Astrophysics Data System (ADS)

    Lee, N. J.; Choi, J. W.; Hwang, Y. H.; Kim, Y. T.; Lee, Y. H.

    2012-11-01

    Micro hydraulic turbines have attracted growing interest because of their small and simple structure, as well as a high possibility of use in micro and small hydropower applications. The differential pressure existing in city water pipelines can be used efficiently to generate electricity in a way similar to that of energy being generated through gravitational potential energy in dams. The pressure energy in the city pipelines is often wasted by using pressure reducing valves at the inlet of water cleaning centers. Instead of using the pressure reducing valves, a micro counter-rotating hydraulic turbine can be used to make use of the pressure energy. In the present paper, a counter-rotating tubular type micro-turbine is studied, with the front runner connected to the generator stator and the rear runner connected to the generator rotor. The performance of the turbine is investigated experimentally and numerically. A commercial ANSYS CFD code was used for numerical analysis.

  20. Characterization of Volatiles Loss from Soil Samples at Lunar Environments

    NASA Technical Reports Server (NTRS)

    Kleinhenz, Julie; Smith, Jim; Roush, Ted; Colaprete, Anthony; Zacny, Kris; Paulsen, Gale; Wang, Alex; Paz, Aaron

    2017-01-01

    Resource Prospector (RP) Integrated Thermal Vacuum Test Program: a series of ground-based dirty thermal vacuum tests is being conducted to better understand subsurface sampling operations for RP, including volatiles loss during sampling operations, hardware performance, sample removal and transfer, concept of operations, and instrumentation. Five test campaigns over five years have been conducted with RP hardware, with advancing hardware designs and additional RP subsystems; volatiles sampling spans four years. Using flight-forward regolith sampling hardware, the tests empirically determine volatile retention at lunar-relevant conditions, use the data to improve theoretical predictions, determine the driving variables for retention, and bound the water loss potential to define measurement uncertainties. The main goal of this talk is to introduce our approach to characterizing volatiles loss for RP: introduce the facility and its capabilities, give an overview of the RP hardware used in integrated testing (most recent iteration), summarize the test variables used thus far, and review a sample of the results.

  1. Human-machine interface hardware: The next decade

    NASA Technical Reports Server (NTRS)

    Marcus, Elizabeth A.

    1991-01-01

    In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroad in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.

  2. ESTL tracking and data relay satellite /TDRSS/ simulation system

    NASA Technical Reports Server (NTRS)

    Kapell, M. H.

    1980-01-01

    The Tracking Data Relay Satellite System (TDRSS) provides single access forward and return communication links with the Shuttle/Orbiter via S-band and Ku-band frequency bands. The ESTL (Electronic Systems Test Laboratory) at Lyndon B. Johnson Space Center (JSC) utilizes a TDRS satellite simulator and critical TDRS ground hardware for test operations. To accomplish Orbiter/TDRSS relay communications performance testing in the ESTL, a satellite simulator was developed which met the specification requirements of the TDRSS channels utilized by the Orbiter. Actual TDRSS ground hardware unique to the Orbiter communication interfaces was procured from individual vendors, integrated in the ESTL, and interfaced via a data bus for control and status monitoring. This paper discusses the satellite simulation hardware in terms of early development and subsequent modifications. The TDRS ground hardware configuration and the complex computer interface requirements are reviewed. Also, special test hardware such as a radio frequency interference test generator is discussed.

  3. Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system

    NASA Technical Reports Server (NTRS)

    Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, capturing and retrieving free-flying payloads, and performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics and documents for defining hardware assemblies, components and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.

  4. Propulsion system-flight control integration and optimization: Flight evaluation and technology transition

    NASA Technical Reports Server (NTRS)

    Burcham, Frank W., Jr.; Gilyard, Glenn B.; Myers, Lawrence P.

    1990-01-01

    Integration of propulsion and flight control systems and their optimization offers significant performance improvements. Research programs were conducted which have developed new propulsion and flight control integration concepts, implemented designs on high-performance airplanes, demonstrated these designs in flight, and measured the performance improvements. These programs, first on the YF-12 airplane, and later on the F-15, demonstrated increased thrust, reduced fuel consumption, increased engine life, and improved airplane performance; with improvements in the 5 to 10 percent range achieved with integration and with no changes to hardware. The design, software and hardware developments, and testing requirements were shown to be practical.

  5. New tools using the hardware performance monitor to help users tune programs on the Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Rudsinski, L.; Doak, J.

    1991-09-25

    The performance of a Cray system is highly dependent on the tuning techniques used by individuals on their codes. Many of our users were not taking advantage of the tuning tools that allow them to monitor their own programs by using the Hardware Performance Monitor (HPM). We therefore modified UNICOS to collect HPM data for all processes and to report Mflop ratings based on users, programs, and time used. Our tuning efforts are now being focused on the users and programs that have the best potential for performance improvements. These modifications and some of the more striking performance improvements are described.
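The reporting step described (aggregating collected HPM counter data into Mflop ratings by user and program) amounts to a simple accumulation; a sketch with a hypothetical sample record format, not the actual UNICOS data layout:

```python
from collections import defaultdict

def mflop_report(samples):
    """Aggregate (user, program, flops, cpu_seconds) samples into Mflop/s
    ratings keyed by (user, program). The record layout is an assumption."""
    flops = defaultdict(float)
    secs = defaultdict(float)
    for user, program, nflops, cpu_s in samples:
        flops[(user, program)] += nflops
        secs[(user, program)] += cpu_s
    # Mflop/s = total floating-point operations / total CPU seconds / 1e6
    return {k: flops[k] / secs[k] / 1e6 for k in flops if secs[k] > 0}

samples = [
    ("alice", "solver", 4.0e9, 2.0),   # hypothetical per-process HPM samples
    ("alice", "solver", 2.0e9, 1.0),
    ("bob",   "fft",    9.0e8, 3.0),
]
report = mflop_report(samples)
```

A report like this makes it easy to rank users and programs by achieved Mflop/s and focus tuning effort where the rating is lowest relative to time consumed.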

  6. Coupled Loads Analysis of the Modified NASA Barge Pegasus and Space Launch System Hardware

    NASA Technical Reports Server (NTRS)

    Knight, J. Brent

    2015-01-01

    A Coupled Loads Analysis (CLA) has been performed for barge transport of Space Launch System hardware on the recently modified NASA barge Pegasus. The barge re-design was facilitated with detailed finite element analyses by the ARMY Corps of Engineers - Marine Design Center. The Finite Element Model (FEM) utilized in the design was also used in the subject CLA. The Pegasus FEM and CLA results are presented as well as a comparison of the analysis process to that of a payload being transported to space via the Space Shuttle. Discussion of the dynamic forcing functions is included as well. The process of performing a dynamic CLA of NASA hardware during marine transport is thought to be a first and can likely support minimization of undue conservatism.

  7. Frequency-tuned microwave photon counter based on a superconductive quantum interferometer

    NASA Astrophysics Data System (ADS)

    Shnyrkov, V. I.; Yangcao, Wu; Soroka, A. A.; Turutanov, O. G.; Lyakhno, V. Yu.

    2018-03-01

    Various types of single-photon counters operating in infrared, ultraviolet, and optical wavelength ranges are successfully used to study electromagnetic fields, analyze radiation sources, and solve problems in quantum informatics. However, their operating principles become ineffective at millimeter band, S-band, and ultra-high frequency bands of wavelengths due to the decrease in quantum energy by 4-5 orders of magnitude. Josephson circuits with discrete Hamiltonians and qubits are a good foundation for the construction of single-photon counters at these frequencies. This paper presents a frequency-tuned microwave photon counter based on a single-junction superconducting quantum interferometer and flux qutrit. The control pulse converts the interferometer into a two-level system for resonance absorption of photons. Decay of the photon-induced excited state changes the magnetic flux in the interferometer, which is measured by a SQUID magnetometer. Schemes for recording the magnetic flux using a DC SQUID or ideal parametric detector, based on a qutrit with high-frequency excitation, are discussed. It is shown that the counter consisting of an interferometer with a Josephson junction and a parametric detector demonstrates high performance and is capable of detecting single photons in a microwave band.

  8. Over-the-counter sales of antibiotics from community pharmacies in Abu Dhabi.

    PubMed

    Dameh, Majd; Green, James; Norris, Pauline

    2010-10-01

    The aim of this study is to investigate over-the-counter sales of antibiotics from community pharmacies in Abu Dhabi city, focusing on the extent and the demographic and socioeconomic determinants of this practice. The study was conducted in the capital of the United Arab Emirates, Abu Dhabi, and involved 17 randomly selected private pharmacies. A cross-sectional design was used, with structured observations of 30 clients purchasing antibiotics from pharmacy staff (either a pharmacist or a pharmacy assistant) at each selected pharmacy. A total of 510 interactions were observed. Statistical analysis was performed using SPSS. The outcomes were the extent and types of antibiotics sold over-the-counter in Abu Dhabi city as observed in the selected sample of community pharmacies, and the demographic and socioeconomic factors that contributed to this practice. Sixty-eight percent (68.4%) of the observed antibiotic sales were over-the-counter, without prescriptions. Injection antibiotics constituted 2.2% of the antibiotics sold, 45.5% of which were sold over-the-counter. Combinations of penicillins including β-lactamase inhibitors (34.0%), penicillins with extended spectrum (22.3%) and second generation cephalosporins (11.2%) were the most commonly sold antibiotic groups. Respiratory conditions (63.1%) were the most frequent reason for purchasing antibiotics. Over-the-counter sales of antibiotics were related to client ethnicity and age, gender of pharmacy staff, and health complaint. Our study revealed high sales of over-the-counter antibiotics, despite this being illegal. The ineffectiveness of antibiotics in treating respiratory conditions of viral origin and the effects of such practice on the emergence of bacterial resistance necessitate prompt action.

  9. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as show that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. 
Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing in addition to improving RFI detection performance.
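The detection statistic behind such a detector is compact enough to sketch. Below is a minimal NumPy model of a complex-signal kurtosis test; the acceptance window is an invented illustration, not a SMAP or ROACH threshold, and the flight hardware is a pipelined fixed-point design rather than this floating-point code:

```python
import numpy as np

def complex_kurtosis(z):
    """Normalized fourth moment of a zero-mean complex signal.

    For circularly-symmetric complex Gaussian noise (RFI-free
    radiometer data) this statistic is ~2; constant-envelope RFI
    pulls it below 2, while strongly pulsed RFI pushes it above.
    """
    z = np.asarray(z) - np.mean(z)
    p2 = np.mean(np.abs(z) ** 2)   # second moment (signal power)
    p4 = np.mean(np.abs(z) ** 4)   # fourth moment
    return p4 / p2 ** 2

def rfi_flag(z, lo=1.9, hi=2.1):
    """Flag a data block as RFI-contaminated when the kurtosis leaves
    the Gaussian acceptance window [lo, hi] (window bounds here are
    illustrative assumptions)."""
    k = complex_kurtosis(z)
    return k < lo or k > hi
```

A block-by-block sweep of this statistic over the digitized baseband is the essence of the detector; the ROC comparisons in the abstract amount to varying `lo`/`hi` and counting detections versus false alarms.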

  10. Targeting TMPRSS2-ERG in Prostate Cancer

    DTIC Science & Technology

    2015-09-01

    small molecule microarrays ( SMM ) with lysates overexpressing ERG (months 1-12) 3a. Request compounds from compound management that scored from...preliminary SMM screen using 293T lysates overexpressing ERG (months 1-2 – completed November 2013) 3b. Reprint slides with compounds from 3a for...counter SMM assays (months 3-4 – completed December 2013) 3c. Perform counter SMM assay with 293T lysates expressing unrelated protein (months 5-10

  11. Targeting TMPRSS2-ERG in Prostate Cancer

    DTIC Science & Technology

    2014-09-01

    microarrays ( SMM ) with lysates overexpressing ERG (months 1-12) 3a. Request compounds from compound management that scored from preliminary SMM ...screen using 293T lysates overexpressing ERG (months 1-2 – completed November 2013) 3b. Reprint slides with compounds from 3a for counter SMM ...assays (months 3-4 – completed December 2013) 3c. Perform counter SMM assay with 293T lysates expressing unrelated protein (months 5-10 – completed

  12. Software feedback for monochromator tuning at UNICAT (abstract)

    NASA Astrophysics Data System (ADS)

    Jemian, Pete R.

    2002-03-01

Automatic tuning of double-crystal monochromators presents an interesting challenge in software. The goal is to either maximize, or hold constant, the throughput of the monochromator. An additional goal of the software feedback is to disable itself when there is no beam and then, at the user's discretion, re-enable itself when the beam returns. These and other routine goals, such as adherence to limits of travel for positioners, are maintained by software controls. Many solutions exist to lock in and maintain a fixed throughput. These include a hardware solution involving a waveform generator and a lock-in amplifier to autocorrelate the movement of a piezoelectric transducer (PZT) providing fine adjustment of the second-crystal Bragg angle. This solution does not work when the positioner is a slow-acting device such as a stepping motor. Proportional-integral-derivative (PID) loops have been used to provide feedback through software, but additional controls must be provided to maximize the monochromator throughput. Presented here is a software variation of the PID loop which meets the above goals. Using two floating-point variables as inputs, representing the intensity of x rays measured before and after the monochromator, it attempts to maximize (or hold constant) the ratio of these two inputs by adjusting an output floating-point variable. These floating-point variables are connected to hardware channels corresponding to detectors and positioners. When the inputs go out of range, the software will stop making adjustments to the control output. Not limited to monochromator feedback, the software could be used, with beam-steering positioners, to maintain a measure of beam position. Advantages of this software feedback lie in the flexibility of its various components. It has been used with stepping motors and PZTs as positioners. Various devices such as ion chambers, scintillation counters, photodiodes, and photoelectron collectors have been used as detectors. The software provides significant cost savings over hardware feedback methods. Presently implemented in EPICS, the software is sufficiently general to be used with any automated instrument control system.
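The maximize-the-ratio behaviour can be illustrated as a simple hill-climbing step; this is a generic sketch of the idea, not the EPICS implementation, and the state layout and gain are invented for the example:

```python
def tune_step(ratio, state, gain=0.5):
    """One software-feedback iteration: step the positioner output in the
    current direction, reversing direction whenever the measured
    intensity ratio (after/before the monochromator) has decreased.
    `state` holds 'last_ratio', 'direction' (+1 or -1) and 'output'."""
    if ratio < state['last_ratio']:
        state['direction'] *= -1          # throughput fell: walk back
    state['output'] += gain * state['direction']
    state['last_ratio'] = ratio
    return state['output']
```

In practice the surrounding loop would also stop calling `tune_step` when either intensity input drops out of range (no beam), matching the disable/re-enable behaviour described above.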

  13. Distributed Simulation Testing for Weapons System Performance of the F/A-18 and AIM-120 AMRAAM

    DTIC Science & Technology

    1998-01-01

Support Facility (WSSF) at China Lake, CA and the AIM-120 Hardware in the Loop (HWIL) laboratory at Point Mugu, CA. The link was established in response to...ROCKET MOTOR TARGET DETECTION (FUZE) SEEKER ASSEMBLY ANTENNA TRANSMITTER ACTUATOR ELECTRONICS DATA LINK PARAMETERS...test series. 3.2 Hardware in the Loop: The AMRAAM Hardware-In-the-Loop (HWIL) lab located at the Naval Air Warfare Center in Point Mugu, CA provides

  14. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120Hz while frame locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  15. Orbiter wheel and tire certification

    NASA Technical Reports Server (NTRS)

    Campbell, C. C., Jr.

    1985-01-01

    The orbiter wheel and tire development has required a unique series of certification tests to demonstrate the ability of the hardware to meet severe performance requirements. Early tests of the main landing gear wheel using conventional slow roll testing resulted in hardware failures. This resulted in a need to conduct high velocity tests with crosswind effects for assurance that the hardware was safe for a limited number of flights. Currently, this approach and the conventional slow roll and static tests are used to certify the wheel/tire assembly for operational use.

  16. Stretched Lens Array (SLA) Photovoltaic Concentrator Hardware Development and Testing

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael; O'Neill, Mark J.; Eskenazi, Michael

    2003-01-01

    Over the past two years, the Stretched Lens Array (SLA) photovoltaic concentrator has evolved, under a NASA contract, from a concept with small component demonstrators to operational array hardware that is ready for space validation testing. A fully-functional four panel SLA solar array has been designed, built and tested. This paper will summarize the focus of the hardware development effort, discuss the results of recent testing conducted under this program and present the expected performance of a full size 7kW array designed to meet the requirements of future space missions.

  17. Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy

    PubMed Central

    Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern

    2011-01-01

This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize throughput of the computation. Moreover, the number of hardware multipliers and dividers is minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming low hardware resources for designing an embedded DHM system. PMID:22163688
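The least-squares idea behind such unwrappers can be sketched with a periodic-boundary FFT Poisson solve; this NumPy version is only a functional illustration and does not reproduce the paper's pipelined hardware or its exact boundary handling:

```python
import numpy as np

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

def unwrap_ls(psi):
    """Minimum-squared-error phase unwrapping via an FFT Poisson solve
    (periodic boundaries). Returns the phase up to an additive constant."""
    # Wrapped phase differences are the best local gradient estimates.
    dx = wrap(np.roll(psi, -1, axis=1) - psi)
    dy = wrap(np.roll(psi, -1, axis=0) - psi)
    # Divergence of that gradient field = discrete Laplacian of the answer.
    rho = dx - np.roll(dx, 1, axis=1) + dy - np.roll(dy, 1, axis=0)
    m, n = psi.shape
    ky = 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m)[:, None]
    kx = 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)[None, :]
    denom = kx + ky - 4.0                  # eigenvalues of the Laplacian
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    phi_hat = np.fft.fft2(rho) / denom
    phi_hat[0, 0] = 0.0                    # the constant offset is arbitrary
    return np.real(np.fft.ifft2(phi_hat))
```

Because every stage is an FFT, an elementwise divide, or a difference, the whole computation maps naturally onto the kind of pipelined datapath the abstract describes.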

  18. Space station common module power system network topology and hardware development

    NASA Technical Reports Server (NTRS)

    Landis, D. M.

    1985-01-01

Candidate power system network topologies for the space station common module are defined and developed and the necessary hardware for test and evaluation is provided. Martin Marietta's approach to performing the proposed program is presented. Performance of the tasks described will assure systematic development and evaluation of program results, and will provide the necessary management tools, visibility, and control techniques for performance assessment. The plan is submitted in accordance with the data requirements given and includes a comprehensive task logic flow diagram, time phased manpower requirements, a program milestone schedule, and detailed descriptions of each program task.

  19. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    PubMed

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.

  20. Palmprint and face score level fusion: hardware implementation of a contactless small sample biometric system

    NASA Astrophysics Data System (ADS)

    Poinsot, Audrey; Yang, Fan; Brost, Vincent

    2011-02-01

Including multiple sources of information in personal identity recognition and verification gives the opportunity to greatly improve performance. We propose a contactless biometric system that combines two modalities: palmprint and face. Hardware implementations are proposed on the Texas Instruments Digital Signal Processor and Xilinx Field-Programmable Gate Array (FPGA) platforms. The algorithmic chain consists of a preprocessing (which includes palm extraction from hand images), Gabor feature extraction, comparison by Hamming distance, and score fusion. Fusion possibilities are discussed and tested first using a bimodal database of 130 subjects that we designed (uB database), and then two common public biometric databases (AR for face and PolyU for palmprint). High performance has been obtained for recognition and verification purposes: a recognition rate of 97.49% with the AR-PolyU database and an equal error rate of 1.10% on the uB database using only two training samples per subject. Hardware results demonstrate that preprocessing can easily be performed during the acquisition phase, and multimodal biometric recognition can be treated almost instantly (0.4 ms on FPGA). We show the feasibility of a robust and efficient multimodal hardware biometric system that offers several advantages, such as user-friendliness and flexibility.
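The matching and fusion stages of such a chain reduce to very small kernels. A minimal sketch, assuming binarized Gabor codes and equal fusion weights (the paper's actual feature sizes and tuned weights are not reproduced here):

```python
import numpy as np

def hamming_score(code_a, code_b):
    """Normalized Hamming distance between two binary feature codes
    (e.g. binarized Gabor responses): 0 = identical, 1 = complementary."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def fused_score(palm_pair, face_pair, w_palm=0.5):
    """Weighted-sum score-level fusion of the palmprint and face match
    scores; the 0.5/0.5 weighting is an illustrative assumption.
    Lower fused score = better match."""
    s_palm = hamming_score(*palm_pair)
    s_face = hamming_score(*face_pair)
    return w_palm * s_palm + (1.0 - w_palm) * s_face
```

Both kernels are bit-level and branch-free, which is why the comparison step maps so cheaply onto DSP and FPGA hardware.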

  1. Acceleration of fluoro-CT reconstruction for a mobile C-Arm on GPU and FPGA hardware: a simulation study

    NASA Astrophysics Data System (ADS)

    Xue, Xinwei; Cheryauka, Arvi; Tubbs, David

    2006-03-01

    CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operational room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms: a single CPU, a single GPU, and a solution based on FPGA technology have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, but gaining a performance increase at one or more orders of magnitude faster. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.

  2. Round Robin Fatigue Crack Growth Testing Results

    DTIC Science & Technology

    2006-11-01

testing was accomplished, in accordance with ASTM E647, using two different capacity SATEC frames-a 20 kip test frame for the 7075-T6 panels and a 55 kip...Equipment and Setup a. SATEC b. 20 kip (7075-T6); 55 kip (2024-T351) c. Test control hardware/software i. Hardware: TestStar IIm ii. Software: Station...

  3. Performance of the Dual BAK-12 Aircraft Arresting System with Modular Hardware with Deadloads and Aircraft

    DTIC Science & Technology

    1976-04-15

System, Dual-System, Single-Mode, and Dual-Mode configurations. Tests were conducted to determine the feasibility of incorporating modular hardware on a...and 11-1/2 feet OFF-CENTER with the BAK-12 configured in the Single and Dual Mode to determine the effect of engaging the aircraft arresting-hook...cable OFF-CENTER. 90,000-pound deadload arrestments were conducted ON-CENTER in the Dual Mode to determine system performance with high-energy

  4. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
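The limited-precision execution studied here amounts to fixed-point quantization of the trained weights. A minimal model of that constraint (the rounding scheme is an assumption of this sketch, not necessarily the one used on SpiNNaker or the VLSI platforms):

```python
import numpy as np

def quantize(w, bits):
    """Quantize weights in [-1, 1) to a signed fixed-point grid with
    the given total bit width (1 sign bit + bits-1 fractional bits),
    rounding to the nearest representable level and saturating."""
    scale = 2.0 ** (bits - 1)
    return np.clip(np.round(w * scale), -scale, scale - 1) / scale
```

Re-evaluating a trained network with `quantize(w, bits)` for decreasing `bits` is the basic experiment behind the "down to almost two bits" tolerance result quoted above.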

  5. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    PubMed Central

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B.; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169

  6. Performance of dye sensitized solar cells (DSSC) using Syngonium Podophyllum Schott as natural dye and counter electrode

    NASA Astrophysics Data System (ADS)

    Oktariza, Lingga Ghufira; Yuliarto, Brian; Suyatman

    2018-05-01

Chlorophyll pigment was extracted from Syngonium podophyllum Schott leaves and used as a natural dye in these DSSC devices. The use of a natural dye, with its simple production process, is very effective in reducing DSSC production cost. Besides serving as a natural dye, chlorophyll can also be used as an alternative counter electrode; for this purpose the chlorophyll was subjected to chemical activation and carbonization processes. Characterization was done using UV-Vis spectroscopy, cyclic voltammetry, and measurement of the DSSC device under a solar simulator. UV-Vis characterization of the chlorophyll absorbance showed typical absorbance peaks at visible-light wavelengths of 447 nm and 666 nm, and Tauc analysis of the UV-Vis data yielded a chlorophyll energy gap of 1.91 eV. The carbonized chlorophyll dye, used as an alternative to a Pt counter electrode, resulted in a lower conversion efficiency of 0.308% with HSE electrolyte.
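The Tauc extrapolation behind a band-gap estimate like the 1.91 eV figure can be sketched numerically; the direct-transition exponent and the crude fitting-window choice below are assumptions of this illustration, not the paper's procedure:

```python
import numpy as np

def tauc_gap(energy_ev, alpha, r=2):
    """Estimate an optical band gap by the Tauc method: form
    (alpha * h*nu)^r versus photon energy, fit the linear rising edge
    (here crudely: points above half the maximum), and extrapolate
    the line to zero absorption. r=2 corresponds to a direct allowed
    transition."""
    y = (alpha * energy_ev) ** r
    edge = y > 0.5 * y.max()              # crude linear-region selection
    slope, intercept = np.polyfit(energy_ev[edge], y[edge], 1)
    return -intercept / slope             # x-intercept = band gap (eV)
```

With real absorbance data the fitting window is chosen by inspecting the Tauc plot rather than by a fixed threshold.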

  7. An improved PRoPHET routing protocol in delay tolerant network.

    PubMed

    Han, Seung Deok; Chung, Yun Won

    2015-01-01

In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward based routing. In DTN, a source node or an intermediate node stores packets in a buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and finally are delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (probability routing protocol using history of encounters and transitivity) protocol by employing the epidemic protocol for disseminating a message m if its forwarding counter and hop counter values are smaller than or equal to threshold values. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
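The hybrid forwarding rule can be sketched in a few lines; the threshold values and the dict-based node model here are illustrative assumptions, not the paper's simulation parameters:

```python
def should_forward(msg, my_pred, neighbor_pred, dest, f_thresh=2, h_thresh=2):
    """Hybrid rule sketched from the improved protocol: flood
    epidemically while the message is young (forwarding counter and
    hop counter at or below their thresholds), then fall back to
    PRoPHET's delivery-predictability comparison."""
    if msg['forward_count'] <= f_thresh and msg['hop_count'] <= h_thresh:
        return True                                   # epidemic phase
    # PRoPHET phase: forward only toward higher delivery predictability
    return neighbor_pred.get(dest, 0.0) > my_pred.get(dest, 0.0)
```

Raising the thresholds trades extra copies (overhead) for faster early dissemination, which is exactly the tuning knob the abstract describes.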

  8. Electrokinetic Analysis of Cell Translocation in Low-Cost Microfluidic Cytometry for Tumor Cell Detection and Enumeration.

    PubMed

    Guo, Jinhong; Pui, Tze Sian; Ban, Yong-Ling; Rahman, Abdur Rub Abdur; Kang, Yuejun

    2013-12-01

Conventional Coulter counters have been an important tool in biological cell assays for several decades. Recently, the emerging portable Coulter counter has demonstrated its merits in point-of-care diagnostics, such as on-chip detection and enumeration of circulating tumor cells (CTC). The working principle is based on the cell translocation time and the amplitude of the electrical current change that the cell induces. In this paper, we provide an analysis of a Coulter counter that evaluates the hydrodynamic and electrokinetic properties of polystyrene microparticles in a microfluidic channel. The hydrodynamic force and electrokinetic force are concurrently analyzed to determine the translocation time and the electrical current pulses induced by the particles. Finally, we characterize the chip performance for CTC detection. The experimental results validate the numerical analysis of the microfluidic chip. The presented model can provide critical insight and guidance for developing micro-Coulter counters for point-of-care prognosis.
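The amplitude side of that working principle follows the classic resistive-pulse relation; the one-line model below is Maxwell's small-particle approximation and ignores the shape-correction factors treated in the paper's full analysis:

```python
def relative_pulse(d, pore_d, pore_l):
    """Approximate relative resistance change dR/R when an insulating
    sphere of diameter d transits a cylindrical sensing pore of
    diameter pore_d and length pore_l (valid for d much smaller than
    pore_d): dR/R ~ d^3 / (pore_d^2 * pore_l)."""
    return d ** 3 / (pore_d ** 2 * pore_l)
```

The cubic dependence on particle diameter is what lets a current-pulse amplitude discriminate large tumor cells from smaller blood cells.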

  9. Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing

    NASA Technical Reports Server (NTRS)

    Dobbs, Carl, Sr.

    2012-01-01

A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, for implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory, and calculates a form of sumcheck on-the-fly, thereby relieving the processors of calculating the sumcheck in software.

  10. Lab at Home: Hardware Kits for a Digital Design Lab

    ERIC Educational Resources Information Center

    Oliver, J. P.; Haim, F.

    2009-01-01

    An innovative laboratory methodology for an introductory digital design course is presented. Instead of having traditional lab experiences, where students have to come to school classrooms, a "lab at home" concept is proposed. Students perform real experiments in their own homes, using hardware kits specially developed for this purpose. They…

  11. Implementation strategies to promote community-engaged efforts to counter tobacco marketing at the point of sale.

    PubMed

    Leeman, Jennifer; Myers, Allison; Grant, Jennifer C; Wangen, Mary; Queen, Tara L

    2017-09-01

    The US tobacco industry spends $8.2 billion annually on marketing at the point of sale (POS), a practice known to increase tobacco use. Evidence-based policy interventions (EBPIs) are available to reduce exposure to POS marketing, and nationwide, states are funding community-based tobacco control partnerships to promote local enactment of these EBPIs. Little is known, however, about what implementation strategies best support community partnerships' success enacting EBPI. Guided by Kingdon's theory of policy change, Counter Tools provides tools, training, and other implementation strategies to support community partnerships' performance of five core policy change processes: document local problem, formulate policy solutions, engage partners, raise awareness of problems and solutions, and persuade decision makers to enact new policy. We assessed Counter Tools' impact at 1 year on (1) partnership coordinators' self-efficacy, (2) partnerships' performance of core policy change processes, (3) community progress toward EBPI enactment, and (4) salient contextual factors. Counter Tools provided implementation strategies to 30 partnerships. Data on self-efficacy were collected using a pre-post survey. Structured interviews assessed performance of core policy change processes. Data also were collected on progress toward EBPI enactment and contextual factors. Analysis included descriptive and bivariate statistics and content analysis. Following 1-year exposure to implementation strategies, coordinators' self-efficacy increased significantly. Partnerships completed the greatest proportion of activities within the "engage partners" and "document local problem" core processes. Communities made only limited progress toward policy enactment. Findings can inform delivery of implementation strategies and tests of their effects on community-level efforts to enact EBPIs.

  12. An investigation of acoustic noise requirements for the Space Station centrifuge facility

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy

    1994-01-01

    Acoustic noise emissions from the Space Station Freedom (SSF) centrifuge facility hardware represent a potential technical and programmatic risk to the project. The SSF program requires that no payload exceed a Noise Criterion 40 (NC-40) noise contour in any octave band between 63 Hz and 8 kHz as measured 2 feet from the equipment item. Past experience with life science experiment hardware indicates that this requirement will be difficult to meet. The crew has found noise levels on Spacelab flights to be unacceptably high. Many past Ames Spacelab life science payloads have required waivers because of excessive noise. The objectives of this study were (1) to develop an understanding of acoustic measurement theory, instruments, and technique, and (2) to characterize the noise emission of analogous Facility components and previously flown flight hardware. Test results from existing hardware were reviewed and analyzed. Measurements of the spectral and intensity characteristics of fans and other rotating machinery were performed. The literature was reviewed and contacts were made with NASA and industry organizations concerned with or performing research on noise control.

  13. Wireless Energy Harvesting Two-Way Relay Networks with Hardware Impairments.

    PubMed

    Peng, Chunling; Li, Fangwei; Liu, Huaping

    2017-11-13

This paper considers a wireless energy harvesting two-way relay (TWR) network where the relay has energy-harvesting abilities and the effects of practical hardware impairments are taken into consideration. In particular, a power splitting (PS) receiver is adopted at the relay to harvest, from the signals transmitted by the source nodes, the power it needs for relaying information between them, and each node is assumed to suffer from hardware impairments. We analyze the effect of hardware impairments on both decode-and-forward (DF) relaying and amplify-and-forward (AF) relaying networks. By utilizing the obtained new expressions of signal-to-noise-plus-distortion ratios, the exact analytical expressions of the achievable sum rate and ergodic capacities for both DF and AF relaying protocols are derived. Additionally, the optimal power splitting (OPS) ratio that maximizes the instantaneous achievable sum rate is formulated and solved for both protocols. The performances of DF and AF protocols are evaluated via numerical results, which also show the effects of various network parameters on the system performance and on the OPS ratio design.
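The power-splitting trade-off can be illustrated with a toy DF model; the unit-gain channel, the aggregate impairment level `kappa`, the harvesting efficiency `eta`, and the min-rate bottleneck formula below are simplifying assumptions of this sketch, not the paper's exact expressions:

```python
import numpy as np

def df_rate(rho, p=1.0, eta=0.6, kappa=0.1, n0=0.01):
    """Toy model: the relay splits received power p, harvesting
    eta*rho*p for its own transmission and decoding from the remaining
    (1-rho)*p. Hardware impairments add distortion noise proportional
    to signal power (kappa^2 * power, an EVM-style aggregate model)."""
    s1 = (1 - rho) * p                       # power left for decoding
    g1 = s1 / (kappa**2 * s1 + n0)           # SNDR at the relay
    s2 = eta * rho * p                       # harvested relay transmit power
    g2 = s2 / (kappa**2 * s2 + n0)           # SNDR at a source node
    return 0.5 * np.log2(1 + min(g1, g2))    # DF bottleneck rate

def optimal_ps_ratio(grid=np.linspace(0.01, 0.99, 99), **kw):
    """Grid search for the power-splitting ratio maximizing the rate."""
    return max(grid, key=lambda r: df_rate(r, **kw))
```

Note the distortion term caps the SNDR at 1/kappa^2 no matter how much power is available, which is the characteristic ceiling that hardware impairments impose in such analyses.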

  14. Analysis of a hardware and software fault tolerant processor for critical applications

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1993-01-01

    Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.

  15. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms

    PubMed Central

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.

    2016-01-01

A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms, but its weak point is that its hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement with respect to its hardware circuits, takes too many iterations for learning. The proposed learning algorithm is a hybrid of these two. The main learning is performed first with a software version of the BP algorithm, and then the learned weights are transplanted onto a hardware version of a neural circuit. At the time of the weight transplantation, a significant amount of output error would occur due to the characteristic difference between the software and the hardware. In the proposed method, such error is reduced via a complementary learning of the RWC algorithm, which is implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations upon several classical learning problems. PMID:28025566
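The RWC touch-up stage can be sketched as follows; the perturbation size, iteration count, and persistence rule are illustrative assumptions (the paper's version operates on analog circuit weights rather than a NumPy array):

```python
import numpy as np

def rwc_refine(weights, loss_fn, delta=0.01, iters=500, seed=0):
    """Random Weight Change (RWC) refinement: apply a random +/-delta
    perturbation to every weight simultaneously; keep repeating the
    same perturbation while the loss improves, otherwise draw a new
    random one. This mirrors RWC's role here as a hardware-friendly
    correction after BP-trained weights are transplanted."""
    rng = np.random.default_rng(seed)
    w = np.array(weights, dtype=float)
    best = loss_fn(w)
    pert = delta * rng.choice([-1.0, 1.0], size=w.shape)
    for _ in range(iters):
        cand = w + pert
        l = loss_fn(cand)
        if l < best:
            w, best = cand, l                          # improved: keep direction
        else:
            pert = delta * rng.choice([-1.0, 1.0], size=w.shape)
    return w, best
```

Only a loss evaluation and a sign flip per step are needed, which is why RWC is so much easier to realize in circuitry than backpropagation's gradient computation.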

  16. Space Station Freedom biomedical monitoring and countermeasures: Biomedical facility hardware catalog

    NASA Technical Reports Server (NTRS)

    1990-01-01

This hardware catalog covers the hardware proposed under the Biomedical Monitoring and Countermeasures Development Program supported by the Johnson Space Center. The hardware items are listed separately by item, and are in alphabetical order. Each hardware item specification consists of four pages. The first page describes background information with an illustration, definition and a history/design status. The second page identifies the general specifications, performance, rack interface requirements, problems, issues, concerns, physical description, and functional description. The level of hardware design reliability is also identified under the maintainability and reliability category. The third page specifies the mechanical design guidelines and assumptions. Described are the material types and weights, modules, and construction methods. Also described is an estimate of the percentage of construction which utilizes a particular method, and the percentage of required new mechanical design is documented. The fourth page analyzes the electronics, the scope of design effort, and the software requirements. Electronics are described by percentages of component types and new design. The design effort, as well as the software requirements, are identified and categorized.

  17. Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system

    NASA Technical Reports Server (NTRS)

    Prust, C. D.; Paul, D. J.; Burkemper, V. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods, one located on each side of the tail, and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  18. Development of a model counter-rotating type horizontal-axis tidal turbine

    NASA Astrophysics Data System (ADS)

    Huang, B.; Yoshida, K.; Kanemoto, T.

    2016-05-01

    In the past decade, tidal energy has attracted worldwide attention, as it can provide a regular and predictable renewable energy resource for power generation. The majority of technologies for exploiting tidal stream energy are based on the concept of the horizontal-axis tidal turbine (HATT). A unique counter-rotating type HATT was proposed in the present work. The original blade profiles were designed according to the developed blade element momentum theory (BEMT). CFD simulations and experimental tests were used to evaluate the performance of the model counter-rotating type HATT. The experimental data provide validation of the CFD model. Further optimization of the blade profiles was also carried out based on the CFD results.

  19. Design and optimization of non-clogging counter-flow microconcentrator for enriching epidermoid cervical carcinoma cells.

    PubMed

    Tran-Minh, Nhut; Dong, Tao; Su, Qianhua; Yang, Zhaochu; Jakobsen, Henrik; Karlsen, Frank

    2011-02-01

    Clogging failure is common when microfilters are used to concentrate living cells, for instance CaSki cell lines (epidermoid cervical carcinoma cells) on flat membrane structures. In order to avoid clogging, counter-flow concentration units with turbine-blade-like micropillars are proposed in the microconcentrator design. Owing to the unusual geometric profiles and the resulting microfluidic behavior, cell blocking does not occur even at the permeate entrances. A counter-flow microconcentrator was designed with both the processing layer and the collecting layer arranged in a fractal-based honeycomb structure. The device was optimized by coupling an Artificial Neural Network (ANN) with Computational Fluid Dynamics (CFD). Numerical results show an excellent concentration ratio for the final microconcentrator.

  20. A high resolution gas scintillation proportional counter for studying low energy cosmic X-ray sources

    NASA Technical Reports Server (NTRS)

    Hamilton, T. T.; Hailey, C. J.; Ku, W. H.-M.; Novick, R.

    1980-01-01

    In recent years much effort has been devoted to the development of large area gas scintillation proportional counters (GSPCs) suitable for use in X-ray astronomy. The paper deals with a low-energy GSPC for use in detecting sub-keV X-rays from cosmic sources. This instrument has a measured energy resolution of 85 eV (FWHM) at 149 eV over a sensitive area of 5 sq cm. The development of imaging capability for this instrument is discussed. Tests are performed on the feasibility of using an arrangement of several phototubes placed adjacent to one another to determine event locations in a large flat counter. A simple prototype has been constructed and successfully operated.

  1. CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment

    PubMed Central

    Manavski, Svetlin A; Valle, Giorgio

    2008-01-01

    Background Searching for similarities in protein and DNA databases has become a routine procedure in Molecular Biology. The Smith-Waterman algorithm has been available for more than 25 years. It is based on a dynamic programming approach that explores all the possible alignments between two sequences; as a result it returns the optimal local alignment. Unfortunately, the computational cost is very high, requiring a number of operations proportional to the product of the length of two sequences. Furthermore, the exponential growth of protein and DNA databases makes the Smith-Waterman algorithm unrealistic for searching similarities in large sets of sequences. For these reasons heuristic approaches such as those implemented in FASTA and BLAST tend to be preferred, allowing faster execution times at the cost of reduced sensitivity. The main motivation of our work is to exploit the huge computational power of commonly available graphic cards, to develop high performance solutions for sequence alignment. Results In this paper we present what we believe is the fastest solution of the exact Smith-Waterman algorithm running on commodity hardware. It is implemented in the recently released CUDA programming environment by NVidia. CUDA allows direct access to the hardware primitives of the last-generation Graphics Processing Units (GPU) G80. Speeds of more than 3.5 GCUPS (Giga Cell Updates Per Second) are achieved on a workstation running two GeForce 8800 GTX. Exhaustive tests have been done to compare our implementation to SSEARCH and BLAST, running on a 3 GHz Intel Pentium IV processor. Our solution was also compared to a recently published GPU implementation and to a Single Instruction Multiple Data (SIMD) solution. These tests show that our implementation performs from 2 to 30 times faster than any other previous attempt available on commodity hardware. 
Conclusions The results show that graphic cards are now sufficiently advanced to be used as efficient hardware accelerators for sequence alignment. Their performance is better than any alternative available on commodity hardware platforms. The solution presented in this paper allows large scale alignments to be performed at low cost, using the exact Smith-Waterman algorithm instead of the largely adopted heuristic approaches. PMID:18387198
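    The exact dynamic-programming recurrence that such GPU implementations accelerate can be sketched as follows (a minimal scoring-only version with a linear gap penalty; the CUDA implementation additionally parallelizes the work, e.g. across anti-diagonals and database sequences):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Exact Smith-Waterman local alignment score via dynamic programming.
    Fills an (len(a)+1) x (len(b)+1) score matrix H with H[i][j] the best
    local alignment ending at a[i-1], b[j-1]; cells are clamped at zero so
    alignments can restart anywhere."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

    Every cell depends only on its left, upper, and upper-left neighbors, which is what makes the anti-diagonal cells independent and therefore amenable to the massively parallel evaluation the paper exploits.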

  2. Real-time lens distortion correction: speed, accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
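    A typical radial distortion mapping of the kind evaluated at each mesh vertex can be sketched as follows (an illustrative polynomial model; the coefficients `k1`, `k2` and the center `(cx, cy)` are hypothetical, and the paper's exact model and polar mesh generation are not reproduced here):

```python
def undistort_point(xd, yd, k1, k2, cx=0.0, cy=0.0):
    """Map a distorted image point to its corrected position using a simple
    radial polynomial model: x_u = x_d * (1 + k1*r^2 + k2*r^4) about the
    distortion center. In a texture-mapping scheme this is evaluated only at
    mesh vertices; the graphics hardware interpolates between them."""
    x, y = xd - cx, yd - cy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + x * scale, cy + y * scale
```

    Precomputing this mapping at a sparse set of vertices is what moves the per-pixel cost off the main processor and onto the texture unit.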

  3. Evaluating the Performance of the NASA LaRC CMF Motion Base Safety Devices

    NASA Technical Reports Server (NTRS)

    Gupton, Lawrence E.; Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    This paper describes the initial measured performance results of the previously documented NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base hardware safety devices. These safety systems are required to prevent excessive accelerations that could injure personnel and damage simulator cockpits or the motion base structure. Excessive accelerations may be caused by erroneous commands or hardware failures driving an actuator to the end of its travel at high velocity, stepping a servo valve, or instantly reversing servo direction. Such commands may result from single-order failures of electrical or hydraulic components within the control system itself, or from aggressive or improper cueing commands from the host simulation computer. The safety systems must mitigate these high-acceleration events while minimizing the negative performance impacts. The system accomplishes this by controlling the rate of change of valve signals to limit excessive commanded accelerations. It also aids hydraulic cushion performance by limiting valve command authority as the actuator approaches its end of travel. The design takes advantage of inherent motion base hydraulic characteristics to implement all safety features using hardware-only solutions.

  4. An Evaluation of One-Sided and Two-Sided Communication Paradigms on Relaxed-Ordering Interconnect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Hargrove, Paul H.; Iancu, Costin

    The Cray Gemini interconnect hardware provides multiple transfer mechanisms and out-of-order message delivery to improve communication throughput. In this paper we quantify the performance of one-sided and two-sided communication paradigms with respect to: 1) the optimal available hardware transfer mechanism, 2) message ordering constraints, 3) per-node and per-core message concurrency. In addition to using Cray native communication APIs, we use UPC and MPI micro-benchmarks to capture one- and two-sided semantics respectively. Our results indicate that relaxing the message delivery order can improve performance up to 4.6x when compared with strict ordering. When hardware allows it, high-level one-sided programming models can already take advantage of message reordering. Enforcing the ordering semantics of two-sided communication comes with a performance penalty. Furthermore, we argue that exposing out-of-order delivery at the application level is required for next-generation programming models. Any ordering constraints in the language specifications reduce communication performance for small messages and increase the number of active cores required for peak throughput.

  5. Design and implementation of digital controllers for smart structures using field-programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Kelly, Jamie S.; Bowman, Hiroshi C.; Rao, Vittal S.; Pottinger, Hardy J.

    1997-06-01

    Implementation issues represent an unfamiliar challenge to most control engineers, and many techniques for controller design ignore these issues outright. Consequently, the design of controllers for smart structural systems usually proceeds without regard for their eventual implementation, thus resulting either in serious performance degradation or in hardware requirements that squander power, complicate integration, and drive up cost. The level of integration assumed by the Smart Patch further exacerbates these difficulties, and any design inefficiency may render the realization of a single-package sensor-controller-actuator system infeasible. The goal of this research is to automate the controller implementation process and to relieve the design engineer of implementation concerns like quantization, computational efficiency, and device selection. We specifically target Field Programmable Gate Arrays (FPGAs) as our hardware platform because these devices are highly flexible, power efficient, and reprogrammable. The current study develops an automated implementation sequence that minimizes hardware requirements while maintaining controller performance. Beginning with a state space representation of the controller, the sequence automatically generates a configuration bitstream for a suitable FPGA implementation. MATLAB functions optimize and simulate the control algorithm before translating it into the VHSIC hardware description language. These functions improve power efficiency and simplify integration in the final implementation by performing a linear transformation that renders the controller computationally friendly. The transformation favors sparse matrices in order to reduce multiply operations and the hardware necessary to support them; simultaneously, the remaining matrix elements take on values that minimize limit cycles and parameter sensitivity. 
The proposed controller design methodology is implemented on a simple cantilever beam test structure using FPGA hardware. The experimental closed loop response is compared with that of an automated FPGA controller implementation. Finally, we explore the integration of FPGA based controllers into a multi-chip module, which we believe represents the next step towards the realization of the Smart Patch.
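    The kind of linear transformation described above can be illustrated with a small state-space example (a sketch assuming a 2-state controller; the paper's transformation also balances limit-cycle and parameter-sensitivity criteria, which this toy diagonalization omits):

```python
import numpy as np

def transform(A, B, C, T):
    """Similarity transform z = inv(T) x: the controller's input/output
    behavior is unchanged while the realization becomes
    (inv(T) A T, inv(T) B, C T)."""
    Tinv = np.linalg.inv(T)
    return Tinv @ A @ T, Tinv @ B, C @ T

# Controller with distinct real eigenvalues (-1, -2): diagonalizable, so the
# transformed A matrix is sparse (diagonal) and needs fewer hardware multipliers.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
_, T = np.linalg.eig(A)        # columns of T are eigenvectors of A
Ad, Bd, Cd = transform(A, B, C, T)
```

    The zeroed off-diagonal entries of `Ad` correspond to multiply operations, and multiplier hardware, that the FPGA implementation no longer has to provide.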

  6. SU-F-T-249: Application of Human Factors Methods: Usability Testing in the Radiation Oncology Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warkentin, H; Bubric, K; Giovannetti, H

    2016-06-15

    Purpose: As a quality improvement measure, we undertook this work to incorporate usability testing into the implementation procedures for new electronic documents and forms used by four affiliated radiation therapy centers. Methods: A human factors specialist provided training in usability testing for a team of medical physicists, radiation therapists, and radiation oncologists from four radiotherapy centers. A usability testing plan was then developed that included controlled scenarios and standardized forms for qualitative and quantitative feedback from participants, including patients. Usability tests were performed by end users using the same hardware and viewing conditions that are found in the clinical environment. A pilot test of a form used during radiotherapy CT simulation was performed in a single department; feedback informed adaptive improvements to the electronic form, hardware requirements, resource accessibility, and the usability testing plan. Following refinements to the testing plan, usability testing was performed at three affiliated cancer centers with different vault layouts and hardware. Results: Feedback from the testing resulted in the detection of 6 critical errors (omissions and inability to complete a task without assistance), 6 non-critical errors (recoverable), and multiple suggestions for improvement. Usability problems with room layout were detected at one center and problems with hardware were detected at one center. Upon amalgamation and summary of the results, three key recommendations were presented to the document's authors for incorporation into the electronic form. Documented inefficiencies and patient safety concerns related to the room layout and hardware were presented to administration along with a request for funding to purchase upgraded hardware and accessories to allow a more efficient workflow within the simulator vault.
    Conclusion: By including usability testing as part of the process when introducing any new document or procedure into clinical use, associated risks can be identified and mitigated before patient care and clinical workflow are impacted.

  7. Performances of some low-cost counter electrode materials in CdS and CdSe quantum dot-sensitized solar cells.

    PubMed

    Jun, Hieng Kiat; Careem, Mohamed Abdul; Arof, Abdul Kariem

    2014-02-10

    Different counter electrode (CE) materials based on carbon and Cu2S were prepared for application in CdS and CdSe quantum dot-sensitized solar cells (QDSSCs). The CEs were prepared using low-cost and facile methods. Platinum was used as the reference CE material against which the performance of the other materials was compared. While carbon-based materials produced the best solar cell performance in CdS QDSSCs, platinum and Cu2S were superior in CdSe QDSSCs. The CE materials perform differently in the two types of QDSSCs because of the different sensitizers and polysulfide electrolyte compositions used. The poor performance of QDSSCs with some CE materials is largely due to lower photocurrent density and open-circuit voltage. Electrochemical impedance spectroscopy performed on the cells showed that the poor-performing QDSSCs had higher charge-transfer resistances and CPE values at their CE/electrolyte interfaces.

  8. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications, which often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  9. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high-level design of high performance signal processing algorithm implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real-time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to satisfy the dual requirements of high-level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control-intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned, efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  10. Neutron counter based on beryllium activation

    NASA Astrophysics Data System (ADS)

    Bienkowska, B.; Prokopowicz, R.; Scholz, M.; Kaczmarczyk, J.; Igielski, A.; Karpinski, L.; Paducha, M.; Pytel, K.

    2014-08-01

    The fusion reaction occurring in DD plasma is followed by emission of 2.45 MeV neutrons, which carry information about the fusion reaction rate and plasma parameters and properties as well. Neutron activation of beryllium has been chosen for detection of DD fusion neutrons. The cross-section for the reaction 9Be(n, α)6He has a useful threshold near 1 MeV, which means that undesirable multiply-scattered neutrons do not undergo that reaction and therefore are not recorded. The product of the reaction, 6He, decays with half-life T1/2 = 0.807 s, emitting β- particles which are easy to detect. A large-area gas-sealed proportional detector has been chosen as a counter of the β- particles leaving the activated beryllium plate. The plate, with optimized dimensions, adjoins the proportional counter entrance window. This set-up, equipped with the appropriate electronic components, forms the beryllium neutron activation counter. The neutron flux density on the beryllium plate can be determined from the number of counts, once a proper calibration procedure has been performed to establish that relation. Measurements with a known β-source have been done. In order to determine the detector response function, the experiment has been modeled by means of MCNP5, the Monte Carlo transport code. This allowed proper application of the results of transport calculations of β- particles emitted from radioactive 6He and reaching the proportional detector active volume. In order to test the counter system and measuring procedure, a number of experiments have been performed on PF devices. The experimental conditions have been simulated by means of MCNP5. The correctness of the simulation outcome has been proved by measurements with a known radioactive neutron source. The results of the DD fusion neutron measurements have been compared with other neutron diagnostics.
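    The count-to-flux relation described above rests on simple activation-decay algebra, which can be sketched as follows (an illustrative model only: the parameter names, the square-pulse irradiation assumption, and the single lumped efficiency `eff` are simplifications for this sketch, with the real efficiency obtained by calibration and MCNP5 modeling):

```python
import math

HALF_LIFE_6HE = 0.807                    # s, beta decay of 6He
LAMBDA = math.log(2) / HALF_LIFE_6HE     # decay constant, 1/s

def activated_nuclei(flux, n_be, sigma, t_irr):
    """6He nuclei present at the end of a square irradiation pulse of
    duration t_irr: production rate R = flux * n_be * sigma balanced
    against decay (saturation buildup)."""
    rate = flux * n_be * sigma
    return (rate / LAMBDA) * (1.0 - math.exp(-LAMBDA * t_irr))

def counts(flux, n_be, sigma, t_irr, t_count, eff):
    """Expected beta counts in a window [0, t_count] after irradiation,
    for a lumped detection efficiency eff (geometry, window transmission,
    gas gain), which in practice comes from calibration and MCNP modeling."""
    n0 = activated_nuclei(flux, n_be, sigma, t_irr)
    return eff * n0 * (1.0 - math.exp(-LAMBDA * t_count))
```

    Inverting `counts` for `flux` is the calibration relation the paper establishes experimentally; the short 0.807 s half-life is what makes a counting window of a few seconds after the pulse sufficient.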

  11. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions, including low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented; they are based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential units for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through the high performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.

  12. Quantum non-demolition phonon counter with a hybrid optomechanical system

    NASA Astrophysics Data System (ADS)

    Song, Qiao; Zhang, KeYe; Dong, Ying; Zhang, WeiPing

    2018-05-01

    A phonon counting scheme based on the control of polaritons in an optomechanical system is proposed. This approach permits us to measure the number of phonons in a quantum non-demolition (QND) manner for arbitrary modes, not limited by the frequency-matching condition as in usual photon-phonon scattering detections. The counter's performance on phonon number transfer and quantum state transfer is analyzed and simulated numerically, taking into account all relevant sources of noise.

  13. Test and Evaluation of a Prototyped Sensor-Camera Network for Persistent Intelligence, Surveillance, and Reconnaissance in Support of Tactical Coalition Networking Environments

    DTIC Science & Technology

    2006-06-01

    scenarios. The demonstration planned for May 2006, in Chiang Mai, Thailand, will have a first-responder, law enforcement, and counter-terrorism and counter... to local (Chiang Mai), theater (Bangkok), and global (Alameda, California) command and control centers. This fusion of information validates using... network performance to be tested during moderate environmental conditions. The third and fourth scenarios were conducted in Chiang Mai, Thailand

  14. A comparison of manual and electronic counting for total nucleated cell counts on synovial fluid from canine stifle joints.

    PubMed

    Atilola, M A; Lumsden, J H; Rooke, F

    1986-04-01

    Synovial fluids collected from the stifle joints of 20 physically normal adult dogs were subjected to cytological examination. A total nucleated cell count was performed on each sample using both an electronic cell counter and a hemocytometer. The mean of the total counts done with the electronic counter was significantly higher (1008 cells/microL) than that obtained manually with the hemocytometer (848 cells/microL).

  15. The use of UNIX in a real-time environment

    NASA Technical Reports Server (NTRS)

    Luken, R. D.; Simons, P. C.

    1986-01-01

    This paper describes a project to evaluate the feasibility of using commercial off-the-shelf hardware and the UNIX operating system to implement a real-time control and monitor system. A functional subset of the Checkout, Control and Monitor System was chosen as the test bed for the project. The project consists of three separate architecture implementations: a local area bus network, a star network, and a central host. The motivation for this project stemmed from the need to find a way to implement real-time systems without the cost burden of developing and maintaining custom hardware and unique software. Custom development has always been accepted as the only option because of the need to optimize the implementation for performance. However, with the cost/performance of today's hardware, the inefficiencies of high-level languages and portable operating systems can be effectively overcome.

  16. MetAlign 3.0: performance enhancement by efficient use of advances in computer hardware.

    PubMed

    Lommen, Arjen; Kools, Harrie J

    2012-08-01

    A new, multi-threaded version of the GC-MS and LC-MS data processing software metAlign has been developed which is able to utilize multiple cores on one PC. This new version was tested on three different multi-core PCs with different operating systems. The performance of noise reduction, baseline correction and peak-picking was 8- to 19-fold faster than the previous version on a single-core machine from 2008. The alignment was 5- to 10-fold faster. Factors influencing the performance enhancement are discussed. Our observations show that performance scales with the increase in processor core numbers we currently see in consumer PC hardware development.
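    The multi-core pattern such a rewrite exploits, independent per-scan work farmed out to one worker per core, can be sketched as follows (an illustrative sketch only, not metAlign's actual code; `process_scan` is a hypothetical stand-in for the per-scan processing stages):

```python
from concurrent.futures import ThreadPoolExecutor
import os

def process_scan(scan):
    """Hypothetical stand-in for per-scan work (noise reduction,
    baseline correction, peak picking would go here)."""
    return sum(x * x for x in scan)

def process_all(scans):
    """Distribute independent scans across worker threads, one per core.
    (In CPython, CPU-bound stages gain from threads only when the heavy
    work releases the GIL, e.g. in native extensions; metAlign itself
    is native code, so its threads scale with core count.)"""
    with ThreadPoolExecutor(max_workers=os.cpu_count() or 1) as pool:
        return list(pool.map(process_scan, scans))
```

    Because each scan is processed independently, the speedup is bounded mainly by the core count and any serial alignment stage, consistent with the 8- to 19-fold figures reported.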

  17. Performance of GeantV EM Physics Models

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures, with deeper vector pipelines or many-core technologies, brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  18. Qualification of Engineering Camera for Long-Duration Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni; Maki, Justin N.; Pourangi, Ali M.; Lee, Steven W.

    2012-01-01

    Qualification and verification of advanced electronic packaging and interconnect technologies, and various other hardware elements, for the Mars Exploration Rovers Spirit and Opportunity (MER) and Mars Science Laboratory (MSL) flight projects has been performed to enhance mission assurance. The qualification of hardware (an engineering camera) under extreme cold temperatures has been performed with reference to various Mars-related project requirements. Flight-like packages, sensors, and subassemblies were selected for the study to survive three times the total number of expected diurnal temperature cycles resulting from all environmental and operational exposures occurring over the life of the flight hardware, including all relevant manufacturing, ground operations, and mission phases. Qualification was performed by subjecting the flight-like hardware to the environmental temperature extremes and assessing any structural failures or degradation in electrical performance due to either overstress or thermal-cycle fatigue. Engineering camera packaging designs, charge-coupled devices (CCDs), and temperature sensors were successfully qualified for MER and MSL per JPL design principles. Package failures were observed during the qualification process, and package redesigns were made to enhance reliability and subsequent mission assurance. These results show the technology is promising for MSL, and especially for long-term missions to extreme temperature conditions. The engineering camera has been completely qualified for the MSL project, with proven ability to survive on Mars for 2010 sols (670 sols times three). Finally, the camera remained functional even after 2010 thermal cycles.

  19. A Hardware-in-the-Loop Simulation Platform for the Verification and Validation of Safety Control Systems

    NASA Astrophysics Data System (ADS)

    Rankin, Drew J.; Jiang, Jin

    2011-04-01

    Verification and validation (V&V) of safety control system quality and performance is required prior to installing control system hardware within nuclear power plants (NPPs). Thus, the objective of the hardware-in-the-loop (HIL) platform introduced in this paper is to verify the functionality of these safety control systems. The developed platform provides a flexible simulated testing environment which enables synchronized coupling between the real and simulated world. Within the platform, National Instruments (NI) data acquisition (DAQ) hardware provides an interface between a programmable electronic system under test (SUT) and a simulation computer. Further, NI LabVIEW resides on this remote DAQ workstation for signal conversion and routing between Ethernet and standard industrial signals as well as for user interface. The platform is applied to the testing of a simplified implementation of Canadian Deuterium Uranium (CANDU) shutdown system no. 1 (SDS1) which monitors only the steam generator level of the simulated NPP. CANDU NPP simulation is performed on a Darlington NPP desktop training simulator provided by Ontario Power Generation (OPG). Simplified SDS1 logic is implemented on an Invensys Tricon v9 programmable logic controller (PLC) to test the performance of both the safety controller and the implemented logic. Prior to HIL simulation, platform availability of over 95% is achieved for the configuration used during the V&V of the PLC. Comparison of HIL simulation results to benchmark simulations shows good operational performance of the PLC following a postulated initiating event (PIE).

  20. Enhancing quantum annealing performance for the molecular similarity problem

    NASA Astrophysics Data System (ADS)

    Hernandez, Maritza; Aramon, Maliheh

    2017-05-01

    Quantum annealing is a promising technique which leverages quantum mechanics to solve hard optimization problems. Considerable progress has been made in the development of a physical quantum annealer, motivating the study of methods to enhance the efficiency of such a solver. In this work, we present a quantum annealing approach to measure similarity among molecular structures. Implementing real-world problems on a quantum annealer is challenging due to hardware limitations such as sparse connectivity, intrinsic control error, and limited precision. In order to overcome the limited connectivity, a problem must be reformulated using minor-embedding techniques. Using a real data set, we investigate the performance of a quantum annealer in solving the molecular similarity problem. We provide experimental evidence that common practices for embedding can be replaced by new alternatives which mitigate some of the hardware limitations and enhance its performance. Common practices for embedding include minimizing either the number of qubits or the chain length and determining the strength of ferromagnetic couplers empirically. We show that current criteria for selecting an embedding do not improve the hardware's performance for the molecular similarity problem. Furthermore, we use a theoretical approach to determine the strength of ferromagnetic couplers. Such an approach removes the computational burden of the current empirical approaches and also results in hardware solutions that can benefit from simple local classical improvement. Although our results are limited to the problems considered here, they can be generalized to guide future benchmarking studies.
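
    The chain-strength trade-off described above can be illustrated with a brute-force toy model. In the sketch below, one logical spin is embedded as a two-qubit "chain" tied by a ferromagnetic coupler; the three-qubit layout, field, and coupling values are illustrative assumptions, not taken from the paper, and the ground state is found by exhaustive enumeration rather than annealing.

```python
from itertools import product

def energy(spins, F):
    """Ising energy of a toy embedded problem.

    q0 and q1 form a chain representing one logical spin, held together by a
    ferromagnetic coupler of strength -F; q2 is a second logical spin coupled
    to the chain through q1. A small field on q0 breaks ties.
    """
    q0, q1, q2 = spins
    return -F * q0 * q1 + 1.0 * q1 * q2 + 0.5 * q0

def ground_state(F):
    # brute-force minimum over all 2^3 spin assignments
    return min(product([-1, 1], repeat=3), key=lambda s: energy(s, F))

# With a sufficiently strong ferromagnetic coupler the chain stays aligned,
# so the embedded solution maps back to a consistent logical solution.
gs = ground_state(F=2.0)
assert gs[0] == gs[1]
```

    Setting the coupler too weak (e.g. F below the problem coupling strength) lets the chain "break" in low-energy states, which is why a principled choice of F matters; a theoretical prescription such as the one in the paper avoids tuning F empirically.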

  1. Independent Orbiter Assessment (IOA): Analysis of the guidance, navigation, and control subsystem

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Hiott, J. M.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Guidance, Navigation, and Control (GNC) Subsystem hardware are documented. The function of the GNC hardware is to respond to guidance, navigation, and control software commands to effect vehicle control and to provide sensor and controller data to GNC software. Some of the GNC hardware for which failure modes analysis was performed includes: hand controllers; Rudder Pedal Transducer Assembly (RPTA); Speed Brake Thrust Controller (SBTC); Inertial Measurement Unit (IMU); Star Tracker (ST); Crew Optical Alignment Sight (COAS); Air Data Transducer Assembly (ADTA); Rate Gyro Assemblies; Accelerometer Assembly (AA); Aerosurface Servo Amplifier (ASA); and Ascent Thrust Vector Control (ATVC). The IOA analysis process utilized available GNC hardware drawings, workbooks, specifications, schematics, and systems briefs for defining hardware assemblies, components, and circuits. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  2. Development of a hardware-in-loop attitude control simulator for a CubeSat satellite

    NASA Astrophysics Data System (ADS)

    Tapsawat, Wittawat; Sangpet, Teerawat; Kuntanapreeda, Suwat

    2018-01-01

    Attitude control is an important part of satellite on-orbit operation and greatly affects the performance of satellites. Testing of an attitude determination and control subsystem (ADCS) is very challenging since it requires reproducing the on-orbit attitude dynamics and space environment. This paper develops a low-cost hardware-in-loop (HIL) simulator for testing an ADCS of a CubeSat satellite. The simulator consists of a numerical simulation part, a hardware part, and a HIL interface hardware unit. The numerical simulation part includes orbital dynamics, attitude dynamics and Earth’s magnetic field. The hardware part is the real ADCS board of the satellite. The simulation part outputs satellite’s angular velocity and geomagnetic field information to the HIL interface hardware. Then, based on this information, the HIL interface hardware generates I2C signals mimicking the signals of the on-board rate-gyros and magnetometers and consequently outputs the signals to the ADCS board. The ADCS board reads the rate-gyro and magnetometer signals, calculates control signals, and drives the attitude actuators, which are three magnetic torquers (MTQs). The responses of the MTQs, sensed by a separate magnetometer, are fed back to the numerical simulation part, completing the HIL simulation loop. Experimental studies are conducted to demonstrate the feasibility and effectiveness of the simulator.
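
    The structure of such a closed HIL loop can be sketched in a few lines. In the minimal model below, the plant dynamics, gains, and single-axis simplification are placeholder assumptions (not values from the paper); the point is the data flow: the simulation side produces sensor values, a stand-in for the ADCS board computes an actuator command, and the command closes the loop back into the plant.

```python
I = 0.01          # moment of inertia about one axis, kg*m^2 (assumed)
omega = 0.2       # initial body rate, rad/s (assumed)
dt = 0.1          # simulation step, s
k_detumble = 0.005

def simulate_sensors(omega):
    """Numerical-simulation side: produce the values that the HIL interface
    hardware would convert into mimicked rate-gyro signals."""
    return {"gyro_rate": omega}

def adcs_controller(sensors):
    """Stand-in for the ADCS board: a simple rate-damping law playing the
    role of the magnetic detumbling logic."""
    return -k_detumble * sensors["gyro_rate"]  # commanded torque, N*m

for _ in range(200):
    torque = adcs_controller(simulate_sensors(omega))  # hardware side
    omega += (torque / I) * dt                         # plant side closes the loop

# The closed loop damps the initial tumble.
assert abs(omega) < 0.01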

  3. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware, while the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware, and its scalability and performance are evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
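
    The gradual charge injection that makes this model expensive for event-driven software can be seen in a time-stepped sketch of the synapse and membrane stages. All constants below are illustrative assumptions, not the paper's parameters; the conductance decays exponentially with its synaptic time constant, so each input spike injects charge over many time steps rather than instantaneously.

```python
import math

dt = 0.1            # time step, ms (assumed)
tau_syn = 2.0       # synaptic time constant, ms (assumed)
tau_mem = 10.0      # membrane time constant, ms (assumed)
threshold = 0.5     # firing threshold (assumed)

g = 0.0             # synaptic conductance state
v = 0.0             # membrane potential
spike_times = []

input_spikes = {5, 30}            # presynaptic spike arrival steps (assumed)
for step in range(100):
    if step in input_spikes:
        g += 1.0                  # each input spike increments the conductance
    g *= math.exp(-dt / tau_syn)  # exponential decay -> gradual charge injection
    v += dt * (-v / tau_mem + g)  # leaky integration of the synaptic current
    if v >= threshold:
        spike_times.append(step)
        v = 0.0                   # reset after the output spike

# The output spike lags the input spike because charge arrives gradually.
assert spike_times and spike_times[0] > 5
```

    Because every neuron must be updated at every time step, a time-driven hardware pipeline that processes these stages in parallel, as in the paper, is a natural fit.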

  4. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  5. The design and hardware implementation of a low-power real-time seizure detection algorithm

    NASA Astrophysics Data System (ADS)

    Raghunathan, Shriram; Gupta, Sumeet K.; Ward, Matthew P.; Worth, Robert M.; Roy, Kaushik; Irazoqui, Pedro P.

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 ± 0.02% and 88.9 ± 0.01% (mean ± SEα = 0.05), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.
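
    The auto-updating detection idea above can be sketched as an event-based detector with an adaptive threshold. The update rule, constants, and test signal below are assumptions for illustration only; the paper's quantitatively predicted thresholds and hardware realization are not reproduced.

```python
def detect(samples, ratio=3.0, alpha=0.999):
    """Flag samples whose amplitude exceeds an adaptive multiple of a slowly
    updated background level (a stand-in for the paper's threshold logic)."""
    baseline = None
    onsets = []
    for i, x in enumerate(samples):
        if baseline is None:
            baseline = abs(x)          # warm-up: seed the background estimate
            continue
        baseline = alpha * baseline + (1 - alpha) * abs(x)
        if abs(x) > ratio * max(baseline, 1e-9):
            onsets.append(i)           # event: amplitude exceeds the threshold
    return onsets

# Quiet background followed by a high-amplitude burst (synthetic data).
signal = [0.1] * 500 + [5.0] * 10
events = detect(signal)
assert events[0] == 500                # detection begins only at the burst
```

    The slow background update (alpha close to 1) keeps the threshold from chasing the burst itself, which is what lets the detector fire within an early onset window.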

  6. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.
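
    The performance-trending idea can be illustrated with a minimal sketch: fit a straight line to a monitored parameter and flag a sustained drift before an outright failure. The parameter name, readings, and limit below are hypothetical; the APU study's actual parameters are not given in this abstract.

```python
def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Hypothetical readings of a performance parameter over successive flights,
# showing a slow downward drift.
flights = list(range(10))
reading = [100.0 - 0.8 * f for f in flights]

drift = slope(flights, reading)
assert drift < -0.5   # trending flags degradation before a hard failure limit
```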

  7. Performance characterization of a Bosch CO sub 2 reduction subsystem

    NASA Technical Reports Server (NTRS)

    Heppner, D. B.; Hallick, T. M.; Schubert, F. H.

    1980-01-01

    The performance of Bosch hardware at the subsystem level (up to five-person capacity) was investigated in terms of five operating parameters: (1) reactor temperature, (2) recycle loop mass flow rate, (3) recycle loop gas composition (percent hydrogen), (4) recycle loop dew point and (5) catalyst density. Experiments were designed and conducted in which the five operating parameters were varied and Bosch performance recorded. A total of 12 carbon collection cartridges provided approximately 250 hours of operating time. Generally, one cartridge was used for each parameter that was varied. The Bosch hardware was found to perform reliably and reproducibly. No startup, reaction initiation or carbon containment problems were observed. Optimum performance points/ranges were identified for the five parameters investigated. The performance curves agreed with theoretical projections.

  8. Store-operate-coherence-on-value

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Heidelberger, Philip; Kumar, Sameer

    A system, method and computer program product for performing various store-operate instructions in a parallel computing environment that includes a plurality of processors and at least one cache memory device. A queue in the system receives, from a processor, a store-operate instruction that specifies under which condition a cache coherence operation is to be invoked. A hardware unit in the system runs the received store-operate instruction. The hardware unit evaluates whether a result of running the received store-operate instruction satisfies the condition. The hardware unit invokes a cache coherence operation on a cache memory address associated with the received store-operate instruction if the result satisfies the condition. Otherwise, the hardware unit does not invoke the cache coherence operation on the cache memory device.
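
    A behavioral sketch makes the conditional coherence idea concrete. The toy model below is an assumption-laden stand-in, not the patented hardware design: a store-add updates memory and invalidates other caches' copies only when the stored result satisfies the programmed condition.

```python
class Memory:
    """Toy shared memory tracking which cores hold a cached copy of a line."""

    def __init__(self):
        self.data = {}
        self.cached_copies = {0x100: {"core1", "core2"}}  # address -> sharers

    def store_add(self, addr, value, invoke_if):
        result = self.data.get(addr, 0) + value
        self.data[addr] = result
        if invoke_if(result):                 # condition evaluated in "hardware"
            self.cached_copies[addr] = set()  # coherence op: invalidate sharers
        return result

mem = Memory()
mem.store_add(0x100, 3, invoke_if=lambda r: r == 0)   # result is 3: no coherence op
assert mem.cached_copies[0x100] == {"core1", "core2"}
mem.store_add(0x100, -3, invoke_if=lambda r: r == 0)  # result is 0: invalidate
assert mem.cached_copies[0x100] == set()
```

    Skipping the coherence operation when the condition fails is what saves invalidation traffic for the common case.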

  9. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  10. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy in the reconstructed images.

  11. Design and evaluation of a fault-tolerant multiprocessor using hardware recovery blocks

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1982-01-01

    A fault-tolerant multiprocessor with a rollback recovery mechanism is discussed. The rollback mechanism is based on the hardware recovery block which is a hardware equivalent to the software recovery block. The hardware recovery block is constructed by consecutive state-save operations and several state-save units in every processor and memory module. When a fault is detected, the multiprocessor reconfigures itself to replace the faulty component and then the process originally assigned to the faulty component retreats to one of the previously saved states in order to resume fault-free execution. A mathematical model is proposed to calculate both the coverage of multi-step rollback recovery and the risk of restart. A performance evaluation in terms of task execution time is also presented.

  12. Efficient k-Winner-Take-All Competitive Learning Hardware Architecture for On-Chip Learning

    PubMed Central

    Ou, Chien-Min; Li, Hui-Ya; Hwang, Wen-Jyi

    2012-01-01

    A novel k-winners-take-all (k-WTA) competitive learning (CL) hardware architecture is presented for on-chip learning in this paper. The architecture is based on an efficient pipeline allowing k-WTA competition processes associated with different training vectors to be performed concurrently. The pipeline architecture employs a novel codeword swapping scheme so that neurons failing the competition for a training vector are immediately available for the competitions for the subsequent training vectors. The architecture is implemented on a field programmable gate array (FPGA). It is used as a hardware accelerator in a system on programmable chip (SOPC) for real-time on-chip learning. Experimental results show that the SOPC has significantly lower training time than that of other k-WTA CL counterparts operating with or without hardware support.
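
    A single k-WTA competition step can be modeled in software as follows. The distance metric, learning rate, and codebook below are illustrative assumptions; the FPGA pipeline and codeword-swapping mechanics of the paper are deliberately omitted.

```python
def kwta_step(codewords, x, k=2, lr=0.5):
    """One k-WTA competitive-learning update: the k codewords nearest to the
    training vector x win and move toward it; losers are untouched."""
    dists = [(sum((c - xi) ** 2 for c, xi in zip(cw, x)), i)
             for i, cw in enumerate(codewords)]
    winners = [i for _, i in sorted(dists)[:k]]
    for i in winners:
        codewords[i] = [c + lr * (xi - c) for c, xi in zip(codewords[i], x)]
    return winners

codebook = [[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]]
winners = kwta_step(codebook, x=[0.5, 0.5], k=2)
assert set(winners) == {0, 1}      # the two nearest neurons win
assert codebook[2] == [4.0, 4.0]   # the losing neuron is left unchanged
```

    In the hardware pipeline, a neuron that loses this competition is immediately swapped back into contention for the next training vector, which is what keeps the pipeline full.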

  13. Waste Collector System Technology Comparisons for Constellation Applications

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.

    2006-01-01

    The Waste Collection Systems (WCS) for space vehicles have utilized a variety of hardware for collecting human metabolic wastes. Resolving crew usability and hardware performance issues, which are difficult to duplicate on the ground, has typically required multiple missions. New space vehicles should leverage past WCS designs. Past WCS hardware designs are substantially different and unique for each vehicle. However, each WCS can be analyzed and compared as a subset of technologies encompassing fecal collection, urine collection, air systems, and pretreatment systems. Technology components from the WCS of various vehicles can then be combined to reduce hardware mass and volume while maximizing use of previous technology and proven human-equipment interfaces. Past US and Russian WCS are analyzed, compared, and extrapolated to Constellation missions.

  14. Real-time high speed generator system emulation with hardware-in-the-loop application

    NASA Astrophysics Data System (ADS)

    Stroupe, Nicholas

    The emerging emphasis and benefits of distributed generation on smaller scale networks has prompted much attention and focus to research in this field. Much of the research that has grown in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important, in which an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, this simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed by using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system, used to perform various tests on controls and stability under the expected non-linear load environment of the Navy weaponry. This test bed can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. In this thesis, the test bed will be utilized to perform and validate this newly developed method of generator system emulation: the dynamics of a high-speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real time.
The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed, which can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can also integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the drawbacks of needing physical generator units. Some of these advantages are safety, reduced cost, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and necessary steps to obtaining such an objective, and demonstrates real results and verification of numerical values in real time. The final goal of this thesis is to show that this new idea is in fact achievable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.

  15. Three-Dimensional Nanobiocomputing Architectures With Neuronal Hypercells

    DTIC Science & Technology

    2007-06-01

    Neumann architectures, and CMOS fabrication. Novel solutions of massive parallel distributed computing and processing (pipelined due to systolic... and processing platforms utilizing molecular hardware within an enabling organization and architecture. The design technology is based on utilizing a...Microsystems and Nanotechnologies investigated a novel 3D3 (Hardware Software Nanotechnology) technology to design super-high performance computing

  16. A hardware-oriented algorithm for floating-point function generation

    NASA Technical Reports Server (NTRS)

    O'Grady, E. Pearse; Young, Baek-Kyu

    1991-01-01

    An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
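
    The two key steps named above can be sketched in software: locate the breakpoint interval containing the input argument, then evaluate the stored segment. In the sketch below, binary search stands in for the read/write-memory lookup of the hardware implementation, and linear segments are an assumption; the breakpoints and values are arbitrary illustrations.

```python
import bisect

breakpoints = [0.0, 1.0, 2.5, 4.0]   # arbitrary, non-uniform breakpoints
values      = [0.0, 2.0, 1.0, 3.0]   # function value at each breakpoint

def func_gen(x):
    """Piecewise-linear function generation over arbitrary breakpoints."""
    # Rapid interval identification: find the segment containing x.
    i = bisect.bisect_right(breakpoints, x) - 1
    i = min(max(i, 0), len(breakpoints) - 2)   # clamp to the valid segments
    x0, x1 = breakpoints[i], breakpoints[i + 1]
    y0, y1 = values[i], values[i + 1]
    # Evaluate the segment by linear interpolation.
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

assert func_gen(0.5) == 1.0    # halfway along the first segment
assert func_gen(3.25) == 2.0   # halfway along the last segment
```

    As the abstract notes, interval identification dominates the cost; in hardware this lookup is what the memory-intensive design accelerates.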

  17. Benchmarking and Hardware-In-The-Loop Operation of a 2014 MAZDA SkyActiv (SAE 2016-01-1007)

    EPA Science Inventory

    Engine performance evaluation in support of LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test when combined with future leading-edge tech...

  18. Software cost/resource modeling: Software quality tradeoff measurement

    NASA Technical Reports Server (NTRS)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  19. Potential Damage to Flight Hardware from MIL-STD-462 CS02 Setup

    NASA Technical Reports Server (NTRS)

    Harris, Patrick K.; Block, Nathan F.

    2002-01-01

    The MIL-STD-462 CS02 conducted susceptibility test setup, performed during electromagnetic compatibility (EMC) testing, consists of an audio transformer with the secondary used as an inductor and a large capacitor. Together, these two components form an L-type low-pass filter to minimize the injected test signal input into the power source. Some flight hardware power input configurations are not compatible with this setup and break into oscillation when powered up. This can damage flight hardware and has caused a catastrophic failure of an item tested in the Goddard Space Flight Center (GSFC) Large EMC Test Facility.

  20. Core Community Specifications for Electron Microprobe Operating Systems: Software, Quality Control, and Data Management Issues

    NASA Technical Reports Server (NTRS)

    Fournelle, John; Carpenter, Paul

    2006-01-01

    Modern electron microprobe systems have become increasingly sophisticated. These systems utilize either UNIX or PC computer systems for measurement, automation, and data reduction, and have undergone major improvements in processing, storage, display, and communications due to increased capabilities of hardware and software. Instrument specifications are typically written at the time of purchase and concentrate on hardware performance. The microanalysis community includes analysts, researchers, software developers, and manufacturers, who could benefit from an exchange of ideas and the ultimate development of core community specifications (CCS) for the hardware and software components of microprobe instrumentation and operating systems.

  1. Long life assurance study for manned spacecraft long life hardware. Volume 1: Summary of long life assurance guidelines

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A long life assurance program for the development of design, process, test, and application guidelines for achieving reliable spacecraft hardware was conducted. The study approach consisted of a review of technical data performed concurrently with a survey of the aerospace industry. The data reviewed included design and operating characteristics, failure histories and solutions, and similar documents. The topics covered by the guidelines are reported. It is concluded that long life hardware is achieved through meticulous attention to many details and no simple set of rules can suffice.

  2. LDEF materials data analysis: Representative examples

    NASA Technical Reports Server (NTRS)

    Pippin, H. Gary; Crutcher, E. R.

    1992-01-01

    Results of measurements on silverized Teflon, heat-shrink tubing and nylon tie-downs on the wire harness clamps, silvered hex nuts, and contamination deposits are presented. We interpret the results in terms of our microenvironment exposure model and locations on the Long Duration Exposure Facility (LDEF). Distinct changes in the surface properties of FEP were observed as a function of UV exposure. Significant differences in outgassing characteristics were detected for hardware on the interior of row 3 relative to identical hardware on nearby rows. The implications for in-service performance are reviewed.

  3. An embedded controller for a 7-degree of freedom prosthetic arm.

    PubMed

    Tenore, Francesco; Armiger, Robert S; Vogelstein, R Jacob; Wenstrand, Douglas S; Harshbarger, Stuart D; Englehart, Kevin

    2008-01-01

    We present results from an embedded real-time hardware system capable of decoding surface myoelectric signals (sMES) to control a seven-degree-of-freedom upper limb prosthesis. This is one of the first hardware implementations of sMES decoding algorithms and the most advanced controller to date. We compare decoding results from the device to simulation results from a real-time PC-based operating system. Performance of both systems is shown to be similar, with decoding accuracy greater than 90% for the floating-point software simulation and 80% for fixed-point hardware and software implementations.

  4. Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End

    NASA Technical Reports Server (NTRS)

    Prokop, Norman; Krasowski, Michael

    2013-01-01

    This innovation is capable of correlating two analog signals by using an analog-based signal conditioning front end to hard-limit the analog signals, through adaptive thresholding, into a binary bit stream, then performing the correlation using a Hamming "similarity" calculator function embedded in a one-bit digital correlator (OBDC). By converting the analog signal into a bit stream, the calculation of the correlation function is simplified, and fewer hardware resources are needed. This binary representation allows the hardware to move from a DSP, where instructions are performed serially, into digital logic, where calculations can be performed in parallel, greatly speeding up calculations.
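
    The one-bit correlation scheme can be sketched in two short functions: hard-limit both signals against an adaptive threshold, then score alignment by counting agreeing bit positions (an XNOR/popcount in real digital logic). Thresholding by the running mean is an assumption for illustration; the actual front-end thresholding circuit is not described in this abstract.

```python
def hard_limit(signal):
    """Binarize a sampled analog signal against an adaptive threshold
    (the signal mean stands in for the front end's thresholding)."""
    thr = sum(signal) / len(signal)
    return [1 if s > thr else 0 for s in signal]

def hamming_similarity(a, b):
    """Number of agreeing bit positions: popcount of XNOR in hardware."""
    return sum(1 for x, y in zip(a, b) if x == y)

ref   = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]
same  = [0.2, 1.0, 0.1, 0.7, 0.0, 0.8]   # same pattern, different amplitudes
other = [0.9, 0.1, 0.8, 0.2, 0.9, 0.1]   # inverted pattern

a, b, c = hard_limit(ref), hard_limit(same), hard_limit(other)
assert hamming_similarity(a, b) == len(ref)  # correlated after hard-limiting
assert hamming_similarity(a, c) == 0         # anti-correlated
```

    Because the comparison reduces to per-bit equality, many lag positions can be scored in parallel in digital logic, which is the speedup the abstract describes.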

  5. On studies of 3He and isobutane mixture as neutron proportional counter gas

    NASA Astrophysics Data System (ADS)

    Desai, S. S.; Shaikh, A. M.

    2006-02-01

    The performance of neutron detectors filled with 3He+iC 4H 10 (isobutane) gas mixtures has been studied and compared with the performance of detectors filled with 3He+Kr gas mixtures. The investigations are made to determine suitable concentration of isobutane in the gas mixture to design neutron proportional counters and linear position sensitive neutron detectors (1-D PSDs). Energy resolution, range of proportionality, plateau and gas gain characteristics are studied for various gas mixtures of 3He and isobutane. The values for various gas constants are determined by fitting the gas gains to Diethorn and Bateman's equations and their variation with isobutane concentration in the fill gas mixture is studied.

  6. The NASA, Marshall Space Flight Center drop tube user's manual

    NASA Technical Reports Server (NTRS)

    Rathz, Thomas J.; Robinson, Michael B.

    1990-01-01

    A comprehensive description of the structural and instrumentation hardware and the experimental capabilities of the 105-meter Marshall Space Flight Center Drop Tube Facility is given. This document is to serve as a guide to the investigator who wishes to perform materials processing experiments in the Drop Tube. Particular attention is given to the Tube's hardware to which an investigator must interface to perform experiments. This hardware consists of the permanent structural hardware (with such items as vacuum flanges), and the experimental hardware (with the furnaces and the sample insertion devices). Two furnaces, an electron-beam and an electromagnetic levitator, are currently used to melt metallic samples in a process environment that can range from 10(exp -6) Torr to 1 atmosphere. Details of these furnaces, the processing environment gases/vacuum, the electrical power, and data acquisition capabilities are specified to allow an investigator to design his/her experiment to maximize successful results and to reduce experimental setup time on the Tube. Various devices used to catch samples while inflicting minimum damage and to enhance turnaround time between experiments are described. Enough information is provided to allow an investigator who wishes to build his/her own furnace or sample catch devices to easily interface it to the Tube. The experimental instrumentation and data acquisition systems used to perform pre-drop and in-flight measurements of the melting and solidification process are also detailed. Typical experimental results are presented as an indicator of the type of data that is provided by the Drop Tube Facility. A summary bibliography of past Drop Tube experiments is provided, and an appendix explaining the noncontact temperature determination of free-falling drops is provided. This document is to be revised occasionally as improvements to the Facility are made and as the summary bibliography grows.

  7. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.
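The source-to-source approach described above can be illustrated with a small sketch. BOAST itself is a Ruby meta-programming DSL; the hypothetical Python generator below only mimics the core idea of emitting equivalent CUDA and OpenCL kernel source from a single description (the `generate_saxpy` function and its SAXPY kernel are illustrative, not taken from SPECFEM3D_GLOBE):

```python
def generate_saxpy(target):
    """Emit equivalent SAXPY kernel source for CUDA or OpenCL from one
    description -- the idea behind BOAST-style source-to-source generation.
    Illustrative only; real spectral-element kernels are far more complex."""
    body = "if (i < n) y[i] = a * x[i] + y[i];"
    if target == "cuda":
        return ("__global__ void saxpy(int n, float a, "
                "const float *x, float *y) {\n"
                "  int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
                "  " + body + "\n}")
    if target == "opencl":
        return ("__kernel void saxpy(int n, float a, "
                "__global const float *x, __global float *y) {\n"
                "  int i = get_global_id(0);\n"
                "  " + body + "\n}")
    raise ValueError("unknown target: " + target)
```

Generating both variants from one kernel description is what lets a single code base run on NVIDIA, AMD, ARM, and Xeon Phi accelerators alike.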

  8. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with a discussion of their differences, pros, and cons. PMID:22518097

  9. Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.

    2012-01-01

Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA) and is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets Xilinx Virtex-4 FPGAs and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
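As a rough sketch of the coding stage, the snippet below implements plain Rice coding, the power-of-two special case of Golomb coding commonly used in predictive lossless compressors. It is illustrative only: the FL algorithm's actual predictor and parameter-adaptation rules are not reproduced here.

```python
def rice_encode(value, k):
    """Rice-encode a nonnegative integer with parameter k:
    quotient (value >> k) in unary, then the low k bits of the remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"               # unary quotient, terminated by 0
    if k:
        bits += format(r, "0%db" % k)  # k-bit binary remainder
    return bits

def rice_decode(bits, k):
    """Decode one Rice codeword; return (value, bits consumed)."""
    q = 0
    i = 0
    while bits[i] == "1":              # count unary ones
        q += 1
        i += 1
    i += 1                             # skip the terminating zero
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r, i + k
```

Small prediction residuals (after mapping to nonnegative values) yield short codewords, which is why a well-chosen adaptive k gives good compression at very low complexity.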

  10. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing.

    PubMed

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with a discussion of their differences, pros, and cons.

  11. Research of the absorbance detection and fluorescence detection for multifunctional nutrition analyzer

    NASA Astrophysics Data System (ADS)

    Ni, Zhengyuan; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda

    2017-10-01

The multifunctional nutrition analyzer, which integrates absorbance, fluorescence, time-resolved fluorescence, and biochemical luminescence detection methods, can efficiently detect and analyze a variety of human-body nutrients. This article focuses on the absorbance detection and fluorescence detection systems. The two systems are modular in design and controlled by an embedded system to achieve automatic measurement according to user settings. In the optical path design, a confocal arrangement improves optical signal acquisition and reduces interference. A photon counter is used for detection, and a high-performance counter module is designed to measure the output of the photon counter. In the experiments, we used neutral density filters and potassium dichromate solution to test the absorbance detection system, and fluorescein isothiocyanate (FITC) for the fluorescence detection system performance test. The experimental results show that the absorbance detection system has a detection range of 0-4 OD with good linearity across that range, while the fluorescence detection system has high sensitivity, detecting concentrations as low as 1 pmol/L.
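For reference, absorbance (optical density) follows directly from the ratio of incident to transmitted intensity, which a photon counter measures as count rates; a minimal sketch of the standard Beer-Lambert relation (function and argument names are illustrative, not the analyzer's firmware):

```python
import math

def absorbance(incident_counts, transmitted_counts):
    """Optical density A = log10(I0 / I), with I0 and I taken as
    photon-counter rates for the reference and sample paths."""
    return math.log10(incident_counts / transmitted_counts)
```

At the top of the reported 0-4 OD range only one photon in 10^4 is transmitted, which is why single-photon counting rather than analog photodetection is needed for accurate readings.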

  12. Enhancement of the performance of cadmium sulfide quantum dot solar cells using a platinum-polyaniline counter electrode and a silver nanoparticle-sensitized photoanode

    NASA Astrophysics Data System (ADS)

    Nourolahi, Hamzeh; Bolorizadeh, Mohammadagha A.; Dorri, Navid; Behjat, Abbas

    2017-07-01

A metal-polymer nanocomposite of platinum-polyaniline (Pt/PANI) was deposited on fluorine-doped tin oxide glass substrates to function as a counter electrode for polysulfide redox reactions in cadmium sulfide quantum dot-sensitized solar cells. In addition, front-side illuminated photoelectrodes were sensitized by silver (Ag) nanoparticles (NPs) as an interfacial layer between a transparent conducting oxide substrate and a TiO2 layer. This configuration, i.e., both the Pt/PANI counter electrode and the Ag NPs in the photoanode, leads to a power-conversion efficiency (PCE) of 1.92% for the fabricated cells. A PCE enhancement of around 21% was obtained for the Ag NP-sensitized photoanodes compared with the Ag NP-free one. The improved performance can be attributed to the easier transport of excited electrons and the inhibition of charge recombination due to the application of an Ag NP layer. Electrochemical impedance spectroscopy measurements showed that once Ag NPs are incorporated in a photoanode, electron transport time decreases in the photoanode structure.
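The reported figures follow from the standard PCE definition; a small sketch, assuming the usual AM1.5G input power of 100 mW/cm^2 (the helper names are illustrative and the J-V values in the test are not from this paper):

```python
def pce(voc, jsc, ff, pin=100.0):
    """Power-conversion efficiency in %, from open-circuit voltage Voc (V),
    short-circuit current density Jsc (mA/cm^2), fill factor FF, and
    incident power Pin (mW/cm^2; AM1.5G standard illumination = 100)."""
    return voc * jsc * ff / pin * 100.0

def enhancement(pce_new, pce_ref):
    """Relative PCE improvement in %, e.g. the ~21% gain reported
    for the Ag NP-sensitized photoanode over the NP-free one."""
    return (pce_new - pce_ref) / pce_ref * 100.0
```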

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas

Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the hardware and software challenges that will need to be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

  14. Extravehicular Activity (EVA) Power, Avionics, and Software (PAS) 101

    NASA Technical Reports Server (NTRS)

    Irimies, David

    2011-01-01

EVA systems consist of a spacesuit or garment, a portable life support system (PLSS), a PAS system, and spacesuit interface hardware. The PAS system is responsible for providing power for the suit, communication of several types of data between the suit and other mission assets, avionics hardware to perform numerous data display and processing functions, and information systems that provide crewmembers data to perform their tasks with more autonomy and efficiency. Irimies discussed how technology development efforts have advanced the state of the art in these areas and shared technology development challenges.

  15. Towards fully analog hardware reservoir computing for speech recognition

    NASA Astrophysics Data System (ADS)

    Smerieri, Anteo; Duport, François; Paquot, Yvan; Haelterman, Marc; Schrauwen, Benjamin; Massar, Serge

    2012-09-01

Reservoir computing is a very recent, neural-network-inspired unconventional computation technique, where a recurrent nonlinear system is used in conjunction with a linear readout to perform complex calculations, leveraging its inherent internal dynamics. In this paper we show the operation of an optoelectronic reservoir computer in which both the nonlinear recurrent part and the readout layer are implemented in hardware for a speech recognition application. The performance obtained is close to that of state-of-the-art digital reservoirs, while the analog architecture opens the way to ultrafast computation.

  16. Tethered satellite system dynamics and control review panel and related activities, phase 3

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Two major tests of the Tethered Satellite System (TSS) engineering and flight units were conducted to demonstrate the functionality of the hardware and software. Deficiencies in the hardware/software integration tests (HSIT) led to a recommendation for more testing to be performed. Selected problem areas of tether dynamics were analyzed, including verification of the severity of skip rope oscillations, verification or comparison runs to explore dynamic phenomena observed in other simulations, and data generation runs to explore the performance of the time domain and frequency domain skip rope observers.

  17. A SURVEY ON THE ACCURACY OF WHOLE-BODY COUNTERS OPERATED IN FUKUSHIMA AFTER THE NUCLEAR DISASTER.

    PubMed

    Nakano, T; Kim, E; Tani, K; Kurihara, O; Sakai, K

    2016-09-01

To check for internal contamination, whole-body counters (WBCs) have been used continuously in Fukushima prefecture since the 2011 disaster, and many WBCs have been installed recently. The accuracy of these WBCs has been tested with bottle manikin absorption phantoms. No significant problems with the performance or accuracy of the WBCs have been found. © The Author 2016. Published by Oxford University Press. All rights reserved.

  18. Optimized design of embedded DSP system hardware supporting complex algorithms

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for achieving flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because it employs the high performance-to-price-ratio DSP TMS320C6712 and a large FLASH, this system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially in the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.

  19. Dual-Chamber/Dual-Anode Proportional Counter Incorporating an Intervening Thin-Foil Solid Neutron Converter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boatner, Lynn A; Neal, John S; Blackston, Matthew A

    2012-01-01

A dual-chamber/dual-anode gas proportional counter utilizing thin solid 6LiF or 10B neutron converters coated on a 2-micron-thick Mylar film that is positioned between the two counter chambers and anodes has been designed, fabricated, and tested using a variety of fill gases including naturally abundant helium. In this device, neutron conversion products emitted from both sides of the coated converter foil are detected, rather than half of the products being absorbed in the wall of a conventional tube-type counter where the solid neutron converter is deposited on the tube wall. Geant4-based radiation transport calculations were used to determine the optimum neutron converter coating thickness for both isotopes. Solution methods for applying these optimized-thickness coatings on a Mylar film were developed that were carried out at room temperature without any specialized equipment and that can be adapted to standard coating methods such as silk screen or ink jet printing. The performance characteristics of the dual-chamber/dual-anode neutron detector were determined for both types of isotopically enriched converters. The experimental performance of the 6LiF converter-based detector was described well by modeling results from Geant4. Additional modeling studies of multiple-foil/multiple-chamber/anode configurations addressed the basic issue of the relatively longer absorption range of neutrons versus the shorter range of the conversion products for 6LiF and 10B. Combined with the experimental results, these simulations indicate that a high-performance neutron detector can be realized in a single device through the application of these multiple-foil/solid converter, multiple-chamber detector concepts.

  20. A high-throughput AO/PI-based cell concentration and viability detection method using the Celigo image cytometry.

    PubMed

    Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean

    2016-10-01

To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring multiple samples and conditions; thus the current automated cell counters that use single disposable counting slides are not practical for high-throughput screening assays. In recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing it directly to the Cellometer automated cell counter. Next, cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows 96-well to 384-well plate samples to be analyzed in less than 7 min, greatly reducing the time required compared with single-sample automated cell counters. In addition, this method can improve the efficiency of high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
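The AO/PI readout reduces to simple arithmetic: acridine orange (AO) stains all nucleated cells, while propidium iodide (PI) enters only cells with compromised membranes (dead). A minimal sketch of the concentration and viability calculation (names and units are illustrative, not the Celigo API):

```python
def concentration_and_viability(live_count, dead_count, volume_ul):
    """From AO/PI counts in an imaged volume (microliters), return
    (cells per mL, percent viability). Live = AO+/PI-, dead = PI+."""
    total = live_count + dead_count
    concentration = total / (volume_ul * 1e-3)   # 1 uL = 1e-3 mL
    viability = 100.0 * live_count / total
    return concentration, viability
```

Both numbers are needed before downstream assays so that wells can be seeded at equal live-cell densities.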

  1. RRAM-based hardware implementations of artificial neural networks: progress update and challenges ahead

    NASA Astrophysics Data System (ADS)

    Prezioso, M.; Merrikh-Bayat, F.; Chakrabarti, B.; Strukov, D.

    2016-02-01

Artificial neural networks have been receiving increasing attention due to their superior performance in many information processing tasks. Typically, scaling up the size of the network results in better performance and richer functionality. However, large neural networks are challenging to implement in software, and customized hardware is generally required for their practical implementation. In this work, we will discuss our group's recent efforts on the development of such custom hardware circuits, based on hybrid CMOS/memristor circuits, in particular of the CMOL variety. We will start by reviewing the basics of memristive devices and of CMOL circuits. We will then discuss our recent progress towards demonstration of hybrid circuits, focusing on the experimental and theoretical results for artificial neural networks based on crossbar-integrated metal oxide memristors. We will conclude the presentation with a discussion of the remaining challenges and the most pressing research needs.

  2. A SOPC-BASED Evaluation of AES for 2.4 GHz Wireless Network

    NASA Astrophysics Data System (ADS)

    Ken, Cai; Xiaoying, Liang

In modern systems, data security is needed more than ever before, and many cryptographic algorithms are utilized for security services. Wireless Sensor Networks (WSN) are an example of such technologies. In this paper an innovative SOPC-based approach for evaluating security services in WSN is proposed that addresses the issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of the encryption system. The design includes a Nios II processor together with custom designed modules for the Advanced Encryption Standard (AES), which has become the default choice for various security services in numerous applications. The objective of this mechanism is to present an efficient hardware realization of AES using the Verilog hardware description language (Verilog HDL) and to expand its usability for various applications. Compared to traditional custom processor designs, the mechanism provides a very broad range of cost/performance points.

  3. Ka-Band Wide-Bandgap Solid-State Power Amplifier: Hardware Validation

    NASA Technical Reports Server (NTRS)

    Epp, L.; Khan, P.; Silva, A.

    2005-01-01

    Motivated by recent advances in wide-bandgap (WBG) gallium nitride (GaN) semiconductor technology, there is considerable interest in developing efficient solid-state power amplifiers (SSPAs) as an alternative to the traveling-wave tube amplifier (TWTA) for space applications. This article documents proof-of-concept hardware used to validate power-combining technologies that may enable a 120-W, 40 percent power-added efficiency (PAE) SSPA. Results in previous articles [1-3] indicate that architectures based on at least three power combiner designs are likely to enable the target SSPA. Previous architecture performance analyses and estimates indicate that the proposed architectures can power combine 16 to 32 individual monolithic microwave integrated circuits (MMICs) with >80 percent combining efficiency. This combining efficiency would correspond to MMIC requirements of 5- to 10-W output power and >48 percent PAE. In order to validate the performance estimates of the three proposed architectures, measurements of proof-of-concept hardware are reported here.

  4. Hardware Design of the Energy Efficient Fall Detection Device

    NASA Astrophysics Data System (ADS)

    Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.

    2016-04-01

Health issues for elderly people may lead to different injuries obtained during simple activities of daily living. Potentially the most dangerous are unintentional falls, which may be critical or even lethal to some patients due to the heavy injury risk. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we have developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in the field, we have designed an accelerometer-based wireless fall detector. Our main considerations were: a) functionality, so that the algorithm can be applied to the chosen hardware, and b) power efficiency, so that it can run for a very long time. We have picked and tested the parts, built a prototype, optimised the firmware for lowest consumption, tested the performance, and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.

  5. perf-dump

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T.

    perf-dump is a library for dumping performance data in much the same way physics simulations dump checkpoints. It records per-process, per-timestep, per-phase, and per-thread performance counter data and dumps this large data periodically into an HDF5 data file.
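A minimal sketch of the perf-dump data model follows: counter values keyed per process, timestep, phase, and thread, flushed periodically like a simulation checkpoint. The real library writes HDF5 files; the in-memory stand-in below (class and method names are hypothetical) is purely illustrative:

```python
from collections import defaultdict

class CounterDump:
    """Illustrative stand-in for perf-dump-style collection: record
    hardware counter values per (rank, timestep, phase, thread), then
    periodically flush them -- the real tool appends to an HDF5 file."""

    def __init__(self):
        self.data = defaultdict(dict)

    def record(self, rank, timestep, phase, thread, counters):
        # counters: mapping like {"PAPI_TOT_CYC": ..., "PAPI_L2_DCM": ...}
        self.data[(rank, timestep)][(phase, thread)] = dict(counters)

    def dump(self):
        # Return and clear the accumulated data, checkpoint-style.
        snapshot = {k: dict(v) for k, v in self.data.items()}
        self.data.clear()
        return snapshot
```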

  6. Using FastX on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

with full 3D hardware acceleration. The traditional method of displaying graphics applications to a remote X server (indirect rendering) supports 3D hardware acceleration, but this approach causes all of the OpenGL commands and 3D data to be sent over the network to be rendered on the client machine.

  7. Detailed requirements document for the problem reporting data system (PDS). [space shuttle and batch processing

    NASA Technical Reports Server (NTRS)

    West, R. S.

    1975-01-01

The system is described as a computer-based system designed to track the status of problems and corrective actions pertinent to space shuttle hardware. The input, processing, output, and performance requirements of the system are presented along with standard display formats and examples. Operational requirements, hardware requirements, and test requirements are also included.

  8. Supercomputing with toys: harnessing the power of NVIDIA 8800GTX and playstation 3 for bioinformatics problem.

    PubMed

    Wilson, Justin; Dai, Manhong; Jakupovic, Elvis; Watson, Stanley; Meng, Fan

    2007-01-01

    Modern video cards and game consoles typically have much better performance to price ratios than that of general purpose CPUs. The parallel processing capabilities of game hardware are well-suited for high throughput biomedical data analysis. Our initial results suggest that game hardware is a cost-effective platform for some computationally demanding bioinformatics problems.

  9. Testing Microshutter Arrays Using Commercial FPGA Hardware

    NASA Technical Reports Server (NTRS)

    Rapchun, David

    2008-01-01

NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to do Multi Object Spectroscopy, a key part of the mission. Each array consists of 62414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to providing these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Due to JWST's rapid schedule on the development of the micro-shutters, rapid changes were required to the FPGA code to accommodate newly discovered approaches for optimizing array performance. Such changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that can be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted and brought success. The software/hardware allowed rapid changes to the FPGA code and timely delivery of new micro-shutter array performance data. As a result, the project saved numerous labor hours and money.
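The row/column addressing scheme can be illustrated with a toy model: all shutters are first latched open, and a shutter changes state only where an asserted row line and an asserted column line intersect. The class below is a simplified illustration of that addressing idea, not the actual FPGA logic or electrostatic physics:

```python
class ShutterArray:
    """Toy model of row/column addressing for a micro-shutter array.
    All shutters start open (magnetically latched, held electrostatically);
    asserting one row line and one column line together closes exactly
    the shutter at their intersection."""

    def __init__(self, rows, cols):
        self.open = [[True] * cols for _ in range(rows)]

    def close(self, row, col):
        # Only the intersection of the selected row and column switches.
        self.open[row][col] = False

    def count_open(self):
        return sum(sum(r) for r in self.open)
```

With N row lines and M column lines, any of the N x M shutters can be targeted individually, which is what makes addressing 62414 shutters tractable.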

  10. Precision Cleaning and Verification Processes Used at Marshall Space Flight Center for Critical Hardware Applications

    NASA Technical Reports Server (NTRS)

    Caruso, Salvadore V.; Cox, Jack A.; McGee, Kathleen A.

    1998-01-01

    Marshall Space Flight Center (MSFC) of the National Aeronautics and Space Administration performs many research and development programs that require hardware and assemblies to be cleaned to levels that are compatible with fuels and oxidizers (liquid oxygen, solid propellants, etc.). Also, MSFC is responsible for developing large telescope satellites which require a variety of optical systems to be cleaned. A precision cleaning shop is operated within MSFC by the Fabrication Services Division of the Materials & Processes Laboratory. Verification of cleanliness is performed for all precision cleaned articles in the Environmental and Analytical Chemistry Branch. Since the Montreal Protocol was instituted, MSFC had to find substitutes for many materials that have been in use for many years, including cleaning agents and organic solvents. As MSFC is a research center, there is a great variety of hardware that is processed in the Precision Cleaning Shop. This entails the use of many different chemicals and solvents, depending on the nature and configuration of the hardware and softgoods being cleaned. A review of the manufacturing cleaning and verification processes, cleaning materials and solvents used at MSFC and changes that resulted from the Montreal Protocol will be presented.

  11. Precision Cleaning and Verification Processes Used at Marshall Space Flight Center for Critical Hardware Applications

    NASA Technical Reports Server (NTRS)

    Caruso, Salvadore V.

    1999-01-01

Marshall Space Flight Center (MSFC) of the National Aeronautics and Space Administration (NASA) performs many research and development programs that require hardware and assemblies to be cleaned to levels that are compatible with fuels and oxidizers (liquid oxygen, solid propellants, etc.). Also, the Center is responsible for developing large telescope satellites, which require a variety of optical systems to be cleaned. A precision cleaning shop is operated within MSFC by the Fabrication Services Division of the Materials & Processes Division. Verification of cleanliness is performed for all precision cleaned articles in the Analytical Chemistry Branch. Since the Montreal Protocol was instituted, MSFC has had to find substitutes for many materials that have been in use for many years, including cleaning agents and organic solvents. As MSFC is a research center, there is a great variety of hardware that is processed in the Precision Cleaning Shop. This entails the use of many different chemicals and solvents, depending on the nature and configuration of the hardware and softgoods being cleaned. A review of the manufacturing cleaning and verification processes, cleaning materials and solvents used at MSFC, and changes that resulted from the Montreal Protocol will be presented.

  12. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
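    The thread-level parallelization described above hinges on the observation that stencil updates over independent cells can be partitioned among threads without changing the result. A minimal sketch of that idea, with a toy 1-D three-point smoothing kernel standing in for a ChomboFortran loop nest (Python threads are illustrative only; the actual work uses OpenMP in generated Fortran):

    ```python
    import threading

    def smooth_serial(u):
        # Jacobi-style 3-point average over interior cells (toy finite-difference kernel)
        return [u[0]] + [(u[i-1] + u[i] + u[i+1]) / 3.0 for i in range(1, len(u) - 1)] + [u[-1]]

    def smooth_threaded(u, nthreads=4):
        # Partition the interior index range among threads; each thread reads the
        # original array u and writes disjoint slices of out, so no race occurs.
        out = list(u)
        def work(lo, hi):
            for i in range(max(lo, 1), min(hi, len(u) - 1)):
                out[i] = (u[i-1] + u[i] + u[i+1]) / 3.0
        chunk = (len(u) + nthreads - 1) // nthreads
        threads = [threading.Thread(target=work, args=(t * chunk, (t + 1) * chunk))
                   for t in range(nthreads)]
        for t in threads: t.start()
        for t in threads: t.join()
        return out
    ```

    The threaded version produces bit-identical output to the serial one because each index is written by exactly one thread, which is the property the automatic migration relies on.
    
    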

  13. Magnetic Gimbal Proof-of-Concept Hardware performance results

    NASA Technical Reports Server (NTRS)

    Stuart, Keith O.

    1993-01-01

    The Magnetic Gimbal Proof-of-Concept Hardware activities, accomplishments, and test results are discussed. The Magnetic Gimbal Fabrication and Test (MGFT) program addressed the feasibility of using a magnetic gimbal to isolate an Electro-Optical (EO) sensor from the severe angular vibrations induced during the firing of divert and attitude control system (ACS) thrusters during space flight. The MGFT effort was performed in parallel with the fabrication and testing of a mechanically gimballed, flex pivot based isolation system by the Hughes Aircraft Missile Systems Group. Both servo systems supported identical EO sensor assembly mockups to facilitate direct comparison of performance. The results obtained from the MGFT effort indicate that the magnetic gimbal can provide significant performance advantages over alternative mechanically gimballed techniques.

  14. Magnetic Gimbal Proof-of-Concept Hardware performance results

    NASA Astrophysics Data System (ADS)

    Stuart, Keith O.

    The Magnetic Gimbal Proof-of-Concept Hardware activities, accomplishments, and test results are discussed. The Magnetic Gimbal Fabrication and Test (MGFT) program addressed the feasibility of using a magnetic gimbal to isolate an Electro-Optical (EO) sensor from the severe angular vibrations induced during the firing of divert and attitude control system (ACS) thrusters during space flight. The MGFT effort was performed in parallel with the fabrication and testing of a mechanically gimballed, flex pivot based isolation system by the Hughes Aircraft Missile Systems Group. Both servo systems supported identical EO sensor assembly mockups to facilitate direct comparison of performance. The results obtained from the MGFT effort indicate that the magnetic gimbal can provide significant performance advantages over alternative mechanically gimballed techniques.

  15. Motion of water droplets in the counter flow of high-temperature combustion products

    NASA Astrophysics Data System (ADS)

    Volkov, R. S.; Strizhak, P. A.

    2018-01-01

    This paper presents experimental studies of the deceleration, reversal, and entrainment of water droplets sprayed in counter-current flow to a rising stream of high-temperature (1100 K) combustion gases. The initial droplet velocities (0.5-2.5 m/s), radii (10-230 μm), and relative volume concentrations (0.2·10⁻⁴-1.8·10⁻⁴ (m³ of water)/(m³ of gas)) vary in the ranges corresponding to promising high-temperature (over 1000 K) gas-vapor-droplet applications (for example, polydisperse fire extinguishing using water mist, fog, or appropriate water vapor-droplet veils; thermal or flame treatment of liquids in the flow of combustion products or high-temperature air; creating coolants based on flue gas, vapor, and water droplets; unfreezing of granular media and processing of the drossed surfaces of thermal-power equipment; ignition of liquid and slurry fuel droplets). A hardware-software cross-correlation complex, high-speed (up to 10⁵ fps) video recording tools, panoramic optical techniques (Particle Image Velocimetry, Particle Tracking Velocimetry, Interferometric Particle Imaging, Shadow Photography), and the Tema Automotive software with the function of continuous monitoring have been applied to examine the characteristics of the processes under study. The scale of the influence of the initial droplet concentration in the gas flow on the conditions and features of droplet entrainment by high-temperature gases has been specified. The dependencies Re_d = f(Re_g) and Re_d' = f(Re_g) have been obtained to predict the characteristics of the deceleration of droplets by gases at different droplet concentrations.
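    The droplet Reynolds number Re_d that the correlations above are built on is the standard ratio of inertial to viscous forces for a sphere in a gas stream. A minimal sketch of the calculation for one droplet in the stated parameter range, using assumed (illustrative, not source-quoted) gas properties at ~1100 K:

    ```python
    def reynolds(density, velocity, diameter, viscosity):
        """Reynolds number Re = rho * |u| * d / mu for a sphere in a gas stream."""
        return density * abs(velocity) * diameter / viscosity

    # Illustrative combustion-gas properties near 1100 K (assumed values, not from the paper)
    rho_g = 0.32      # gas density, kg/m^3
    mu_g = 4.4e-5     # dynamic viscosity, Pa*s
    d = 2 * 100e-6    # droplet diameter for a 100 um radius, m
    u_rel = 2.0       # relative droplet-gas speed within the stated 0.5-2.5 m/s range

    Re_d = reynolds(rho_g, u_rel, d, mu_g)   # order of a few: creeping-to-intermediate flow
    ```

    Values of Re_d near unity place these droplets in the regime where drag (and hence deceleration and reversal) is sensitive to both velocity and size, which is why the paper reports Re_d as a function of the gas Reynolds number Re_g.
    
    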

  16. Aqueye+: a new ultrafast single photon counter for optical high time resolution astrophysics

    NASA Astrophysics Data System (ADS)

    Zampieri, L.; Naletto, G.; Barbieri, C.; Verroi, E.; Barbieri, M.; Ceribella, G.; D'Alessandro, M.; Farisato, G.; Di Paola, A.; Zoccarato, P.

    2015-05-01

    Aqueye+ is a new ultrafast optical single photon counter, based on single photon avalanche photodiodes (SPADs) and a 4-fold split-pupil concept. It is a completely revisited version of its predecessor, Aqueye, successfully mounted at the 182 cm Copernicus telescope in Asiago. Here we present the new technological features implemented in Aqueye+, namely a state-of-the-art timing system, a dedicated and optimized optical train, a high-sensitivity and high-frame-rate field camera, and remote control, which give Aqueye+ far superior performance with respect to its predecessor, unparalleled by any other existing fast photometer. The instrument will also host an optical vorticity module to achieve high-performance astronomical coronagraphy and a real-time atmospheric seeing acquisition unit. The present paper describes the instrument and its first performance results.

  17. International Space Station Sustaining Engineering: A Ground-Based Test Bed for Evaluating Integrated Environmental Control and Life Support System and Internal Thermal Control System Flight Performance

    NASA Technical Reports Server (NTRS)

    Ray, Charles D.; Perry, Jay L.; Callahan, David M.

    2000-01-01

    As the International Space Station's (ISS) various habitable modules are placed in service on orbit, the need to provide for sustaining engineering becomes increasingly important to ensure the proper function of critical onboard systems. Chief among these are the Environmental Control and Life Support System (ECLSS) and the Internal Thermal Control System (ITCS). Without either, life onboard the ISS would prove difficult or nearly impossible. For this reason, a ground-based ECLSS/ITCS hardware performance simulation capability has been developed at NASA's Marshall Space Flight Center. The ECLSS/ITCS Sustaining Engineering Test Bed will be used to assist the ISS Program in resolving hardware anomalies and performing periodic performance assessments. The ISS flight configuration being simulated by the test bed is described as well as ongoing activities related to its preparation for supporting ISS Mission 5A. Growth options for the test facility are presented whereby the current facility may be upgraded to enhance its capability for supporting future station operation well beyond Mission 5A. Test bed capabilities for demonstrating technology improvements of ECLSS hardware are also described.

  18. Pre-Flight Tests with Astronauts, Flight and Ground Hardware, to Assure On-Orbit Success

    NASA Technical Reports Server (NTRS)

    Haddad, Michael E.

    2010-01-01

    On-Orbit Constraints Tests (OOCTs) refer to mating flight hardware together on the ground before the components are mated on-orbit or on the lunar surface. The concept seems simple, but it can be difficult to perform operations like this on the ground when the flight hardware is designed to be mated in the zero-g/vacuum environment of space or the low-g/vacuum environment of the lunar/Mars surface. Also, some of the items are manufactured years apart, which raises the question of how mating tasks are performed on these components if one piece is on-orbit or on the lunar/Mars surface before its mating piece is built. Both the Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) OOCTs performed at Kennedy Space Center will be presented in this paper. Details include how OOCTs should mimic on-orbit/lunar/Mars surface operational scenarios; a series of photographs taken during OOCTs performed on International Space Station (ISS) flight elements; and lessons learned as a result of the OOCTs. The paper concludes with possible applications to Moon and Mars surface operations planned for the Constellation Program.

  19. EVA Development and Verification Testing at NASA's Neutral Buoyancy Laboratory

    NASA Technical Reports Server (NTRS)

    Jairala, Juniper C.; Durkin, Robert; Marak, Ralph J.; Sipila, Stephanie A.; Ney, Zane A.; Parazynski, Scott E.; Thomason, Arthur H.

    2012-01-01

    As an early step in the preparation for future Extravehicular Activities (EVAs), astronauts perform neutral buoyancy testing to develop and verify EVA hardware and operations. Neutral buoyancy demonstrations at NASA Johnson Space Center's Sonny Carter Training Facility to date have primarily evaluated assembly and maintenance tasks associated with several elements of the International Space Station (ISS). With the retirement of the Shuttle, completion of ISS assembly, and introduction of commercial players for human transportation to space, evaluations at the Neutral Buoyancy Laboratory (NBL) will take on a new focus. Test objectives are selected for their criticality, lack of previous testing, or design changes that justify retesting. Assembly tasks investigated are performed using procedures developed by the flight hardware providers and the Mission Operations Directorate (MOD). Orbital Replacement Unit (ORU) maintenance tasks are performed using a more systematic set of procedures, EVA Concept of Operations for the International Space Station (JSC-33408), also developed by the MOD. This paper describes the requirements and process for performing a neutral buoyancy test, including typical hardware and support equipment requirements, personnel and administrative resource requirements, examples of ISS systems and operations that are evaluated, and typical operational objectives that are evaluated.

  20. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
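    The quantization effect noted above, where feedback measurements digitized by the smart transducers leave a small steady-state tracking offset, can be sketched with a toy proportional control loop. All parameters here (gain, setpoint, quantizer steps) are hypothetical, chosen only to make the offset visible:

    ```python
    def quantize(x, lsb):
        # Mid-tread uniform quantizer: rounds to the nearest multiple of lsb
        return lsb * round(x / lsb)

    def track(setpoint, lsb, kp=0.5, steps=200):
        """Toy first-order plant driven by a proportional controller whose
        feedback measurement passes through a quantizer of step size lsb."""
        y = 0.0
        for _ in range(steps):
            y += kp * (setpoint - quantize(y, lsb))   # control acts on quantized feedback
        return y

    y_fine = track(10.0, 1e-6)    # near-continuous feedback: converges to the setpoint
    y_coarse = track(10.0, 0.25)  # coarse feedback: settles anywhere the quantizer reads 10
    ```

    With the coarse quantizer the loop stops correcting as soon as the quantized measurement equals the setpoint, so the final error is bounded by the quantizer step rather than driven to zero, which is the behavior the distributed-controller comparison exhibits.
    
    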

  1. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  2. A Framework for Assessing the Reusability of Hardware (Reusable Rocket Engines)

    NASA Technical Reports Server (NTRS)

    Childress-Thompson, Rhonda; Farrington, Philip; Thomas, Dale

    2016-01-01

    Within the space flight community, reusability has taken center stage as the new buzzword. In order for reusable hardware to be competitive with its expendable counterpart, two major elements must be closely scrutinized. First, recovery and refurbishment costs must be lower than the development and acquisition costs. Additionally, the reliability for reused hardware must remain the same (or nearly the same) as "first use" hardware. Therefore, it is imperative that a systematic approach be established to enhance the development of reusable systems. However, before the decision can be made on whether it is more beneficial to reuse hardware or to replace it, the parameters that are needed to deem hardware worthy of reuse must be identified. For reusable hardware to be successful, the factors that must be considered are reliability (integrity, life, number of uses), operability (maintenance, accessibility), and cost (procurement, retrieval, refurbishment). These three factors are essential to the successful implementation of reusability while enabling the ability to meet performance goals. Past and present strategies and attempts at reuse within the space industry will be examined to identify important attributes of reusability that can be used to evaluate hardware when contemplating reusable versus expendable options. This paper will examine why reuse must be stated as an initial requirement rather than included as an afterthought in the final design. Late in the process, changes in the overall objective/purpose of components typically have adverse effects that potentially negate the benefits. A methodology for assessing the viability of reusing hardware will be presented by using the Space Shuttle Main Engine (SSME) to validate the approach. Because reliability, operability, and costs are key drivers in making this critical decision, they will be used to assess requirements for reuse as applied to components of the SSME.

  3. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia Mae; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (40,000 pound force thrust) (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink (R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  4. Software for Managing Inventory of Flight Hardware

    NASA Technical Reports Server (NTRS)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.

  5. First results of a simultaneous measurement of tritium and 14C in an ultra-low-background proportional counter for environmental sources of methane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, Emily K.; Aalseth, Craig E.; Day, Anthony R.

    Simultaneous measurement of tritium and 14C would provide an added tool for tracing organic compounds through environmental systems and is possible via beta energy spectroscopy of sample-derived methane in internal-source gas proportional counters. Since the mid-1960s, atmospheric tritium and 14C have fallen dramatically as the isotopic injections from above-ground nuclear testing have been diluted into the ocean and biosphere. In this work, the feasibility of simultaneous tritium and 14C measurements via proportional counters is revisited in light of significant changes in both the atmospheric and biosphere isotopics and the development of new ultra-low-background gas proportional counting capabilities for small samples (roughly 50 cc methane). A Geant4 Monte Carlo model of a Pacific Northwest National Laboratory (PNNL) proportional counter response to tritium and 14C is used to analyze small samples of two different methane sources to illustrate the range of applicability of contemporary simultaneous measurements and their limitations. Because the two methane sources examined were not sample-size limited, we could compare the small-sample measurements performed at PNNL with analysis of larger samples performed at a commercial laboratory. The dual-isotope simultaneous measurement is well matched for methane samples that are atmospheric or have an elevated source of tritium (e.g., landfill gas). For samples with low/modern tritium isotopics (rainwater), commercial separation and counting is a better fit.

  6. Noise and LPI radar as part of counter-drone mitigation system measures

    NASA Astrophysics Data System (ADS)

    Zhang, Yan (Rockee); Huang, Yih-Ru; Thumann, Charles

    2017-05-01

    With the rapid proliferation of small unmanned aerial systems (UAS) in the national airspace, small operational drones are sometimes considered a security threat for critical infrastructures, such as sports stadiums, military facilities, and airports. Many civilian counter-drone solutions and products have been reported, including radar and electromagnetic countermeasures. Current electromagnetic solutions are usually limited to a particular detection and countermeasure scheme, which is typically effective only against specific types of drones. Also, the control and communication link technologies now used even in RC drones are more sophisticated, making them more difficult to detect, decode, and counter. Facing these challenges, our team proposes a "software-defined" solution based on noise and LPI radar. For detection, wideband-noise radar has the resolution performance to discriminate possible micro-Doppler features of a drone from those of biological scatterers. It also has the benefits of being more adaptable to different types of drones and of detecting covertly for security applications. For countermeasures, random noise can be combined with a "random sweeping" jamming scheme to achieve the optimal balance between the allowed peak power and the effective jamming probability. Theoretical analysis of the proposed solution is provided in this study, a design case study is developed, and initial laboratory experiments as well as outdoor tests are conducted to validate the basic concepts and theories. The study demonstrates the basic feasibility of the Drone Detection and Mitigation Radar (DDMR) concept, while much work remains to be done for a complete and field-worthy technology development.

  7. A Software Defined Radio Based Airplane Communication Navigation Simulation System

    NASA Astrophysics Data System (ADS)

    He, L.; Zhong, H. T.; Song, D.

    2018-01-01

    Radio communication and navigation systems play an important role in ensuring the safety of civil airplanes in flight. Function and performance should be tested before these systems are installed on-board. Conventionally, a separate set of transmitter and receiver is needed for each system, so the equipment occupies a lot of space and is costly. In this paper, software defined radio technology is applied to design a common-hardware communication and navigation ground simulation system, which can host multiple airplane systems with different operating frequencies, such as HF, VHF, VOR, ILS, ADF, etc. We use a broadband analog front-end hardware platform, the universal software radio peripheral (USRP), to transmit/receive signals of different frequency bands. The software is developed in LabVIEW on a computer, which interfaces with the USRP through Ethernet and is responsible for communication and navigation signal processing and system control. An integrated testing system is established to perform functional tests and performance verification of the simulation signal, which demonstrates the feasibility of our design. The system is a low-cost, common hardware platform for multiple airplane systems, and it provides a helpful reference for integrated avionics design.

  8. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
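    The time-multiplexing approach described above, where 64 virtual neurons share a single physical neuron circuit, can be sketched in software: one update routine is applied round-robin to an array of per-neuron state. The leaky integrate-and-fire dynamics and all parameters below are illustrative assumptions, not the NEF hardware's actual neuron model:

    ```python
    def lif_step(v, i_in, dt=1e-3, tau=0.02, v_th=1.0):
        """One leaky integrate-and-fire update; returns (new membrane voltage, spiked?)."""
        v = v + dt * (i_in - v) / tau
        if v >= v_th:
            return 0.0, True    # spike and reset
        return v, False

    N = 64
    state = [0.0] * N           # one stored state word per virtual neuron

    def core_tick(currents):
        # One physical "neuron circuit" (lif_step) serves all 64 virtual
        # neurons in turn, mimicking the time-multiplexed digital neural core.
        spikes = []
        for n in range(N):
            state[n], fired = lif_step(state[n], currents[n])
            if fired:
                spikes.append(n)
        return spikes
    ```

    In the FPGA the same trade is made in hardware: one set of arithmetic units plus a small state memory replaces 64 physical neurons, which is what lets identical cores be tiled to reach large network sizes.
    
    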

  9. Analysis of performance improvements for host and GPU interface of the APENet+ 3D Torus network

    NASA Astrophysics Data System (ADS)

    Ammendola A, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Paolucci, P. S.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2014-06-01

    APEnet+ is an INFN (Italian Institute for Nuclear Physics) project aiming to develop a custom 3-dimensional torus interconnect network optimized for hybrid CPU-GPU clusters dedicated to high-performance scientific computing. The APEnet+ interconnect fabric is built on an FPGA-based PCI-Express board with 6 bi-directional off-board links offering 34 Gbps of raw bandwidth per direction, and leverages the peer-to-peer capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain real zero-copy, GPU-to-GPU low-latency transfers. The minimization of APEnet+ transfer latency is achieved through the adoption of an RDMA protocol implemented in FPGA with specialized hardware blocks tightly coupled with an embedded microprocessor. This architecture provides a high-performance, low-latency offload engine for both the transmit and receive sides of data transactions: preliminary results are encouraging, showing a 50% bandwidth increase for large packet-size transfers. In this paper we describe the APEnet+ architecture, detailing the hardware implementation, and discuss the impact of such RDMA-specialized hardware on host interface latency and bandwidth.

  10. Performance monitoring can boost turboexpander efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIntire, R.

    1982-07-05

    This paper discusses ways of improving the productivity of the turboexpander/refrigeration system's radial expander and radial compressor through systematic review of component performance. It reviews several techniques to determine the performance of an expander and compressor. It suggests that any performance improvement program requires quantifying the performance of separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curves of individual components. The model is used to quantify the economic benefits of any change in the system, either a change in operating procedures or a hardware modification. Topics include proper ways of using antisurge control valves and modifying flow rate/shaft speed (Q/N). It is noted that compressor efficiency depends on the incidence angle of the blade at the rotor leading edge and the angle of the incoming gas stream.
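    Building a system model from component performance curves, as the abstract describes, reduces in its simplest form to interpolating each component's measured curve at the current operating point. A minimal sketch with a hypothetical expander efficiency curve over Q/N (all numbers are invented placeholders, not data from the paper):

    ```python
    def interp_curve(points, x):
        """Piecewise-linear interpolation on a sorted list of (x, y) curve points,
        clamped to the end values outside the measured range."""
        xs, ys = zip(*points)
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if x <= xs[i]:
                f = (x - xs[i-1]) / (xs[i] - xs[i-1])
                return ys[i-1] + f * (ys[i] - ys[i-1])

    # Hypothetical expander efficiency vs. normalized Q/N (flow rate over shaft
    # speed), peaking at the design point -- illustrative numbers only
    eff_curve = [(0.6, 0.70), (0.8, 0.80), (1.0, 0.85), (1.2, 0.82), (1.4, 0.75)]
    eta = interp_curve(eff_curve, 0.9)   # efficiency at an off-design operating point
    ```

    A system model chains such lookups for every component, which is what makes it possible to price a change in operating procedure (a shift along the curve) against a hardware modification (a new curve).
    
    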

  11. Component-Level Electronic-Assembly Repair (CLEAR) Synthetic Instrument Capabilities Assessment and Test Report

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Bradish, Martin A.

    2011-01-01

    The role of synthetic instruments (SIs) for Component-Level Electronic-Assembly Repair (CLEAR) is to provide an external lower-level diagnostic and functional test capability beyond the built-in-test capabilities of spacecraft electronics. Built-in diagnostics can report faults and symptoms, but isolating the root cause and performing corrective action requires specialized instruments. Often a fault can be revealed by emulating the operation of external hardware. This implies complex hardware that is too massive to be accommodated in spacecraft. The SI strategy is aimed at minimizing complexity and mass by employing highly reconfigurable instruments that perform diagnostics and emulate external functions. In effect, SI can synthesize an instrument on demand. The SI architecture section of this document summarizes the result of a recent program diagnostic and test needs assessment based on the International Space Station. The SI architecture addresses operational issues such as minimizing crew time and crew skill level, and the SI data transactions between the crew and supporting ground engineering searching for the root cause and formulating corrective actions. SI technology is described within a teleoperations framework. The remaining sections describe a lab demonstration intended to show that a single SI circuit could synthesize an instrument in hardware and subsequently clear the hardware and synthesize a completely different instrument on demand. An analysis of the capabilities and limitations of commercially available SI hardware and programming tools is included. Future work in SI technology is also described.

  12. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing accommodates frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining attention. This approach performs query processing using sparse matrix-vector multiplication and, through parallelization, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target a Xilinx Virtex-II Field Programmable Gate Array (FPGA) board. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within each processing unit.
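    The query-processing scheme the abstract describes can be sketched in software: a term-document matrix stored in CSR form (one row per document, one column per term) is multiplied by a sparse query vector to score documents. This is a minimal illustrative sketch; the matrix contents and all names are hypothetical, not taken from the paper, and the actual design is a parallel FPGA implementation rather than a sequential loop.

    ```python
    # Query processing as sparse matrix-vector multiplication over a
    # Compressed Sparse Row (CSR) index. A[d][t] holds the weight of
    # term t in document d; only nonzero entries are stored.

    def csr_matvec(values, col_idx, row_ptr, x):
        """Multiply a CSR matrix by a dense vector x, row by row."""
        n_rows = len(row_ptr) - 1
        y = [0.0] * n_rows
        for row in range(n_rows):
            # row_ptr brackets the nonzeros belonging to this row
            for k in range(row_ptr[row], row_ptr[row + 1]):
                y[row] += values[k] * x[col_idx[k]]
        return y

    # Tiny hypothetical term-document matrix (3 documents, 4 terms):
    #   doc0: term0=1, term2=2;  doc1: term1=3;  doc2: term0=4, term3=5
    values  = [1.0, 2.0, 3.0, 4.0, 5.0]
    col_idx = [0,   2,   1,   0,   3]
    row_ptr = [0, 2, 3, 5]

    # A query is a vector over terms; this one asks for term0 and term2.
    query = [1.0, 0.0, 1.0, 0.0]
    scores = csr_matvec(values, col_idx, row_ptr, query)
    # scores -> [3.0, 0.0, 4.0]: per-document relevance, ready for ranking
    ```

    On an FPGA, the inner accumulation loop is the part that is unrolled and replicated across processing units, which is where the parallel speedup over the sequential inverted index comes from.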

  13. Reconfigurable HIL Testing of Earth Satellites

    NASA Technical Reports Server (NTRS)

    2008-01-01

    In recent years, hardware-in-the-loop (HIL) testing has carved a strong niche in several industries, such as automotive, aerospace, telecom, and consumer electronics. As desktop computers have realized gains in speed, memory size, and data storage capacity, hardware/software platforms have evolved into high-performance, deterministic HIL platforms, capable of hosting the most demanding applications for testing components and subsystems. Using simulation software to emulate the digital and analog I/O signals of system components, engineers of all disciplines can now test new systems in realistic environments to evaluate their function and performance prior to field deployment. Within the aerospace industry, space-borne satellite systems are arguably some of the most demanding in terms of their requirement for custom engineering and testing. Typically, spacecraft are built one or a few at a time to fulfill a space science or defense mission. In contrast to other industries that can amortize the cost of HIL systems over thousands, even millions of units, spacecraft HIL systems have been built as one-of-a-kind solutions, expensive in terms of schedule, cost, and risk, to assure satellite and spacecraft systems reliability. The focus of this paper is to present a new approach to HIL testing for spacecraft systems that takes advantage of a highly flexible hardware/software architecture based on National Instruments PXI reconfigurable hardware and virtual instruments developed using LabVIEW. This new approach to HIL is based on a multistage/multimode spacecraft bus emulation development model called Reconfigurable Hardware In-the-Loop or RHIL.

  14. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating-point arithmetic and are representative of the calculations performed in many scientific disciplines.

  15. Computational System For Rapid CFD Analysis In Engineering

    NASA Technical Reports Server (NTRS)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  16. Hardware and software for automating the process of studying high-speed gas flows in wind tunnels of short-term action

    NASA Astrophysics Data System (ADS)

    Yakovlev, V. V.; Shakirov, S. R.; Gilyov, V. M.; Shpak, S. I.

    2017-10-01

    In this paper, we propose a variant of constructing automation systems for aerodynamic experiments on the basis of modern, domestically developed hardware and software. The structure of a universal control and data collection system for performing experiments in wind tunnels of continuous, periodic, or short-term action is proposed. The hardware and software development tools proposed by ICT SB RAS and ITAM SB RAS, as well as subsystems based on them, can be widely applied to any scientific or experimental installation, as well as to the automation of production processes.

  17. Microgravity Materials and Biotechnology Experiments

    NASA Technical Reports Server (NTRS)

    Vlasse, Marcus

    1998-01-01

    Presentation will deal with an overview of the Materials Science and Biotechnology/Crystal Growth flight experiments and their requirements for a successful execution. It will also deal with the hardware necessary to perform these experiments as well as the hardware requirements. This information will serve as a basis for the workshop participants to review the possibilities for a low-cost unmanned carrier and the simple automation needed to carry out experiments in a microgravity environment with little intervention from the ground. The discussion will include what we have now and what will be needed to fully automate the hardware and experiment protocol at relatively low cost.

  18. Space and power efficient hybrid counters array

    DOEpatents

    Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY

    2009-05-12

    A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower-order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing the higher-order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates an update of the corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, a combination of the first and second count values provides an instantaneous measure of the number of events received.
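    The split described above (fast narrow counters for the low-order bits, a memory array for the high-order bits) can be modeled in software. This is a behavioral sketch only: the counter width, class name, and overflow policy are assumptions for illustration, not details from the patent, which describes a hardware control device rather than code.

    ```python
    # Behavioral model of a hybrid counter array: N small counters hold
    # the low-order bits; a backing memory array holds the high-order
    # bits; a control step folds a low counter into memory on overflow.

    LOW_BITS = 4                   # width of each fast counter (assumed)
    LOW_MAX = 1 << LOW_BITS

    class HybridCounterArray:
        def __init__(self, n):
            self.low = [0] * n     # first counter portion (low-order bits)
            self.high = [0] * n    # second portion: memory array (high-order bits)

        def count_event(self, i):
            """Record one event on counter i."""
            self.low[i] += 1
            if self.low[i] == LOW_MAX:   # control device detects overflow
                self.high[i] += 1        # spill into memory-backed high bits
                self.low[i] = 0

        def read(self, i):
            """Combine both portions for an instantaneous total."""
            return (self.high[i] << LOW_BITS) | self.low[i]

    counters = HybridCounterArray(4)
    for _ in range(37):
        counters.count_event(0)
    # counters.read(0) -> 37 (high word 2, low word 5 with 4 low bits)
    ```

    The space and power savings come from this division of labor: only the narrow low-order counters toggle on every event, while the wide high-order state sits in dense memory and is touched only on the rare overflow.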

  19. Space and power efficient hybrid counters array

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-03-30

    A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower-order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing the higher-order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates an update of the corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, a combination of the first and second count values provides an instantaneous measure of the number of events received.

  20. Ku-band antenna acquisition and tracking performance study, volume 4

    NASA Technical Reports Server (NTRS)

    Huang, T. C.; Lindsey, W. C.

    1977-01-01

    The results pertaining to the tradeoff analysis and performance of the Ku-band shuttle antenna pointing and signal acquisition system are presented. The square, hexagonal, and spiral antenna trajectories were investigated assuming the TDRS postulated uncertainty region and a flexible statistical model for the location of the TDRS within the uncertainty volume. The scanning trajectories, shuttle/TDRS signal parameters and dynamics, and three signal acquisition algorithms were integrated into a hardware simulation. The hardware simulation is quite flexible in that it allows for the evaluation of signal acquisition performance for an arbitrary (programmable) antenna pattern, a large range of C/N(sub 0) values, various TDRS/shuttle a priori uncertainty distributions, and three distinct signal search algorithms.
