Sample records for ultra-reliable computer systems

  1. Fly-by-Wire Systems Enable Safer, More Efficient Flight

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Using the ultra-reliable Apollo Guidance Computer that enabled the Apollo Moon missions, Dryden Flight Research Center engineers, in partnership with industry leaders such as Cambridge, Massachusetts-based Draper Laboratory, demonstrated that digital computers could be used to fly aircraft. Digital fly-by-wire systems have since been incorporated into large airliners, military jets, revolutionary new aircraft, and even cars and submarines.

  2. Fault-tolerant building-block computer study

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

Ultra-reliable core computers are required for improving the reliability of complex military systems. Such computers can provide reliable fault diagnosis and failure circumvention and, in some cases, serve as an automated repairman for their host systems. A small set of building-block circuits is described, each of which can be implemented as a single very large scale integration (VLSI) device and used with off-the-shelf microprocessors and memories to build self-checking computer modules (SCCMs). Each SCCM is a microcomputer that is capable of detecting its own faults during normal operation and is designed to communicate with other identical modules over one or more MIL-STD-1553A buses. Several SCCMs can be connected into a network with backup spares to provide fault-tolerant operation, i.e., automated recovery from faults. Alternative fault-tolerant SCCM configurations are discussed along with the cost and reliability associated with their implementation.
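
    The core SCCM idea of detecting a module's own faults by running duplicated computation and comparing the results, then falling back to a backup spare on a mismatch, can be illustrated with a minimal sketch (the functions and module interface below are hypothetical illustrations, not the paper's hardware design):

    ```python
    # Minimal sketch of duplicate-and-compare self-checking with spare failover.
    # Hypothetical illustration only; not the SCCM building-block hardware.

    def self_checking_run(primary, shadow, inputs):
        """Run the same computation on two copies; any mismatch signals a fault."""
        a = primary(inputs)
        b = shadow(inputs)
        return a, (a != b)  # (result, fault_detected)

    def network_run(modules, spares, inputs):
        """Replace any module that detects its own fault with a backup spare."""
        results = []
        for primary, shadow in modules:
            result, faulty = self_checking_run(primary, shadow, inputs)
            if faulty and spares:
                primary, shadow = spares.pop()           # automated recovery
                result, faulty = self_checking_run(primary, shadow, inputs)
            results.append(None if faulty else result)
        return results
    ```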

  3. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
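
    The spares-mass argument can be made concrete with a small, hedged calculation: provision enough spares of each component that the probability of running out during the mission is acceptably small, then sum the spare masses. The component list, failure rates, and sufficiency target below are hypothetical assumptions, not values from the paper:

    ```python
    # Minimal sketch of spares provisioning for a long mission (illustrative only).
    import math

    def spares_needed(failure_rate_per_hr, mission_hr, p_sufficient=0.999):
        """Smallest spare count k so that P(failures <= k) >= p_sufficient (Poisson)."""
        mean = failure_rate_per_hr * mission_hr
        k = 0
        term = math.exp(-mean)   # P(exactly 0 failures)
        cdf = term
        while cdf < p_sufficient:
            k += 1
            term *= mean / k
            cdf += term
        return k

    # Hypothetical component list: (name, failure rate per hour, unit mass in kg)
    components = [("pump", 2e-5, 5.0), ("valve", 1e-5, 1.0), ("blower", 3e-5, 8.0)]
    mission_hr = 3 * 365 * 24   # roughly a 3-year Mars mission
    spare_mass = sum(m * spares_needed(lam, mission_hr) for _, lam, m in components)
    print(f"spare mass: {spare_mass:.1f} kg")
    ```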

  4. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
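
    The observation that added spares can be defeated by common cause failures while technically diverse systems largely avoid them is often expressed with a beta-factor style model; the sketch below is an illustrative comparison with assumed numbers, not the paper's cost or reliability model:

    ```python
    # Illustrative beta-factor comparison (hypothetical numbers, not the paper's data):
    # identical redundant strings share a common-cause fraction beta of failures,
    # while technically diverse strings are treated as (nearly) independent.
    p = 1e-2      # probability that one string fails over the mission (assumed)
    beta = 0.1    # fraction of failures shared by identical strings (assumed)

    p_identical_pair = beta * p + (1 - beta) * p ** 2   # dominated by common cause
    p_diverse_pair = p ** 2                             # independence assumption
    print(p_identical_pair, p_diverse_pair)             # ~1e-3 vs 1e-4
    ```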

  5. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the Moon and Mars.

  6. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank Mueller

    2009-02-05

MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (Forum to Address Scalable Technology for runtime and Operating Systems) and HECRTF (High-End Computing Revitalization Task Force) activities by providing modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, and ease-of-use, and to provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.

  7. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years, VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available cloud resources. The DIRAC File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and new production systems and report on the experience of migrating to the new system.

  8. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

Processing and analyzing Earth remote sensing data on board an ultra-small spacecraft is a pressing task, given the significant energy expenditure required for data transfer and the low performance of onboard computers. This raises the issue of effective and reliable storage of the overall information flow obtained from onboard data-collection systems, including Earth remote sensing data, in a specialized database. The paper considers the operation of a database management system with a multilevel memory structure. For data storage, a format has been developed that describes the physical structure of the database and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values do not need to be stored separately. The paper presents the architecture of a relational database management system intended for embedding in the onboard software of an ultra-small spacecraft. Databases for storing various information, including Earth remote sensing data, can be developed with this database management system for subsequent processing. The proposed database management system architecture places low demands on the computing power and memory resources available on board the ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.

  9. Capturing and Processing Soil GHG Fluxes Using the LI-COR LI-8100A

    NASA Astrophysics Data System (ADS)

    Xu, Liukang; McDermitt, Dayle; Hupp, Jason; Johnson, Mark; Madsen, Rod

    2015-04-01

The LI-COR LI-8100A Automated Soil CO2 Flux System is designed to measure soil CO2 efflux using automated chambers and a non-steady state measurement protocol. While CO2 is an important gas in many contexts, it is not the only gas of interest for many research applications. With some simple plumbing modifications, many third-party analyzers capable of measuring other trace gases, e.g., N2O, CH4, or 13CO2, can be interfaced with the LI-8100A System, and LI-COR's data processing software (SoilFluxPro™) can be used to compute fluxes for these additional gases. In this paper we describe considerations for selecting an appropriate third-party analyzer to interface with the system, how to integrate data into the system, and the procedure used to compute fluxes of additional gases in SoilFluxPro™. A case study is presented to demonstrate methane flux measurements using an Ultra-Portable Greenhouse Gas Analyzer (Ultra-Portable GGA, model 915-0011), manufactured by Los Gatos Research and integrated into the LI-8100A System. Laboratory and field test results show that the soil CO2 efflux values based on the time series of CO2 data measured either with the LI-8100A System or with the Ultra-Portable GGA are essentially the same. This suggests that soil GHG fluxes measured with both systems are reliable.
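
    As a rough illustration of how a flux is obtained from a closed-chamber concentration time series (the general non-steady-state approach, not LI-COR's SoilFluxPro implementation), one can fit the initial slope of the mixing ratio and scale it by the molar density of air and the chamber geometry; all values and names below are assumptions:

    ```python
    # Minimal sketch of a non-steady-state chamber flux estimate (linear fit),
    # assuming ideal-gas behavior; illustrative only, not the LI-8100A processing.
    import numpy as np

    def chamber_flux(t_s, mole_fraction, volume_m3, area_m2, pressure_pa, temp_k):
        """Flux in mol m-2 s-1 from the slope of the mixing-ratio time series."""
        dcdt = np.polyfit(t_s, mole_fraction, 1)[0]        # mol mol-1 s-1
        molar_density = pressure_pa / (8.314 * temp_k)     # mol m-3 (ideal gas)
        return molar_density * volume_m3 / area_m2 * dcdt

    t = np.arange(0, 120, 1.0)                  # 2-minute chamber closure (assumed)
    ch4 = 1.9e-6 + 2.0e-11 * t                  # synthetic CH4 rise, mol mol-1
    print(chamber_flux(t, ch4, 4.8e-3, 3.18e-2, 101325.0, 298.15))
    ```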

  10. An Experimental Study of an Ultra-Mobile Vehicle for Off-Road Transportation

    DTIC Science & Technology

    1984-05-01

Adaptive Hexapod Vehicle. M.S. thesis, The Ohio State University, August, 1982. 7. Tsai, C.K., Computer Control Design of an Energy-Efficient Leg, M.S...Applications, ASME, 1982. 9. Kao, M.L., A Reliable Multi-Microcomputer System for Real Time Control, M.S. thesis, The Ohio State University, December...13. Broerman, K.R., Development of a Proximity Sensor System for Foot Altitude Control of a Terrain-Adaptive Hexapod Robot, M.S. thesis, The Ohio State

  11. Ultra-wide Range Gamma Detector System for Search and Locate Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S., Jr.

    2005-10-26

Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long-range radiation search and close-range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose, from hundreds of µR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long-range operations. Long-range sensitivity and target location, as well as close-range sample selection performance, are presented.

  12. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  13. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2012-01-01

This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of those efforts, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.
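
    The flavor of the result, that only mean recovery times are needed for a useful upper bound, can be illustrated with a back-of-the-envelope calculation under strong simplifying assumptions (Poisson fault arrivals at rate λ over mission time T, with system failure only if a second fault arrives while recovery from the first, of mean duration E[R], is still in progress). This is a sketch, not the model reduction theorem derived in the paper:

    ```latex
    % Illustrative bound only; not the paper's theorem.
    P_{\text{fail}}(T) \;\lesssim\;
      \underbrace{\lambda T}_{\text{first fault in mission}}
      \times
      \underbrace{\lambda\, \mathrm{E}[R]}_{\text{second fault during recovery}}
    ```

    The bound depends on the recovery-time distribution only through its mean E[R], which is why estimating mean recovery times from fault-injection experiments can suffice.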

  14. A Conceptual Design for a Reliable Optical Bus (ROBUS)

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Malekpour, Mahyar; Torres, Wilfredo

    2002-01-01

    The Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) is a new family of fault-tolerant architectures under development at NASA Langley Research Center (LaRC). The SPIDER is a general-purpose computational platform suitable for use in ultra-reliable embedded control applications. The design scales from a small configuration supporting a single aircraft function to a large distributed configuration capable of supporting several functions simultaneously. SPIDER consists of a collection of simplex processing elements communicating via a Reliable Optical Bus (ROBUS). The ROBUS is an ultra-reliable, time-division multiple access broadcast bus with strictly enforced write access (no babbling idiots) providing basic fault-tolerant services using formally verified fault-tolerance protocols including Interactive Consistency (Byzantine Agreement), Internal Clock Synchronization, and Distributed Diagnosis. The conceptual design of the ROBUS is presented in this paper including requirements, topology, protocols, and the block-level design. Verification activities, including the use of formal methods, are also discussed.

  15. Design of nodes for embedded and ultra low-power wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Xu, Jun; You, Bo; Cui, Juan; Ma, Jing; Li, Xin

    2008-10-01

A sensor network integrates sensor technology, MEMS (Micro-Electro-Mechanical Systems) technology, embedded computing, wireless communication technology, and distributed information management technology. It is of great value in places that are difficult for humans to reach. Power consumption and size are the most important considerations when nodes are designed for a distributed WSN (wireless sensor network). Consequently, it is of great importance to decrease the size of a node, reduce its power consumption, and extend its life in the network. WSN nodes have been designed using the JN5121-Z01-M01 module produced by Jennic and IEEE 802.15.4/ZigBee technology. Their new features include support for CPU sleep modes and a long-term ultra-low-power sleep mode for the entire node. In the low-power configuration the node resembles existing small low-power nodes. An embedded temperature sensor node has been developed to verify and explore our architecture. The experimental results indicate that the WSN offers high reliability, good stability, and ultra-low power consumption.

  16. The use of automatic programming techniques for fault tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  17. Technical description of space ultra reliable modular computer (SUMC), model 2 B

    NASA Technical Reports Server (NTRS)

    1975-01-01

The design features of the SUMC-2B computer, also called the IBM-HTC, are described. It is a general-purpose digital computer implemented with flexible hardware elements and microprogramming to enable low-cost customizing to a wide range of applications. It executes the S/360 standard instruction set to maintain problem-state compatibility. Memory technology, extended instruction sets, and I/O channel variations are among the available options.

  18. Design and simulation of the direct drive servo system

    NASA Astrophysics Data System (ADS)

    Ren, Changzhi; Liu, Zhao; Song, Libin; Yi, Qiang; Chen, Ken; Zhang, Zhenchao

    2010-07-01

As direct drive technology finds its way into telescope drive designs because of its many advantages, it promises more reliable and cheaper solutions for future telescope complex motion systems. However, a telescope drive system based on direct drive technology is a highly integrated electromechanical system, so a complex electromechanical design method is adopted to improve the efficiency, reliability, and quality of the system during the design and manufacturing cycle. The telescope is an ultra-precise, high-speed, high-precision instrument with huge inertia, and the direct-drive torque motor adopted in the telescope drive system differs from traditional motors. This paper explores the design process, and some simulation results are discussed.

  19. Intelligent neuroprocessors for in-situ launch vehicle propulsion systems health management

    NASA Technical Reports Server (NTRS)

    Gulati, S.; Tawel, R.; Thakoor, A. P.

    1993-01-01

The efficacy of existing on-board propulsion system health management systems (HMS) is severely impacted by computational limitations (e.g., low sampling rates); paradigmatic limitations (e.g., low-fidelity logic/parameter redlining only, false alarms due to noisy/corrupted sensor signatures, preprogrammed diagnostics only); and telemetry bandwidth limitations on space/ground interactions. Ultra-compact/light, adaptive neural networks with massively parallel, asynchronous, fast reconfigurable and fault-tolerant information processing properties have already demonstrated significant potential for in-flight diagnostic analyses and resource allocation with reduced ground dependence. In particular, they can automatically exploit correlation effects across multiple sensor streams (plume analyzer, flow meters, vibration detectors, etc.) so as to detect anomaly signatures that cannot be determined from a single sensor. Furthermore, neural networks have already demonstrated the potential for impacting real-time fault recovery in vehicle subsystems by adaptively regulating combustion mixture/power subsystems and optimizing resource utilization under degraded conditions. A class of high-performance neuroprocessors, developed at JPL, that has demonstrated potential for next-generation HMS for a family of space transportation vehicles envisioned for the next few decades, including HLLV, NLS, and the space shuttle, is presented. Of fundamental interest are intelligent neuroprocessors for real-time plume analysis, optimizing combustion mixture ratio, and feedback to hydraulic and pneumatic control systems. This class includes concurrently asynchronous, reprogrammable, nonvolatile, analog neural processors with high-speed, high-bandwidth electronic/optical I/O interfaces, with special emphasis on NASA's unique requirements in terms of performance, reliability, ultra-high density, ultra-compactness, ultra-light weight, radiation-hardened devices, power stringency, and long life terms.

  20. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes captured with only ~10% of the samples required by a conventional system. Despite the compression, the technique is extremely complex computationally, because reconstruction of ultra-spectral images requires processing huge data cubes of Gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one dimensional (1D), for which the reconstruction is carried out spatially pixel-wise, or two dimensional (2D), working on spatial rows/columns of the ultra-spectral cube, as well as three dimensional (3D).

  1. Design and control of the precise tracking bed based on complex electromechanical design theory

    NASA Astrophysics Data System (ADS)

    Ren, Changzhi; Liu, Zhao; Wu, Liao; Chen, Ken

    2010-05-01

Precise tracking technology is widely used in astronomical instruments, satellite tracking, and aeronautic test beds. The precise ultra-low-speed tracking drive system is a highly integrated electromechanical system, so a complex electromechanical design method is adopted to improve the efficiency, reliability, and quality of the system during the design and manufacturing cycle. The precise tracking bed is an ultra-precise, ultra-low-speed, high-precision instrument with huge inertia, and the mechanisms and operating environment of ultra-low-speed motion differ from those of general technology. This paper explores the design process based on complex electromechanical optimizing design theory; a non-PID control method with CMAC feedforward is used in the servo system of the precise tracking bed, and some simulation results are discussed.

  2. Final Report for DOE Award ER25756

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kesselman, Carl

    2014-11-17

The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.

  3. Coexistence of enhanced mobile broadband communications and ultra-reliable low-latency communications in mobile front-haul

    NASA Astrophysics Data System (ADS)

    Ying, Kai; Kowalski, John M.; Nogami, Toshizo; Yin, Zhanping; Sheng, Jia

    2018-01-01

5G systems are supposed to support the coexistence of multiple services such as ultra-reliable low-latency communications (URLLC) and enhanced mobile broadband (eMBB) communications. The target of eMBB communications is to meet high-throughput requirements, while URLLC is used for some high-priority services. Due to its sporadic nature and low latency requirement, URLLC transmission may pre-empt resources from eMBB transmission. Our work analyzes the URLLC impact on eMBB transmission in the mobile front-haul. Some solutions are then proposed to guarantee the reliability/latency requirements for URLLC services while minimizing the impact on eMBB services.

  4. TOPICAL REVIEW: Ultra-thin film encapsulation processes for micro-electro-mechanical devices and systems

    NASA Astrophysics Data System (ADS)

    Stoldt, Conrad R.; Bright, Victor M.

    2006-05-01

    A range of physical properties can be achieved in micro-electro-mechanical systems (MEMS) through their encapsulation with solid-state, ultra-thin coatings. This paper reviews the application of single source chemical vapour deposition and atomic layer deposition (ALD) in the growth of submicron films on polycrystalline silicon microstructures for the improvement of microscale reliability and performance. In particular, microstructure encapsulation with silicon carbide, tungsten, alumina and alumina-zinc oxide alloy ultra-thin films is highlighted, and the mechanical, electrical, tribological and chemical impact of these overlayers is detailed. The potential use of solid-state, ultra-thin coatings in commercial microsystems is explored using radio frequency MEMS as a case study for the ALD alloy alumina-zinc oxide thin film.

  5. Ultra-short heart rate variability recording reliability: The effect of controlled paced breathing.

    PubMed

    Melo, Hiago M; Martins, Thiago C; Nascimento, Lucas M; Hoeller, Alexandre A; Walz, Roger; Takase, Emílio

    2018-06-04

Recent studies have reported that Heart Rate Variability (HRV) indices remain reliable even during recordings shorter than 5 min, suggesting the ultra-short recording method as a valuable tool for autonomic assessment. However, the minimum time-epoch needed to obtain a reliable record for all HRV domains (time, frequency, and Poincare geometric measures), as well as the effect of respiratory rate on the reliability of these indices, remains unknown. Twenty volunteers had their HRV recorded in a seated position during spontaneous and controlled respiratory rhythms. HRV intervals of 1, 2, and 3 min were correlated with the gold-standard period (6-min duration), and the mean values of all indices were compared between the two respiratory rhythm conditions. rMSSD and SD1 were more reliable for recordings with ultra-short duration at all time intervals (r values from 0.764 to 0.950, p < 0.05) for the spontaneous breathing condition, whereas the other indices require longer recording time to obtain reliable values. The controlled breathing rhythm evokes stronger r values for time domain indices (r values from 0.83 to 0.99, p < 0.05 for rMSSD), but impairs the replicability of mean values across most time intervals. Although the use of standardized breathing increases the correlation coefficients, all HRV indices showed an increase in mean values (t values from 3.79 to 14.94, p < 0.001) except RR and HF, which decreased (t = 4.14 and 5.96, p < 0.0001). Our results indicate that a proper ultra-short-term recording method can provide a quick and reliable source of cardiac autonomic nervous system assessment. © 2018 Wiley Periodicals, Inc.
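
    The reliability check described above amounts to computing an index such as rMSSD on a short epoch and on the full recording for each subject, then correlating the two across subjects. The sketch below uses synthetic RR intervals and is not the study's data or processing pipeline:

    ```python
    # Minimal sketch of an ultra-short HRV reliability check (synthetic data).
    import numpy as np

    def rmssd(rr_ms):
        """Root mean square of successive RR-interval differences (ms)."""
        return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

    rng = np.random.default_rng(0)
    short, full = [], []
    for _ in range(20):                                  # 20 hypothetical subjects
        sd = rng.uniform(20, 80)                         # subject-specific variability
        rr = rng.normal(800, sd, size=450)               # ~6 min of beats at 800 ms
        short.append(rmssd(rr[:75]))                     # first ~1 min ultra-short epoch
        full.append(rmssd(rr))                           # full "gold standard" recording
    print(np.corrcoef(short, full)[0, 1])                # Pearson r between epochs
    ```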

  6. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

001 is an integrated tool suite for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  7. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The goals are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
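
    A standard zero-failure demonstration calculation illustrates why purely statistical testing becomes impractical at ultra-high levels, and hence why taking credit for other V&V evidence matters; the numbers below are assumptions, not figures from the report:

    ```python
    # Classic zero-failure reliability demonstration: how many failure-free test
    # runs are needed to claim a per-demand failure probability at a given
    # confidence level. Illustrative only.
    import math

    def tests_needed(failure_prob_per_demand, confidence):
        """Failure-free runs needed so that the claim holds at the stated confidence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - failure_prob_per_demand))

    print(tests_needed(1e-4, 0.99))   # about 46,050 failure-free runs
    ```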

  8. Integrated computational study of ultra-high heat flux cooling using cryogenic micro-solid nitrogen spray

    NASA Astrophysics Data System (ADS)

    Ishimoto, Jun; Oh, U.; Tan, Daisuke

    2012-10-01

A new type of ultra-high heat flux cooling system using the atomized spray of cryogenic micro-solid nitrogen (SN2) particles produced by a superadiabatic two-fluid nozzle was developed and numerically investigated for application to next-generation supercomputer processor thermal management. The fundamental characteristics of heat transfer and cooling performance of micro-solid nitrogen particulate spray impinging on a heated substrate were numerically investigated and experimentally measured by a new type of integrated computational-experimental technique. The employed Computational Fluid Dynamics (CFD) analysis based on the Euler-Lagrange model is focused on the cryogenic spray behavior of atomized particulate micro-solid nitrogen and also on its ultra-high heat flux cooling characteristics. Based on the numerically predicted performance, a new type of cryogenic spray cooling technique for application to ultra-high power density devices was developed. The present integrated computation clarifies that the cryogenic micro-solid spray cooling characteristics are affected by several factors in the heat transfer process of the micro-solid spray impinging on the heated surface, as well as by the atomization behavior of the micro-solid particles. When micro-SN2 spray cooling was used, an ultra-high cooling heat flux level was achieved during operation, a better cooling performance than that of liquid nitrogen (LN2) spray cooling. As micro-SN2 cooling has the advantage of direct latent heat transport, which avoids the film boiling state, ultra-short time scale heat transfer in a thin boundary layer is more readily achieved than with LN2 spray. The present numerical prediction of the micro-SN2 spray cooling heat flux profile can reasonably reproduce the measured cooling wall heat flux profiles. The application of micro-solid spray as a refrigerant for next-generation computer processors is anticipated, and its ultra-high heat flux technology is expected to result in an extensive improvement in the effective cooling performance of large-scale supercomputer systems.

  9. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849

  10. Test experience on an ultrareliable computer communication network

    NASA Technical Reports Server (NTRS)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  11. Enhancing thermal reliability of fiber-optic sensors for bio-inspired applications at ultra-high temperatures

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon; Kim, Heon-Young; Kim, Dae-Hyun

    2014-07-01

The rapid growth of bio-(inspired) sensors has led to an improvement in modern healthcare and human-robot systems in recent years. Higher levels of reliability and better flexibility, essential features of these sensors, are very much required in many application fields (e.g. applications at ultra-high temperatures). Fiber-optic sensors, and fiber Bragg grating (FBG) sensors in particular, are being widely studied as suitable sensors for improved structural health monitoring (SHM) due to their many merits. To enhance the thermal reliability of FBG sensors, the thermal sensitivity, generally expressed as α_f + ξ_f and considered a constant, should be investigated more precisely. For this purpose, the governing equation of FBG sensors is modified using differential derivatives between the wavelength shift and the temperature change in this study. Through a thermal test ranging from room temperature to 900 °C, the thermal sensitivity of FBG sensors is successfully examined, and this guarantees the thermal reliability of FBG sensors at ultra-high temperatures. In detail, α_f + ξ_f has a non-linear dependence on temperature and varies from 6.0 × 10^-6 °C^-1 (20 °C) to 10.6 × 10^-6 °C^-1 (650 °C). Also, FBGs should be carefully used for applications at ultra-high temperatures due to signal disappearance near 900 °C.
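
    For reference, the commonly used first-order form of the FBG temperature response, which treats the sensitivity as a constant and which this study refines by letting it vary with temperature, is:

    ```latex
    % Standard first-order FBG temperature response (constant-sensitivity form).
    \frac{\Delta\lambda_B}{\lambda_B} = (\alpha_f + \xi_f)\,\Delta T,
    \qquad \alpha_f:\ \text{thermal expansion coefficient},\quad
           \xi_f:\ \text{thermo-optic coefficient}
    ```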

  12. Ultra Reliability Workshop Introduction

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.

    2006-01-01

    This plan is the accumulation of substantial work by a large number of individuals. The Ultra-Reliability team consists of representatives from each center who have agreed to champion the program and be the focal point for their center. A number of individuals from NASA, government agencies (including the military), universities, industry and non-governmental organizations also contributed significantly to this effort. Most of their names may be found on the Ultra-Reliability PBMA website.

  13. A low-cost, ultra-fast and ultra-low noise preamplifier for silicon avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Gasmi, Khaled

    2018-02-01

An ultra-fast and ultra-low noise preamplifier for amplifying the fast and weak electrical signals generated by silicon avalanche photodiodes has been designed and developed. It is characterized by its simplicity, compactness, reliability and low cost of construction. A very wide bandwidth of 300 MHz, a very good linearity from 1 kHz to 280 MHz, an ultra-low noise level at the input of only 1.7 nV Hz^-1/2 and a very good stability are its key features. The compact size (70 mm × 90 mm) and light weight (45 g), as well as its excellent characteristics, make this preamplifier very competitive compared to any commercial preamplifier. The preamplifier, which is a main part of the detection system of a homemade laser remote sensing system, has been successfully tested. In addition, it is versatile and can be used in any optical detection system requiring high speed and very low noise electronics.

  14. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing results in a shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. The experiments on large-scale DNA and protein data sets, which are files of more than 1 GB, showed that HAlign-II could save time and space. It outperformed the current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.

  15. Space Shuttle Communications Coverage Analysis for Thermal Tile Inspection

    NASA Technical Reports Server (NTRS)

    Kroll, Quin D.; Hwu, Shian U.; Upanavage, Matthew; Boster, John P.; Chavez, Mark A.

    2009-01-01

    The space shuttle ultra-high frequency Space-to-Space Communication System has to provide adequate communication coverage for astronauts who are performing thermal tile inspection and repair on the underside of the space shuttle orbiter (SSO). Careful planning and quantitative assessment are necessary to ensure successful system operations and mission safety in this work environment. This study assesses communication systems performance for astronauts who are working in the underside, non-line-of-sight shadow region on the space shuttle. All of the space shuttle and International Space Station (ISS) transmitting antennas are blocked by the SSO structure. To ensure communication coverage at planned inspection worksites, the signal strength and link margin between the SSO/ISS antennas and the extravehicular activity astronauts, whose line-of-sight is blocked by vehicle structure, was analyzed. Investigations were performed using rigorous computational electromagnetic modeling techniques. Signal strength was obtained by computing the reflected and diffracted fields along the signal propagation paths between transmitting and receiving antennas. Radio frequency (RF) coverage was determined for thermal tile inspection and repair missions using the results of this computation. Analysis results from this paper are important in formulating the limits on reliable communication range and RF coverage at planned underside inspection and repair worksites.

  16. Fly-by-light technology development plan

    NASA Technical Reports Server (NTRS)

    Todd, J. R.; Williams, T.; Goldthorpe, S.; Hay, J.; Brennan, M.; Sherman, B.; Chen, J.; Yount, Larry J.; Hess, Richard F.; Kravetz, J.

    1990-01-01

    The driving factors and developments which make a fly-by-light (FBL) viable are discussed. Documentation, analyses, and recommendations are provided on the major issues pertinent to facilitating the U.S. implementation of commercial FBL aircraft before the turn of the century. Areas of particular concern include ultra-reliable computing (hardware/software); electromagnetic environment (EME); verification and validation; optical techniques; life-cycle maintenance; and basis and procedures for certification.

  17. Dual-mode ultraflow access networks: a hybrid solution for the access bottleneck

    NASA Astrophysics Data System (ADS)

    Kazovsky, Leonid G.; Shen, Thomas Shunrong; Dhaini, Ahmad R.; Yin, Shuang; De Leenheer, Marc; Detwiler, Benjamin A.

    2013-12-01

Optical Flow Switching (OFS) is a promising solution for large Internet data transfers. In this paper, we introduce UltraFlow Access, a novel optical access network architecture that offers dual-mode service to its end-users: IP and OFS. With UltraFlow Access, we design and implement a new dual-mode control plane and a new dual-mode network stack to ensure efficient connection setup and reliable and optimal data transmission. We study the impact of the UltraFlow system's design on the network throughput. Our experimental results show that with an optimized system design, near optimal (around 10 Gb/s) OFS data throughput can be attained when the line rate is 10 Gb/s.

  18. Artificial Intelligence in Medical Practice: The Question to the Answer?

    PubMed

    Miller, D Douglas; Brown, Eric W

    2018-02-01

    Computer science advances and ultra-fast computing speeds find artificial intelligence (AI) broadly benefitting modern society-forecasting weather, recognizing faces, detecting fraud, and deciphering genomics. AI's future role in medical practice remains an unanswered question. Machines (computers) learn to detect patterns not decipherable using biostatistics by processing massive datasets (big data) through layered mathematical models (algorithms). Correcting algorithm mistakes (training) adds to AI predictive model confidence. AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts. While diagnostic confidence never reaches 100%, combining machines plus physicians reliably enhances system performance. Cognitive programs are impacting medical practice by applying natural language processing to read the rapidly expanding scientific literature and collate years of diverse electronic medical records. In this and other ways, AI may optimize the care trajectory of chronic disease patients, suggest precision therapies for complex illnesses, reduce medical errors, and improve subject enrollment into clinical trials. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

    Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use for High Performance Scientific Computing (HPC) of ultra-low-power mixed signal unconventional computational elements developed by Johns Hopkins University (JHU), and demonstrate that capability on both fluid and particle Plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code, and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.

  20. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Laboratory (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. This document contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements, and analytical models.

  1. Ultra Compact Optical Pickup with Integrated Optical System

    NASA Astrophysics Data System (ADS)

    Nakata, Hideki; Nagata, Takayuki; Tomita, Hironori

    2006-08-01

    Smaller and thinner optical pickups are needed for portable audio-visual (AV) products and notebook personal computers (PCs). We have newly developed an ultra compact recordable optical pickup for Mini Disc (MD) that measures less than 4 mm from the disc surface to the bottom of the optical pickup, making the optical system markedly compact. We have integrated all the optical components into an objective lens actuator moving unit, while fully satisfying recording and playback performance requirements. In this paper, we propose an ultra compact optical pickup applicable to portable MD recorders.

  2. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  3. Validation of radiative transfer computation with Monte Carlo method for ultra-relativistic background flow

    NASA Astrophysics Data System (ADS)

    Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi

    2017-11-01

    We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field by using the Monte Carlo (MC) method in the context of gamma-ray burst (GRB) emission. For obtaining reliable simulation results in the coupled computation of MC radiation transport with relativistic hydrodynamics which can reproduce GRB emission, we validated radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulation results of the angular distribution and spectrum were compared among three different inertial frames and in good agreement with each other. If the time duration for updating the flow-field was sufficiently small to resolve a mean free path of a photon into ten steps, the results were thoroughly converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power-law in frequency whose index was positive in the range from 1 to 10 MeV. The number of photons in the high-energy side decreased with the smeared shock front because the photons were less scattered immediately behind the shock wave due to the small electron number density. The large optical depth near the shock front was needed for obtaining high-energy photons through bulk Compton scattering. Even one-dimensional structure of the shock wave could affect the results of radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. Therefore, a further investigation with a smaller number of cells is required for obtaining realistic high-energy photons with multi-dimensional computations.
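
    A minimal sketch of the kind of frame-consistency check used in test (1) is shown below: a photon's frequency measured in the lab frame and in the frame comoving with the ultra-relativistic flow must be related by the relativistic Doppler factor, so quantities computed in different inertial frames can be compared after transformation. The values are illustrative assumptions, not the paper's code:

    ```python
    # Minimal sketch of a relativistic Doppler frame-consistency check.
    import math

    def doppler_factor(gamma, mu):
        """D = 1 / (Gamma * (1 - beta * mu)), with mu the cosine of the angle
        between the photon direction and the flow velocity in the lab frame."""
        beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
        return 1.0 / (gamma * (1.0 - beta * mu))

    gamma = 100.0                      # bulk Lorentz factor of the background flow
    nu_comoving = 1.0e18               # Hz, photon frequency in the comoving frame
    for mu in (1.0, 0.99, 0.0):        # photon direction relative to the flow
        print(mu, nu_comoving * doppler_factor(gamma, mu))   # lab-frame frequency
    ```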

  4. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the use of digital computer systems makes producing a credible reliability assessment a formidable task for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, its computational power, and its ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state spaces, in the Markov sense, that result from replicated redundant hardware, and the modeling of factors that can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
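
    As a minimal illustration of the kind of Markov model discussed here (a generic sketch, not the author's model), the snippet below evaluates a duplex system with imperfect fault coverage: a covered fault degrades the system to simplex operation, an uncovered fault fails it outright, and reliability is the probability of avoiding the failed state by mission time t. The failure rate and coverage values are assumed.

      import numpy as np

      def duplex_reliability(lam, c, t):
          """Reliability of a two-unit (duplex) system with per-unit failure rate
          lam and fault coverage c: a covered failure (probability c) leaves a
          working simplex, an uncovered failure fails the system immediately."""
          p_duplex = np.exp(-2.0 * lam * t)
          p_simplex = 2.0 * c * (np.exp(-lam * t) - np.exp(-2.0 * lam * t))
          return p_duplex + p_simplex

      # Assumed values: 1e-4 failures/hour per unit, 99% coverage, 10-hour mission.
      print(duplex_reliability(lam=1e-4, c=0.99, t=10.0))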

  5. System and method for magnetic current density imaging at ultra low magnetic fields

    DOEpatents

    Espy, Michelle A.; George, John Stevens; Kraus, Robert Henry; Magnelind, Per; Matlashov, Andrei Nikolaevich; Tucker, Don; Turovets, Sergei; Volegov, Petr Lvovich

    2016-02-09

    Preferred systems can include an electrical impedance tomography apparatus electrically connectable to an object; an ultra low field magnetic resonance imaging apparatus including a plurality of field directions and disposable about the object; a controller connected to the ultra low field magnetic resonance imaging apparatus and configured to implement a sequencing of one or more ultra low magnetic fields substantially along one or more of the plurality of field directions; and a display connected to the controller, and wherein the controller is further configured to reconstruct a displayable image of an electrical current density in the object. Preferred methods, apparatuses, and computer program products are also disclosed.

  6. Ultra-thin carbon-fiber paper fabrication and carbon-fiber distribution homogeneity evaluation method

    NASA Astrophysics Data System (ADS)

    Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.

    2018-01-01

    A preparation technology for ultra-thin carbon-fiber paper is reported. Carbon-fiber distribution homogeneity has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber among two or more binary (two-value) images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the more uniform the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.

  7. UltraForm Finishing (UFF) a 5-axis computer controlled precision optical component grinding and polishing system

    NASA Astrophysics Data System (ADS)

    Bechtold, Michael; Mohring, David; Fess, Edward

    2007-05-01

    OptiPro Systems has developed a new finishing process for the manufacturing of precision optical components. UltraForm Finishing (UFF) has evolved from a tire-shaped tool with polishing material on its periphery to its newest design, which incorporates a precision rubber wheel wrapped with a band of polishing material passing over it. Through our research we have developed a user-friendly graphical interface giving the optician a deterministic path for finishing precision optical components. UFF algorithms combine the removal function and desired depth of removal into a motion-controlled tool path that minimizes surface roughness and form errors. The UFF process includes 5 axes of computer-controlled motion (3 linear and 2 rotary), which provide the flexibility for finishing a variety of shapes including spheres, aspheres, and freeform optics. The long arm extension, along with a range of diameters for the "UltraWheel", provides a unique solution for finishing steep concave shapes such as ogives and domes. The UltraForm process utilizes fixed and loose abrasives in combination with our proprietary "UltraBelts", made from a range of materials such as polyurethane, felt, resin, diamond, and others.

  8. A human body model for efficient numerical characterization of UWB signal propagation in wireless body area networks.

    PubMed

    Lim, Hooi Been; Baumann, Dirk; Li, Er-Ping

    2011-03-01

    Wireless body area network (WBAN) is a new enabling system with promising applications in areas such as remote health monitoring and interpersonal communication. Reliable and optimal design of a WBAN system relies on a good understanding and in-depth study of wave propagation around the human body. However, the human body is a very complex structure and is computationally demanding to model. This paper investigates the effects of a numerical model's structural complexity and feature detail on the simulation results. Depending on the application, a simplified numerical model that meets the desired simulation accuracy can be employed for efficient simulations. Measurements of ultra wideband (UWB) signal propagation along a human arm are performed and compared to simulation results obtained with numerical arm models of different complexity levels. The influence of arm shape and size, as well as tissue composition and complexity, is investigated.

  9. Bayesian sparse channel estimation

    NASA Astrophysics Data System (ADS)

    Chen, Chulong; Zoltowski, Michael D.

    2012-05-01

    In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make compressed channel estimation more feasible for practical applications, it is investigated from the perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as the long delay incurred in estimating the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to conventional compressed channel estimation techniques.
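
    For concreteness, a hedged sketch of compressed channel estimation follows, using orthogonal matching pursuit rather than the Bayesian learning framework described above; the toy dimensions, pilot placement, and noise level are all assumptions. The sparse impulse response h is recovered from pilot observations y = A h + n, where A collects the pilot rows of the DFT matrix.

      import numpy as np

      def omp(A, y, k):
          """Orthogonal matching pursuit: recover a k-sparse vector from y = A h + n."""
          residual, support = y.copy(), []
          coef = np.zeros(0)
          for _ in range(k):
              # Select the column most correlated with the current residual.
              support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
              # Least-squares fit on the chosen support, then update the residual.
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          h = np.zeros(A.shape[1], dtype=complex)
          h[support] = coef
          return h

      # Toy setup: 256 subcarriers, 16 pilot tones, a 3-tap sparse channel of length 64.
      rng = np.random.default_rng(0)
      pilots = rng.choice(256, 16, replace=False)
      F = np.exp(-2j * np.pi * np.outer(np.arange(256), np.arange(256)) / 256)
      A = F[pilots, :64]
      h_true = np.zeros(64, dtype=complex)
      h_true[rng.choice(64, 3, replace=False)] = rng.standard_normal(3)
      y = A @ h_true + 0.01 * rng.standard_normal(16)
      print("recovery error:", np.linalg.norm(omp(A, y, 3) - h_true))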

  10. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  11. Common Cause Failures and Ultra Reliability

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.
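
    To make the effect on redundancy concrete, the sketch below applies the standard beta-factor model (an assumption used here for illustration, not taken from this record): a fraction beta of each unit's failure rate is a common cause event that strikes both channels at once, so on-line redundancy stops helping once common cause failures dominate.

      import numpy as np

      def redundant_pair_reliability(lam, beta, t):
          """Two-unit parallel system under the beta-factor common cause model:
          a fraction beta of the per-unit failure rate lam fails both units together."""
          r_ind = np.exp(-(1.0 - beta) * lam * t)    # survival against independent failures
          r_cc = np.exp(-beta * lam * t)             # survival against the common cause
          return (1.0 - (1.0 - r_ind) ** 2) * r_cc

      # Assumed numbers: 1e-3 failures/hour per unit, 1000-hour mission.
      for beta in (0.0, 0.05, 0.10):
          print(beta, redundant_pair_reliability(lam=1e-3, beta=beta, t=1000.0))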

  12. Design and Analysis of a Flexible, Reliable Deep Space Life Support System

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    This report describes a flexible, reliable, deep space life support system design approach that uses either storage or recycling or both together. The design goal is to provide the needed life support performance with the required ultra reliability for the minimum Equivalent System Mass (ESM). Recycling life support systems used with multiple redundancy can have sufficient reliability for deep space missions but they usually do not save mass compared to mixed storage and recycling systems. The best deep space life support system design uses water recycling with sufficient water storage to prevent loss of crew if recycling fails. Since the amount of water needed for crew survival is a small part of the total water requirement, the required amount of stored water is significantly less than the total to be consumed. Water recycling with water, oxygen, and carbon dioxide removal material storage can achieve the high reliability of full storage systems with only half the mass of full storage and with less mass than the highly redundant recycling systems needed to achieve acceptable reliability. Improved recycling systems with lower mass and higher reliability could perform better than systems using storage.

  13. Intraday and Interday Reliability of Ultra-Short-Term Heart Rate Variability in Rugby Union Players.

    PubMed

    Nakamura, Fábio Y; Pereira, Lucas A; Esco, Michael R; Flatt, Andrew A; Moraes, José E; Cal Abad, Cesar C; Loturco, Irineu

    2017-02-01

    Nakamura, FY, Pereira, LA, Esco, MR, Flatt, AA, Moraes, JE, Cal Abad, CC, and Loturco, I. Intraday and interday reliability of ultra-short-term heart rate variability in rugby union players. J Strength Cond Res 31(2): 548-551, 2017-The aim of this study was to examine the intraday and interday reliability of ultra-short-term vagal-related heart rate variability (HRV) in elite rugby union players. Forty players from the Brazilian National Rugby Team volunteered to participate in this study. The natural log of the root mean square of successive RR interval differences (lnRMSSD) assessments were performed on 4 different days. The HRV was assessed twice (intraday reliability) on the first day and once per day on the following 3 days (interday reliability). The RR interval recordings were obtained from 2-minute recordings using a portable heart rate monitor. The relative reliability of intraday and interday lnRMSSD measures was analyzed using the intraclass correlation coefficient (ICC). The typical error of measurement (absolute reliability) of intraday and interday lnRMSSD assessments was analyzed using the coefficient of variation (CV). Both intraday (ICC = 0.96; CV = 3.99%) and interday (ICC = 0.90; CV = 7.65%) measures were highly reliable. The ultra-short-term lnRMSSD is a consistent measure for evaluating elite rugby union players, in both intraday and interday settings. This study provides further validity to using this shortened method in practical field conditions with highly trained team sports athletes.
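
    A hedged sketch (not the authors' analysis code) of the two reliability statistics reported here: a one-way random-effects intraclass correlation coefficient and the typical error of measurement expressed as a coefficient of variation, computed from paired lnRMSSD trials. The example values are illustrative, not study data.

      import numpy as np

      def icc_oneway(x):
          """One-way random-effects ICC(1,1) for an (n_subjects x k_trials) array."""
          n, k = x.shape
          grand = x.mean()
          ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
          ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
          return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

      def typical_error_cv(trial1, trial2):
          """Typical error (SD of trial differences / sqrt(2)) as a percentage of the mean."""
          te = np.std(trial1 - trial2, ddof=1) / np.sqrt(2.0)
          return 100.0 * te / np.mean(np.concatenate([trial1, trial2]))

      t1 = np.array([3.9, 4.2, 3.5, 4.0, 3.7])   # lnRMSSD, trial 1 (illustrative)
      t2 = np.array([4.0, 4.1, 3.6, 3.9, 3.8])   # lnRMSSD, trial 2 (illustrative)
      print(icc_oneway(np.column_stack([t1, t2])), typical_error_cv(t1, t2))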

  14. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on these systems, and developed as part of the Ultra-High Resolution Climate Modeling Project, allows users of OLCF resources to efficiently share simulated data, often multi-terabyte in volume, as well as the results from the modeling experiments and various synthesized products derived from these simulations. The final objective in the exercise is to ensure that the simulation results and the enhanced understanding will serve the needs of a diverse group of stakeholders across the world, including our research partners in U.S. Department of Energy laboratories & universities, domain scientists, students (K-12 as well as higher education), resource managers, decision makers, and the general public.

  15. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
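
    A minimal sketch of the reliability bookkeeping described here, under assumed numbers and a constant failure rate (the actual model uses the timing and status history produced by the blowdown performance code): the probability of completing each successive orbit correction is the probability that the thruster assembly survives the cumulative operating time up to that burn.

      import numpy as np

      def correction_success_probabilities(burn_hours, lam):
          """Probability of completing each successive orbit correction, assuming
          an exponential failure law with rate lam (failures per operating hour)
          applied to the cumulative thruster operating time."""
          return np.exp(-lam * np.cumsum(burn_hours))

      # Assumed burn durations (hours of thruster operation) and failure rate.
      print(correction_success_probabilities([0.5, 0.5, 1.0, 1.0], lam=2e-3))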

  16. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate the reliability of complex, redundant, fault-tolerant systems. The program is specifically designed for the evaluation of fault-tolerant avionics systems. However, CARE III is general enough for use in evaluating other systems as well.

  17. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault-tolerant digital computer-based systems are discussed. Computer-Aided Reliability Estimation 3 (CARE 3) and Gate Logic Software Simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process for ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  18. Computational Modeling Develops Ultra-Hard Steel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  19. Ultra-Low-Dropout Linear Regulator

    NASA Technical Reports Server (NTRS)

    Thornton, Trevor; Lepkowski, William; Wilk, Seth

    2011-01-01

    A radiation-tolerant, ultra-low-dropout linear regulator can operate between -150 and 150 C. Prototype components were demonstrated to be performing well after a total ionizing dose of 1 Mrad (Si). Unlike existing components, the linear regulator developed during this activity is unconditionally stable over all operating regimes without the need for an external compensation capacitor. The absence of an external capacitor reduces overall system mass/volume, increases reliability, and lowers cost. Linear regulators generate a precisely controlled voltage for electronic circuits regardless of fluctuations in the load current that the circuit draws from the regulator.

  20. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, integrated circuit design, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized by logical connections among components placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code, based on the proposed method, for computing the reliability of linear and circular systems with a large number of components.
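
    The record above describes its own method and R code; as a generic point of comparison, the sketch below computes the linear case with the standard recursion (conditioning on the first working component), assuming i.i.d. components. The circular case treated in the paper is omitted.

      from functools import lru_cache

      def linear_consecutive_kofn_F(n, k, p):
          """Reliability of a linear consecutive k-out-of-n:F system with i.i.d.
          components of reliability p: the system fails iff at least k consecutive
          components fail."""
          q = 1.0 - p

          @lru_cache(maxsize=None)
          def r(m):
              if m < k:
                  return 1.0
              # First j components fail (j < k), component j+1 works, and the
              # remaining m - j - 1 components must contain no run of k failures.
              return sum(p * q**j * r(m - j - 1) for j in range(k))

          return r(n)

      # Example: 10 components of reliability 0.9; the system fails if any 3 in a row fail.
      print(linear_consecutive_kofn_F(n=10, k=3, p=0.9))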

  1. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  2. Ultrareliable PACS: design and clinical evaluation

    NASA Astrophysics Data System (ADS)

    Goble, John C.; Kronander, Torbjorn; Wilske, Nils-Olof; Yngvesson, Jonas T.; Ejderholm, Henrik; Ekstrom, Marie

    1999-07-01

    We describe our experience in the design, installation, and clinical evaluation of an ultra-reliable PACS - a system in which the fundamental design constraint was system availability. The system has been constructed from commercial, off-the-shelf hardware and software, using an open, standards-based approach. The system is deployed in the film-free Department of Pediatric Radiology at the Astrid Lindgren Barnsjukhus, a unit of the Karolinska Institute in Stockholm, Sweden.

  3. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
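
    The matrix-exponential step that PAWS/STEM performs can be illustrated with a small pure Markov model (a generic sketch in Python, not the SURE input language or the SARA code): the state probabilities at mission time t are p(0) exp(Qt), and the mass that has accumulated in the absorbing failure state is the unreliability. The rates, coverage, and mission time are assumed.

      import numpy as np
      from scipy.linalg import expm

      # Toy pure Markov model of a reconfigurable duplex: state 0 = duplex,
      # state 1 = simplex after a covered fault, state 2 = system failure (absorbing).
      lam, c, t = 1e-4, 0.999, 10.0     # assumed failure rate (/hour), coverage, mission time (hours)
      Q = np.array([[-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
                    [ 0.0,     -lam,          lam              ],
                    [ 0.0,      0.0,          0.0              ]])

      p0 = np.array([1.0, 0.0, 0.0])    # start in the duplex state
      p_t = p0 @ expm(Q * t)            # state probabilities at mission time t
      print("probability of system failure:", p_t[2])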

  4. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.

  5. Reliability of Computer Systems ODRA 1305 and R-32,

    DTIC Science & Technology

    1983-03-25

    RELIABILITY OF COMPUTER SYSTEMS ODRA 1305 AND R-32. By: Wit Drewniak. English pages: 12. Source: Informatyka, Vol. 14, Nr. 7, 1979, pp. 5-8. Country of... JS EMC computers installed in ZETO, Katowice", Informatyka, No. 7-8/78, deals with various reliability classes within the family of the machines of

  6. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

  7. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.

  8. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.

  9. System reliability approaches for advanced propulsion system structures

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Mahadevan, S.

    1991-01-01

    This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.

  10. Quantification of dopamine transporters in the mouse brain using ultra-high resolution single-photon emission tomography.

    PubMed

    Acton, Paul D; Choi, Seok-Rye; Plössl, Karl; Kung, Hank F

    2002-05-01

    Functional imaging of small animals, such as mice and rats, using ultra-high resolution positron emission tomography (PET) and single-photon emission tomography (SPET), is becoming a valuable tool for studying animal models of human disease. While several studies have shown the utility of PET imaging in small animals, few have used SPET in real research applications. In this study we aimed to demonstrate the feasibility of using ultra-high resolution SPET in quantitative studies of dopamine transporters (DAT) in the mouse brain. Four healthy ICR male mice were injected with (mean+/-SD) 704+/-154 MBq [(99m)Tc]TRODAT-1, and scanned using an ultra-high resolution SPET system equipped with pinhole collimators (spatial resolution 0.83 mm at 3 cm radius of rotation). Each mouse had two studies, to provide an indication of test-retest reliability. Reference tissue kinetic modeling analysis of the time-activity data in the striatum and cerebellum was used to quantitate the availability of DAT. A simple equilibrium ratio of striatum to cerebellum provided another measure of DAT binding. The SPET imaging results were compared against ex vivo biodistribution data from the striatum and cerebellum. The mean distribution volume ratio (DVR) from the reference tissue kinetic model was 2.17+/-0.34, with a test-retest reliability of 2.63%+/-1.67%. The ratio technique gave similar results (DVR=2.03+/-0.38, test-retest reliability=6.64%+/-3.86%), and the ex vivo analysis gave DVR=2.32+/-0.20. Correlations between the kinetic model and the ratio technique (R² = 0.86, P < 0.001) and the ex vivo data (R² = 0.92, P = 0.04) were both excellent. This study demonstrated clearly that ultra-high resolution SPET of small animals is capable of accurate, repeatable, and quantitative measures of DAT binding, and should open up the possibility of further studies of cerebral binding sites in mice using pinhole SPET.
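
    A minimal sketch of the simpler of the two quantification approaches mentioned above (the equilibrium tissue ratio), with the test-retest variability expressed as a percentage; the numeric inputs are illustrative, not study data.

      import numpy as np

      def ratio_dvr(striatum, cerebellum):
          """Equilibrium tissue-ratio estimate of the distribution volume ratio (DVR):
          activity in the specific-binding region over the reference region."""
          return striatum / cerebellum

      def test_retest_percent(a, b):
          """Test-retest variability as the absolute difference between two estimates,
          expressed as a percentage of their mean."""
          return 100.0 * abs(a - b) / np.mean([a, b])

      # Illustrative regional activity values from two scans of the same animal.
      dvr1 = ratio_dvr(striatum=120.0, cerebellum=60.0)
      dvr2 = ratio_dvr(striatum=118.0, cerebellum=61.0)
      print(dvr1, dvr2, test_retest_percent(dvr1, dvr2))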

  11. Human Mobility Monitoring in Very Low Resolution Visual Sensor Network

    PubMed Central

    Bo Bo, Nyan; Deboeverie, Francis; Eldib, Mohamed; Guan, Junzhi; Xie, Xingzhe; Niño, Jorge; Van Haerenborgh, Dirk; Slembrouck, Maarten; Van de Velde, Samuel; Steendam, Heidi; Veelaert, Peter; Kleihorst, Richard; Aghajan, Hamid; Philips, Wilfried

    2014-01-01

    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements, and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics of the tracks, such as total distance traveled and average speed derived from trajectories, are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics. PMID:25375754

  12. A Byzantine-Fault Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2006-01-01

    Embedded distributed systems have become an integral part of safety-critical computing applications, necessitating system designs that incorporate fault-tolerant clock synchronization in order to achieve ultra-reliable assurance levels. Many efficient clock synchronization protocols do not, however, address Byzantine failures, and most protocols that do tolerate Byzantine failures do not self-stabilize. The Byzantine self-stabilizing clock synchronization algorithms that exist in the literature are based either on unjustifiably strong assumptions about the initial synchrony of the nodes or on the existence of a common pulse at the nodes. The Byzantine self-stabilizing clock synchronization protocol presented here does not rely on any assumptions about the initial state of the clocks. Furthermore, there is neither a central clock nor an externally generated pulse system. The proposed protocol converges deterministically, is scalable, and self-stabilizes in a short amount of time. The convergence time is linear with respect to the self-stabilization period. Proofs of the correctness of the protocol as well as the results of formal verification efforts are reported.

  13. Energy efficient hybrid computing systems using spin devices

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications that can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin current facilitate non-Boolean computation, like majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ˜20 mV, resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital and analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ˜100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.

  14. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computation-intensive calculations. A computer program has been developed to implement the PFTA method.
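
    For orientation, a hedged sketch of the top-event probability calculation for a toy fault tree (not the paper's tree), using plain Monte Carlo in place of the adaptive importance sampling named above; the gate structure and bottom-event probabilities are assumptions.

      import numpy as np

      # Toy fault tree: TOP = (A AND B) OR C, with independent bottom events.
      rng = np.random.default_rng(1)
      p = {"A": 0.02, "B": 0.05, "C": 0.001}
      n = 1_000_000

      A, B, C = (rng.random(n) < p[name] for name in ("A", "B", "C"))
      top = (A & B) | C

      print("Monte Carlo top-event probability:", top.mean())
      print("exact value:", p["A"] * p["B"] + p["C"] - p["A"] * p["B"] * p["C"])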

  15. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.

  16. PDSS/IMC CIS user's guide

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Spacelab Payload Development Support System (PDSS) Image Motion Compensator (IMC) computer interface simulation (CIS) user's manual is given. The software provides a real-time interface simulation for the following IMC subsystems: the Dry Rotor Reference Unit, the Advanced Star/Target Reference Optical Sensor, the Ultraviolet Imaging Telescope, the Wisconsin Ultraviolet Photopolarimetry Experiment, the Cruciform Power Distributor, and the Spacelab Experiment Computer Operating System.

  17. Ionization-Assisted Getter Pumping for Ultra-Stable Trapped Ion Frequency Standards

    NASA Technical Reports Server (NTRS)

    Tjoelker, Robert L.; Burt, Eric A.

    2010-01-01

    A method eliminates (or recovers from) residual methane buildup in getter-pumped atomic frequency standard systems by applying ionizing assistance. Ultra-high stability trapped ion frequency standards for applications requiring very high reliability, and/or low power and mass (both for ground-based and space-based platforms) benefit from using sealed vacuum systems. These systems require careful material selection and system processing (cleaning and high-temperature bake-out). Even under the most careful preparation, residual hydrogen outgassing from vacuum chamber walls typically limits the base pressure. Non-evaporable getter pumps (NEGs) provide a convenient pumping option for sealed systems because of low mass and volume, and no power once activated. An ion gauge in conjunction with a NEG can be used to provide a low mass, low-power method for avoiding the deleterious effects of methane buildup in high-performance frequency standard vacuum systems.

  18. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been proposed for predicting software reliability. Existing reliability models are restricted to particular methodologies and to a limited number of parameters, although a number of techniques and methodologies may be used for reliability prediction, and the parameters considered while estimating reliability deserve careful attention. The reliability of a system may increase or decrease depending on the parameters selected, so the factors that most heavily affect reliability must be identified. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS). Cost, time, and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Soft computing techniques can be applied to many medicine-related problems: clinical medicine makes significant use of fuzzy logic and neural network methods, while the basic medical sciences most frequently use neural-network and genetic-algorithm approaches, and medical scientists have shown strong interest in applying soft computing in genetics, physiology, radiology, cardiology, and neurology. CBSE encourages users to reuse existing software when building new products, providing quality while saving time, memory, and money. This paper focuses on the assessment of commonly used soft computing techniques such as Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents how these techniques work and assesses their ability to predict reliability; the parameters considered in estimating and predicting reliability are also discussed. The study can be applied to estimating and predicting the reliability of instruments used in medical systems, software engineering, computer engineering, and mechanical engineering; the concepts apply to both software and hardware reliability prediction using CBSE.
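
    As a small, hedged illustration of applying a bio-inspired search to reliability prediction (an example constructed here, not taken from the paper), the sketch below fits the two-parameter Goel-Okumoto reliability growth curve m(t) = a(1 - exp(-b t)) to assumed cumulative failure counts, using a crude evolutionary random search standing in for the GA/PSO/ABC methods listed above.

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)        # test weeks (assumed data)
      m_obs = np.array([12, 21, 27, 32, 35, 37, 39, 40], float)  # cumulative failures (assumed)

      def sse(params):
          """Sum of squared errors of the Goel-Okumoto curve against the observed counts."""
          a, b = params
          return np.sum((m_obs - a * (1.0 - np.exp(-b * t))) ** 2)

      # Initial population of (a, b) candidates, then repeated select-and-mutate steps.
      pop = rng.uniform([20.0, 0.01], [80.0, 1.0], size=(50, 2))
      for _ in range(200):
          pop = pop[np.argsort([sse(ind) for ind in pop])][:25]          # keep the best half
          pop = np.vstack([pop, pop + rng.normal(0, [1.0, 0.02], size=pop.shape)])  # mutate copies

      best = min(pop, key=sse)
      print("fitted a, b:", best, "predicted eventual failure count:", best[0])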

  19. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power, high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  20. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.

  1. Computer calculation of device, circuit, equipment, and system reliability.

    NASA Technical Reports Server (NTRS)

    Crosby, D. R.

    1972-01-01

    A grouping into four classes is proposed for all reliability computations that are related to electronic equipment. Examples are presented of reliability computations in three of these four classes. Each of the three specific reliability tasks described was originally undertaken to satisfy an engineering need for reliability data. The form and interpretation of the print-out of the specific reliability computations are presented, and the justification for the costs of these computations is indicated. The skills of the personnel used to conduct the analysis, the interfaces between the personnel, and the timing of the projects are discussed.

  2. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology have established the need for unified methods to evaluate computing system performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis, and evaluation of degradable computing systems is considered. Within this framework, several user-oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time-varying version of the model is developed to generalize the traditional fault-tree reliability evaluation methods of phased missions.
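
    The performability idea can be illustrated with a deliberately small example (not from the cited report): a two-processor degradable system with independent failures at rate lambda and a reward rate equal to the number of working processors. The expected accumulated computation over a mission of length T is then E[W] = integral from 0 to T of 2 e^(-lambda t) dt = 2(1 - e^(-lambda T))/lambda; the sketch below checks this closed form against a Monte Carlo estimate. All numbers are illustrative assumptions.

      import math, random

      random.seed(1)
      lam, T = 0.001, 100.0                    # failure rate [1/h] and mission time [h], assumed

      analytic = 2.0 * (1.0 - math.exp(-lam * T)) / lam   # expected processor-hours delivered

      def sample():                            # reward accumulated on one random mission
          t1, t2 = random.expovariate(lam), random.expovariate(lam)
          return min(t1, T) + min(t2, T)

      mc = sum(sample() for _ in range(20000)) / 20000
      print("expected accumulated reward: analytic %.1f vs Monte Carlo %.1f" % (analytic, mc))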

  3. Automatic documentation system extension to multi-manufacturers' computers and to measure, improve, and predict software reliability

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1975-01-01

    The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.

  4. Temporal reliability of ultra-high field resting-state MRI for single-subject sensorimotor and language mapping.

    PubMed

    Branco, Paulo; Seixas, Daniela; Castro, São Luís

    2018-03-01

    Resting-state fMRI is a well-suited technique to map functional networks in the brain because, unlike task-based approaches, it requires little collaboration from subjects. This is especially relevant in clinical settings where a number of subjects cannot comply with task demands. Previous studies using conventional scanner fields have shown that resting-state fMRI is able to map functional networks in single subjects, albeit with moderate temporal reliability. Ultra-high field (7T) imaging provides higher signal-to-noise ratio and better spatial resolution and is thus well suited to assess the temporal reliability of mapping results, and to determine if resting-state fMRI can be applied in clinical decision making, including preoperative planning. We used resting-state fMRI at ultra-high field to examine whether the sensorimotor and language networks are reliable over time - in the same session and one week after. Resting-state networks were identified for all subjects and sessions with good accuracy. Both networks were well delimited within classical regions of interest. Mapping was temporally reliable at short and medium time-scales, as demonstrated by high values of overlap in the same session and one week after for both networks. Results were stable independently of data quality metrics and physiological variables. Taken together, these findings provide strong support for the suitability of ultra-high field resting-state fMRI mapping at the single-subject level.

  5. Standard measurement procedures for the characterization of fs-laser optical components

    NASA Astrophysics Data System (ADS)

    Starke, Kai; Ristau, Detlev; Welling, Herbert

    2003-05-01

    Ultra-short pulse laser systems are considered promising tools in the fields of precise micro-machining and medical applications. In the course of the development of reliable table-top laser systems, a rapid growth of ultra-short pulse applications has been observed during recent years. The key to improving the performance of high-power laser systems is the quality of the optical components with respect to spectral characteristics, optical losses, and power handling capability. In the field of ultra-short pulses, standard measurement procedures in quality management have to be validated with respect to effects induced by the extremely high peak power densities. The present work, which is embedded in the EUREKA project CHOCLAB II, concentrates predominantly on measuring the multiple-pulse LIDT (ISO 11254-2) in the fs-regime. A measurement facility based on a Ti:Sapphire CPA system was developed to investigate the damage behavior of optical components. The set-up was supplied with an improved pulse energy detector discriminating the influence of pulse-to-pulse energy fluctuations on the incidence of damage. Additionally, a laser-calorimetric measurement facility determining the absorption (ISO 11551) utilizing a fs Ti:Sapphire laser was realized. The investigation for different pulse durations between 130 fs and 1 ps revealed a drastic increase of absorption in titania coatings for ultra-short pulses.

  6. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products, and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.

  7. Reliability history of the Apollo guidance computer

    NASA Technical Reports Server (NTRS)

    Hall, E. C.

    1972-01-01

    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  8. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-04-24

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  9. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  10. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
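
    The modularization idea - build a reliability model per component, then combine the modules to obtain the total system reliability - can be sketched as follows. This is a generic illustration, not RML syntax or the REST tool itself; the series/parallel structure and failure rates are assumed for the example.

      import math

      def series(*rs):        # all modules must survive
          out = 1.0
          for r in rs:
              out *= r
          return out

      def parallel(*rs):      # at least one module must survive
          out = 1.0
          for r in rs:
              out *= (1.0 - r)
          return 1.0 - out

      def module(lam, t):     # exponential component reliability
          return math.exp(-lam * t)

      t = 10.0                                                 # mission time [h], assumed
      sensor   = parallel(module(1e-3, t), module(1e-3, t))    # duplicated sensors
      computer = parallel(module(5e-4, t), module(5e-4, t), module(5e-4, t))
      actuator = module(2e-4, t)
      print("system reliability: %.6f" % series(sensor, computer, actuator))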

  11. Interface For Fault-Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shaver, Charles; Williamson, Michael

    1989-01-01

    Interface unit and controller emulator developed for research on electronic helicopter-flight-control systems equipped with artificial intelligence. Interface unit interrupt-driven system designed to link microprocessor-based, quadruply-redundant, asynchronous, ultra-reliable, fault-tolerant control system (controller) with electronic servocontrol unit that controls set of hydraulic actuators. Receives digital feedforward messages from, and transmits digital feedback messages to, controller through differential signal lines or fiber-optic cables (thus far only differential signal lines have been used). Analog signals transmitted to and from servocontrol unit via coaxial cables.

  12. Large Eddy Simulations of Colorless Distributed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Abdulrahman, Husam F.; Jaberi, Farhad; Gupta, Ashwani

    2014-11-01

    Development of efficient and low-emission colorless distributed combustion (CDC) systems for gas turbine applications requires careful examination of the role of various flow and combustion parameters. Numerical simulations of CDC in a laboratory-scale combustor have been conducted to carefully examine the effects of these parameters on the CDC. The computational model is based on a hybrid modeling approach combining large eddy simulation (LES) with the filtered mass density function (FMDF) equations, solved with high-order numerical methods and complex chemical kinetics. The simulated combustor operates on the principle of high temperature air combustion (HiTAC) and has been shown to significantly reduce NOx and CO emissions while improving the reaction pattern factor and stability without using any flame stabilizer and with low pressure drop and noise. The focus of the current work is to investigate the mixing of air and hydrocarbon fuels and the non-premixed and premixed reactions within the combustor by the LES/FMDF with reduced chemical kinetic mechanisms for the same flow conditions and configurations investigated experimentally. The main goal is to develop better CDC with higher mixing and efficiency, ultra-low emission levels and optimum residence time. The computational results establish the consistency and the reliability of LES/FMDF and its Lagrangian-Eulerian numerical methodology.

  13. Design and Fabrication of Millimeter Wave Hexagonal Nano-Ferrite Circulator on Silicon CMOS Substrate

    NASA Astrophysics Data System (ADS)

    Oukacha, Hassan

    The rapid advancement of Complementary Metal Oxide Semiconductor (CMOS) technology has formed the backbone of the modern computing revolution, enabling the development of computationally intensive electronic devices that are smaller, faster, less expensive, and consume less power. This well-established technology has transformed the mobile computing and communications industries by providing high levels of system integration on a single substrate, high reliability and low manufacturing cost. The driving force behind this computing revolution is the scaling of semiconductor devices to smaller geometries, which has resulted in faster switching speeds and the promise of replacing traditional, bulky radio frequency (RF) components with miniaturized devices. Such devices play an important role in our society, enabling ubiquitous computing and on-demand data access. This thesis presents the design and development of a magnetic circulator component in a standard 180 nm CMOS process. The design approach involves integration of nanoscale ferrite materials on a CMOS chip to avoid using the bulky magnetic materials employed in conventional circulators. This device constitutes the next generation of broadband millimeter-wave circulators integrated in CMOS using ferrite materials operating in the 60 GHz frequency band. The unlicensed ultra-high frequency spectrum around 60 GHz offers many benefits: very high immunity to interference, high security, and frequency re-use. Results of both simulations and measurements are presented in this thesis. The presented results show the benefits of this technique and its potential for incorporating a complete system-on-chip (SoC) that includes a low noise amplifier, power amplifier, and antenna. This system-on-chip can be used in the same applications where the conventional circulator has been employed, including communication systems, radar systems, navigation and air traffic control, and military equipment. This range of applications shows how crucial the circulator is to many industries and the need for smaller, cost-effective RF components.

  14. Developing an Advanced Life Support System for the Flexible Path into Deep Space

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Kliss, Mark H.

    2010-01-01

    Long duration human missions beyond low Earth orbit, such as a permanent lunar base, an asteroid rendezvous, or exploring Mars, will use recycling life support systems to preclude supplying large amounts of metabolic consumables. The International Space Station (ISS) life support design provides a historic guiding basis for future systems, but both its system architecture and the subsystem technologies should be reconsidered. Different technologies for the functional subsystems have been investigated and some past alternates appear better for flexible path destinations beyond low Earth orbit. There is a need to develop more capable technologies that provide lower mass, increased closure, and higher reliability. A major objective of redesigning the life support system for the flexible path is achieving the maintainability and ultra-reliability necessary for deep space operations.

  15. Impact of coverage on the reliability of a fault tolerant computer

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1975-01-01

    A mathematical reliability model is established for a reconfigurable fault-tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
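
    As a concrete illustration of how coverage enters such a model (a generic sketch, not the paper's avionics model), consider a hot-standby pair of units, each failing at rate lambda, where the first failure is detected, isolated, and recovered from with coverage probability c; the system survives the mission if both units last, or if the first failure is covered and the remaining unit lasts the rest of the mission: R(t) = e^(-2*lambda*t) + 2c e^(-lambda*t)(1 - e^(-lambda*t)). The numbers below are assumptions.

      import math

      lam, t = 1e-4, 1000.0                    # unit failure rate [1/h] and mission time [h], assumed

      def duplex_reliability(c):
          # survive with both units, or survive the first (covered) failure on the spare
          return math.exp(-2 * lam * t) + 2 * c * math.exp(-lam * t) * (1 - math.exp(-lam * t))

      for c in (0.90, 0.99, 0.999, 1.0):
          print("coverage %.3f -> reliability %.6f" % (c, duplex_reliability(c)))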

  16. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.

  17. Free space optical ultra-wideband communications over atmospheric turbulence channels.

    PubMed

    Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu

    2010-08-02

    A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.

  18. kW-class direct diode laser for sheet metal cutting based on DWDM of pump modules by use of ultra-steep dielectric filters.

    PubMed

    Witte, U; Schneider, F; Traub, M; Hoffmann, D; Drovs, S; Brand, T; Unger, A

    2016-10-03

    A direct diode laser was built with > 800 W output power at 940 nm to 980 nm. The radiation is coupled into a 100 µm fiber, and the numerical aperture (NA) at the fiber output is 0.17. The laser system is based on pump modules that are wavelength stabilized by VBGs. Dense and coarse wavelength multiplexing are realized with commercially available ultra-steep dielectric filters. The electro-optical efficiency is above 30%. Based on a detailed analysis of losses, an improved e-o efficiency in the range of 40% to 45% is expected in the near future. System performance and reliability were demonstrated with sheet metal cutting tests on stainless steel with a thickness of 4.2 mm.

  19. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
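
    A plain Monte Carlo version of the idea (without MC-HARP's variance-reduction or fault/error-handling models, and with assumed parameters) is sketched below for a triple-modular-redundant system whose components have Weibull time-to-failure distributions.

      import random

      random.seed(3)
      shape, scale = 1.5, 5.0e4     # Weibull shape and scale [h], assumed
      mission = 1.0e4               # mission time [h], assumed

      def weibull_time():
          # random.weibullvariate takes (scale, shape)
          return random.weibullvariate(scale, shape)

      def tmr_fails():
          # TMR fails once two of its three modules have failed within the mission
          failures = sum(1 for _ in range(3) if weibull_time() < mission)
          return failures >= 2

      N = 200000
      pf = sum(tmr_fails() for _ in range(N)) / N
      print("estimated TMR unreliability at %g h: %.2e" % (mission, pf))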

  20. High-reliability computing for the smarter planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul; Manuzzato, Andrea

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows, and already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.

  1. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, John M.; Coffin, Peter; Robbins, Brian A.

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.

  2. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  3. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  4. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (Inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
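
    The flavor of an equation repository for basic redundancy schemes can be sketched as follows (a generic illustration, not the patented program itself): each entry is a closed-form reliability function of the unit reliability, and any system or mission parameter can be swept as the variable to produce a small table. The failure rate and mission times below are assumptions.

      import math
      from math import comb

      def simplex(r):        return r
      def tmr(r):            return 3 * r**2 - 2 * r**3           # 2-of-3 majority
      def k_of_n(r, k, n):   return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

      lam = 1e-4                                  # unit failure rate [1/h], assumed
      print(" t [h]   simplex     TMR      3-of-5")
      for t in (100, 1000, 5000, 10000):          # mission time swept as the variable
          r = math.exp(-lam * t)
          print("%6d  %.6f  %.6f  %.6f" % (t, simplex(r), tmr(r), k_of_n(r, 3, 5)))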

  5. A forward view on reliable computers for flight control

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Wensley, J. H.

    1976-01-01

    The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.

  6. Reliability/safety analysis of a fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goddman, H. A.

    1980-01-01

    An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.

  7. Examples of Nonconservatism in the CARE 3 Program

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1988-01-01

    This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed, and the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models which incorporate transient fault occurrences was not readily apparent. However, the source of much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.

  8. A Simplified Baseband Prefilter Model with Adaptive Kalman Filter for Ultra-Tight COMPASS/INS Integration

    PubMed Central

    Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

    2012-01-01

    COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since the ultra-tight GPS/INS (Inertial Navigation System) integration shows its advantage over independent GPS receivers in many scenarios, the federated ultra-tight COMPASS/INS integration has been investigated in this paper, particularly, by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as a measurement. Since the code tracking error related parameters were excluded from the state space of traditional prefilter models, the code/carrier divergence would destroy the carrier tracking process, and therefore an adaptive Kalman filter algorithm tuning process noise covariance matrix based on state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware COMPASS intermediate frequency (IF), and INS's accelerometers and gyroscopes signal sampling system. Field and simulation test results showed almost similar tracking and navigation performances for both the traditional prefilter model and the proposed system; however, the latter largely decreased the computational load. PMID:23012564
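
    The adaptive idea - tuning the process-noise covariance from the recent state-correction sequence - can be sketched with a toy two-state (carrier phase and frequency error) filter. This is an illustrative stand-in, not the authors' federated COMPASS/INS implementation; the dynamics, noise levels, and 50-sample adaptation window are assumptions.

      import numpy as np

      dt = 1e-3                                   # update interval [s], assumed
      F = np.array([[1.0, dt], [0.0, 1.0]])       # phase/frequency error transition
      H = np.array([[1.0, 0.0]])                  # discriminator observes phase error
      R = np.array([[0.05]])                      # measurement noise variance, assumed

      x = np.zeros(2)                             # [phase error, frequency error]
      P = np.eye(2)
      Q = 1e-6 * np.eye(2)
      corrections = []                            # recent state-correction history

      rng = np.random.default_rng(0)
      true_x = np.array([0.2, 0.5])
      for k in range(500):
          true_x = F @ true_x                     # simulated true dynamics
          z = H @ true_x + rng.normal(0.0, np.sqrt(R[0, 0]), 1)

          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          dx = (K @ (z - H @ x)).ravel()          # state correction
          x = x + dx
          P = (np.eye(2) - K @ H) @ P

          # adapt Q from the sample covariance of the recent state corrections
          corrections.append(dx)
          if len(corrections) > 50:
              corrections.pop(0)
              C = np.cov(np.array(corrections).T)
              Q = 0.9 * Q + 0.1 * C               # smoothed heuristic adaptation

      print("final state estimate:", x)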

  9. Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    White, Mark; MacNeal, Kristen; Cooper, Mark

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but also increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger scaled devices. The higher performance and lower operating and stand-by power characteristics of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet the low power consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.

  10. Transformational electronics: a powerful way to revolutionize our information world

    NASA Astrophysics Data System (ADS)

    Rojas, Jhonathan P.; Torres Sevilla, Galo A.; Ghoneim, Mohamed T.; Hussain, Aftab M.; Ahmed, Sally M.; Nassar, Joanna M.; Bahabry, Rabab R.; Nour, Maha; Kutbee, Arwa T.; Byas, Ernesto; Al-Saif, Bidoor; Alamri, Amal M.; Hussain, Muhammad M.

    2014-06-01

    With the emergence of cloud computation, we are facing the rising waves of big data. It is time to leverage this opportunity by increasing data usage both by man and machine. We need ultra-mobile computation with high data processing speed, ultra-large memory, energy efficiency and multi-functionality. Additionally, we have to deploy energy-efficient multi-functional 3D ICs for robust cyber-physical system establishment. To achieve such lofty goals we have to mimic the human brain, which is inarguably the world's most powerful and energy-efficient computer. The brain's cortex has a folded architecture to increase surface area in an ultra-compact space to contain its neurons and synapses. Therefore, it is imperative to overcome two integration challenges: (i) finding a low-cost 3D IC fabrication process and (ii) creating foldable substrates with ultra-large-scale integration of high-performance, energy-efficient electronics. Hence, we show a low-cost generic batch process based on trench-protect-peel-recycle to fabricate rigid and flexible 3D ICs as well as high-performance flexible electronics. As of today we have made every single component needed to make a fully flexible computer, including non-planar state-of-the-art FinFETs. Additionally we have demonstrated various solid-state memory, movable MEMS devices, and energy harvesting and storage components. To show the versatility of our process, we have extended it towards other inorganic semiconductor substrates such as silicon germanium and III-V materials. Finally, we report the first fully flexible programmable silicon-based microprocessor towards foldable brain computation and a wirelessly programmable, stretchable and flexible thermal patch for pain management for smart bionics.

  11. Enhancing ultra-high CPV passive cooling using least-material finned heat sinks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micheli, Leonardo, E-mail: lm409@exeter.ac.uk; Mallick, Tapas K., E-mail: T.K.Mallick@exeter.ac.uk; Fernandez, Eduardo F., E-mail: E.Fernandez-Fernandez2@exeter.ac.uk

    2015-09-28

    Ultra-high concentrating photovoltaic (CPV) systems aim to increase the cost-competitiveness of CPV by increasing concentrations above 2000 suns. In this work, the design of a heat sink for ultra-high concentrating photovoltaic (CPV) applications is presented. For the first time, the least-material approach, widely used in electronics to maximize thermal dissipation while minimizing the weight of the heat sink, has been applied in CPV. This method has the potential to further decrease the cost of this technology and to keep the multijunction cell within the operative temperature range. The design procedure is described in the paper and the results of a thermal simulation are shown to prove the reliability of the solution. A prediction of the costs is also reported: a cost of 0.151 $/Wp is expected for a passive least-material heat sink developed for 4000x applications.

  12. Fog-computing concept usage as means to enhance information and control system reliability

    NASA Astrophysics Data System (ADS)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-05-01

    This paper focuses on the reliability issue of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog-computing is to shift computations to the fog layer of the network, and thus to decrease the workload of the communication environment and data processing components. As for ICS, workload can also be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for one with fog-computing concept elements. The paper contains some models, selected simulation results and a conclusion about the prospects of fog-computing as a means to enhance ICS reliability.

  13. Massively parallel information processing systems for space applications

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1979-01-01

    NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.

  14. Definition and trade-off study of reconfigurable airborne digital computer system organizations

    NASA Technical Reports Server (NTRS)

    Conn, R. B.

    1974-01-01

    A highly reliable, fault-tolerant reconfigurable computer system for aircraft applications was developed. The development and application of reliability and fault-tolerance assessment techniques are described. Particular emphasis is placed on the needs of an all-digital, fly-by-wire control system appropriate for a passenger-carrying airplane.

  15. Care 3 model overview and user's guide, first revision

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Petersen, P. L.

    1985-01-01

    A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.

  16. Electrical breakdown detection system for dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Ghilardi, Michele; Busfield, James J. C.; Carpi, Federico

    2017-04-01

    Electrical breakdown of dielectric elastomer actuators (DEAs) is an issue that has to be carefully addressed when designing systems based on this novel technology. Indeed, in some systems electrical breakdown might have serious consequences, not only in terms of interruption of the desired function but also in terms of safety of the overall system (e.g. overheating and even burning). The risk of electrical breakdown often cannot be completely avoided by simply reducing the driving voltages, either because completely safe voltages might not generate sufficient actuation or because internal or external factors might change some properties of the actuator whilst in operation (for example the aging or fatigue of the material, or an externally imposed deformation decreasing the distance between the compliant electrodes). So, there is a clear need for reliable, simple and cost-effective detection systems that are able to acknowledge the occurrence of a breakdown event, making DEA-based devices able to monitor their status and become safer and "self-aware". Here a simple solution for a portable detection system is reported that is based on a voltage-divider configuration that detects the voltage drop at the DEA terminals and assesses the occurrence of breakdown via a microcontroller (Beaglebone Black single-board computer) combined with a real-time, ultra-low-latency processing unit (Bela cape, an open-source embedded platform developed at Queen Mary University of London). The system was used both to generate the control signal that drives the actuator and to constantly monitor the functionality of the actuator, detecting any breakdown event and discontinuing the supplied voltage accordingly, so as to obtain a safer controlled actuation. This paper presents preliminary tests of the detection system in different scenarios in order to assess its reliability.

  17. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations which enables estimation of probability of failure with significantly reduced number of samples than what is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree of freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.
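
    A minimal sketch of the Girsanov-based estimator (not the authors' code) for a single-degree-of-freedom oscillator under white-noise excitation: the simulation is run under a shifted excitation that promotes threshold crossings, and each sample is weighted by the corresponding Radon-Nikodym derivative so the failure-probability estimate remains unbiased. The oscillator parameters, failure threshold, and drift shift below are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 0.01, 2.0
      n = int(T / dt)
      omega, zeta, sigma = 2 * np.pi, 0.05, 1.0   # SDOF natural frequency, damping, noise scale (assumed)
      threshold = 2.2                             # failure level (assumed)
      u_shift = 1.2                               # constant drift added to the excitation

      def simulate(shift):
          x, v, log_lr = 0.0, 0.0, 0.0
          crossed = False
          for _ in range(n):
              dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment under the shifted measure
              v += (-2 * zeta * omega * v - omega**2 * x) * dt + sigma * (shift * dt + dw)
              x += v * dt
              log_lr += -shift * dw - 0.5 * shift**2 * dt   # log Radon-Nikodym derivative
              crossed = crossed or abs(x) > threshold
          return crossed, np.exp(log_lr)

      N = 2000
      samples = [simulate(u_shift) for _ in range(N)]
      pf = np.mean([lr if failed else 0.0 for failed, lr in samples])
      print("importance-sampling failure probability estimate: %.3e" % pf)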

  18. An approximation formula for a class of fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1986-01-01

    An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.

  19. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques are established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  20. Comparison of core-shell and totally porous ultra high performance liquid chromatographic stationary phases based on their selectivity towards alfuzosin compounds.

    PubMed

    Szulfer, Jarosław; Plenis, Alina; Bączek, Tomasz

    2014-06-13

    This paper focuses on the application of a column classification system based on the Katholieke Universiteit Leuven approach for the characterization of the physicochemical properties of core-shell and ultra-high performance liquid chromatographic stationary phases, followed by verification of the reliability of the obtained column classification in pharmaceutical practice. In the study, 7 stationary phases produced in core-shell technology and 18 ultra-high performance liquid chromatographic columns were chromatographically tested, and ranking lists were built on the FKUL-values calculated against two selected reference columns. In the column performance test, an analysis of alfuzosin in the presence of related substances was carried out using the brands of the stationary phases with the highest ranking positions. Next, a system suitability test as described by the European Pharmacopoeia monograph was performed. Moreover, a study was also performed to achieve a purposeful shortening of the analysis time of the compounds of interest using the selected stationary phases. Finally, it was checked whether methods using core-shell and ultra-high performance liquid chromatographic columns can be an interesting alternative to the high-performance liquid chromatographic method for the analysis of alfuzosin in pharmaceutical practice.

  1. An Acoustic Charge Transport Imager for High Definition Television Applications: Reliability Modeling and Parametric Yield Prediction of GaAs Multiple Quantum Well Avalanche Photodiodes. Degree awarded Oct. 1997

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.; Yun, Ilgu

    1994-01-01

    Reliability modeling and parametric yield prediction of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiodes (APDs), which are of interest as an ultra-low noise image capture mechanism for high definition systems, have been investigated. First, the effect of various doping methods on the reliability of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiode (APD) structures fabricated by molecular beam epitaxy is investigated. Reliability is examined by accelerated life tests by monitoring dark current and breakdown voltage. Median device lifetime and the activation energy of the degradation mechanism are computed for undoped, doped-barrier, and doped-well APD structures. Lifetimes for each device structure are examined via a statistically designed experiment. Analysis of variance shows that dark-current is affected primarily by device diameter, temperature and stressing time, and breakdown voltage depends on the diameter, stressing time and APD type. It is concluded that the undoped APD has the highest reliability, followed by the doped well and doped barrier devices, respectively. To determine the source of the degradation mechanism for each device structure, failure analysis using the electron-beam induced current method is performed. This analysis reveals some degree of device degradation caused by ionic impurities in the passivation layer, and energy-dispersive spectrometry subsequently verified the presence of ionic sodium as the primary contaminant. However, since all device structures are similarly passivated, sodium contamination alone does not account for the observed variation between the differently doped APDs. This effect is explained by the dopant migration during stressing, which is verified by free carrier concentration measurements using the capacitance-voltage technique.
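
    The activation-energy calculation used in such accelerated life testing can be illustrated with a short sketch (the median lifetimes below are hypothetical, not the paper's data): under an Arrhenius model t50 is proportional to exp(Ea / kT), so two stress temperatures give Ea = k * ln(t50_1 / t50_2) / (1/T1 - 1/T2), which can then be used to extrapolate the median lifetime at the use temperature.

      import math

      k_B = 8.617e-5                      # Boltzmann constant [eV/K]

      # hypothetical accelerated-life-test results (median lifetime at two stress temperatures)
      T1, t50_1 = 423.0, 1.2e3            # 150 C stress: 1200 h median lifetime
      T2, t50_2 = 473.0, 2.0e2            # 200 C stress:  200 h median lifetime

      Ea = k_B * math.log(t50_1 / t50_2) / (1.0 / T1 - 1.0 / T2)

      T_use = 328.0                       # 55 C use condition
      t50_use = t50_1 * math.exp((Ea / k_B) * (1.0 / T_use - 1.0 / T1))
      print("activation energy %.2f eV, extrapolated median lifetime %.1f h" % (Ea, t50_use))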

  2. Formal design and verification of a reliable computing platform for real-time control. Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.

    1990-01-01

    A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
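
    The benefit of replication with majority voting can be shown with a small sketch (an illustration, not part of the formal verification described above): if each of n replicated processors produces the correct frame result independently with probability p, the voted output is correct whenever a majority agrees, i.e. P = sum over i > n/2 of C(n, i) p^i (1-p)^(n-i). The probabilities below are assumptions.

      from math import comb

      def voter(outputs):
          # majority vote over replicated frame results
          return max(set(outputs), key=outputs.count)

      def majority_correct(p, n):
          # probability that more than half of n independent replicas are correct
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n // 2 + 1, n + 1))

      print(voter([42, 42, 41]))                       # -> 42
      for n in (1, 3, 5):
          print("n=%d: P(correct) = %.9f" % (n, majority_correct(0.999, n)))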

  3. Industrial WSN Based on IR-UWB and a Low-Latency MAC Protocol

    NASA Astrophysics Data System (ADS)

    Reinhold, Rafael; Underberg, Lisa; Wulf, Armin; Kays, Ruediger

    2016-07-01

    Wireless sensor networks for industrial communication require high reliability and low latency. As current wireless sensor networks do not entirely meet these requirements, novel system approaches need to be developed. Since ultra wideband communication systems seem to be a promising approach, this paper evaluates the performance of the IEEE 802.15.4 impulse-radio ultra-wideband physical layer and the IEEE 802.15.4 Low Latency Deterministic Network (LLDN) MAC for industrial applications. Novel approaches and system adaptions are proposed to meet the application requirements. In this regard, a synchronization approach based on circular average magnitude difference functions (CAMDF) and on a clean template (CT) is presented for the correlation receiver. An adapted MAC protocol titled aggregated low latency (ALL) MAC is proposed to significantly reduce the resulting latency. Based on the system proposals, a hardware prototype has been developed, which proves the feasibility of the system and visualizes the real-time performance of the MAC protocol.

  4. Sensitivity Analysis of ProSEDS (Propulsive Small Expendable Deployer System) Data Communication System

    NASA Technical Reports Server (NTRS)

    Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.

    1999-01-01

    This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, thereby ultimately achieving a reliable spacecraft data communications system. The sensitivity analysis approach of the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as a part of its data communication system, will be investigated. Also, general issues and possible approaches to a reliable Spacecraft On-Board Interconnection Network and Processor Array will be shown. The performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay and reliability, will be introduced and discussed.

  5. Versatile, low-cost, computer-controlled, sample positioning system for vacuum applications

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1991-01-01

    A versatile, low-cost, easy-to-implement, microprocessor-based motorized positioning system (MPS) suitable for accurate sample manipulation in a Secondary Ion Mass Spectrometry (SIMS) system, and for other ultra-high vacuum (UHV) applications, was designed and built at NASA LeRC. The system can be operated manually or under computer control. In the latter case, local as well as remote operation is possible via the IEEE-488 bus. The position of the sample can be controlled in three linear orthogonal coordinates and one angular coordinate.

  6. Ultra-short pulse delivery at high average power with low-loss hollow core fibers coupled to TRUMPF's TruMicro laser platforms for industrial applications

    NASA Astrophysics Data System (ADS)

    Baumbach, S.; Pricking, S.; Overbuschmann, J.; Nutsch, S.; Kleinbauer, J.; Gebs, R.; Tan, C.; Scelle, R.; Kahmann, M.; Budnicki, A.; Sutter, D. H.; Killi, A.

    2017-02-01

    Multi-megawatt ultrafast laser systems at micrometer wavelength are commonly used for material processing applications, including ablation, cutting and drilling of various materials or cleaving of display glass with excellent quality. There is a need for flexible and efficient beam guidance, avoiding free-space propagation of light between the laser head and the processing unit. Solid-core step-index fibers are only feasible for delivering laser pulses with peak powers in the kW regime due to the optical damage threshold of bulk silica. In contrast, hollow-core fibers are capable of guiding ultra-short laser pulses with orders of magnitude higher peak powers. This is possible since a micro-structured cladding confines the light within the hollow core and therefore minimizes the spatial overlap between silica and the electro-magnetic field. We report on recent results of single-mode ultra-short pulse delivery over several meters in a low-loss hollow-core fiber packaged with industrial connectors. TRUMPF's ultrafast TruMicro laser platforms equipped with advanced temperature control and precisely engineered opto-mechanical components provide excellent position and pointing stability. They are thus perfectly suited for passive coupling of ultra-short laser pulses into hollow-core fibers. Neither active beam launching components nor beam trackers are necessary for a reliable beam delivery in a space- and cost-saving packaging. Long-term tests with weeks of stable operation, excellent beam quality and an overall transmission efficiency above 85 percent even at high average power confirm the reliability for industrial applications.

  7. Digital avionics design and reliability analyzer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate level will be utilized. The primary benefit of emulation to reliability analysis is that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. A trade study leads to the decision to specify a two-machine system, consisting of an emulation computer connected to a general-purpose computer. Potential computers to serve as the emulation computer are also evaluated.
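
    As a rough illustration of gate-level emulation with direct fault insertion, the sketch below evaluates a tiny, hypothetical netlist with and without a stuck-at fault and reports which input patterns expose the fault. It is a toy example of the technique, not the analyzer specified in the report:

        # Minimal sketch of gate-level emulation with direct fault insertion.
        # The netlist and fault are hypothetical illustrations.
        from itertools import product

        # A tiny netlist: name -> (gate, inputs). Primary inputs: a, b, c.
        NETLIST = {
            "n1": ("AND", ["a", "b"]),
            "n2": ("OR",  ["n1", "c"]),
            "out": ("NOT", ["n2"]),
        }

        def evaluate(inputs, stuck_at=None):
            """Evaluate the netlist; stuck_at = (net, value) forces a fault."""
            values = dict(inputs)
            for net, (gate, ins) in NETLIST.items():
                v = [values[i] for i in ins]
                out = {"AND": all(v), "OR": any(v), "NOT": not v[0]}[gate]
                if stuck_at and stuck_at[0] == net:
                    out = stuck_at[1]          # inject the fault at this net
                values[net] = int(out)
            return values["out"]

        # Compare fault-free and faulty behaviour over all input patterns.
        for a, b, c in product([0, 1], repeat=3):
            good = evaluate({"a": a, "b": b, "c": c})
            bad = evaluate({"a": a, "b": b, "c": c}, stuck_at=("n1", 1))
            if good != bad:
                print(f"pattern {(a, b, c)} detects n1 stuck-at-1")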

  8. Design of ultra high performance concrete as an overlay in pavements and bridge decks.

    DOT National Transportation Integrated Search

    2014-08-01

    The main objective of this research was to develop ultra-high performance concrete (UHPC) as a reliable, economic, low carbon footprint and durable concrete overlay material that can offer shorter traffic closures due to faster construction. The U...

  9. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time-to-failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support a Weibull distribution with decreasing failure rate and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their use in a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
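
    A minimal sketch of the two ingredients named in the title follows: a Weibull time-to-failure model and a Monte Carlo failure-probability estimate sharpened by importance sampling. All parameter values are assumptions chosen for illustration, not figures from the GN&C study:

        # Hedged sketch: Monte Carlo estimation of a Weibull failure probability,
        # with simple importance sampling, checked against the closed form.
        import numpy as np

        rng = np.random.default_rng(0)
        beta, eta = 0.8, 2.0e6     # shape < 1 -> decreasing failure rate (assumed)
        t_mission = 1.0e3          # mission time in hours (assumed)

        def weibull_pdf(t, beta, eta):
            return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-(t / eta) ** beta)

        # Exact answer for reference: F(t) = 1 - exp(-(t/eta)^beta)
        exact = 1.0 - np.exp(-(t_mission / eta) ** beta)

        # Plain Monte Carlo: few samples fall below the mission time.
        n = 100_000
        t_plain = eta * rng.weibull(beta, n)
        est_plain = np.mean(t_plain < t_mission)

        # Importance sampling: draw from a Weibull with a much smaller scale,
        # then reweight each sample by the likelihood ratio.
        eta_bias = 5.0e3
        t_bias = eta_bias * rng.weibull(beta, n)
        weights = weibull_pdf(t_bias, beta, eta) / weibull_pdf(t_bias, beta, eta_bias)
        est_is = np.mean((t_bias < t_mission) * weights)

        print(f"exact={exact:.3e}  plain MC={est_plain:.3e}  IS={est_is:.3e}")

    Because the biased distribution concentrates samples in the early-failure region, the importance-sampling estimate reaches a given accuracy with far fewer samples than plain Monte Carlo when the mission-time failure probability is small.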

  10. Superior model for fault tolerance computation in designing nano-sized circuit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my

    2014-10-24

    As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability instantly and superiorly is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, by using the developed automated tool, the paper explores a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
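
    The sketch below shows the flavor of a probabilistic-gate-model style reliability computation on a tiny, made-up NAND circuit: every gate output is flipped independently with probability eps, and the circuit reliability is the probability that the primary output still matches the fault-free value, averaged over uniform inputs. It is a toy illustration of the modeling idea, not the Matlab tool described in the record:

        # Hedged sketch of a probabilistic gate model on a 3-gate NAND circuit.
        from itertools import product

        GATES = [("n1", "NAND", ("a", "b")),
                 ("n2", "NAND", ("b", "c")),
                 ("out", "NAND", ("n1", "n2"))]

        def run(inputs, errors):
            v = dict(inputs)
            for (name, _, ins), err in zip(GATES, errors):
                ideal = 1 - (v[ins[0]] & v[ins[1]])          # NAND
                v[name] = ideal ^ err                        # flip if gate erred
            return v["out"]

        def reliability(eps):
            total = 0.0
            for a, b, c in product([0, 1], repeat=3):        # uniform inputs
                golden = run({"a": a, "b": b, "c": c}, (0, 0, 0))
                for errs in product([0, 1], repeat=len(GATES)):
                    p = 1.0
                    for e in errs:
                        p *= eps if e else (1 - eps)
                    if run({"a": a, "b": b, "c": c}, errs) == golden:
                        total += p / 8.0                     # 8 input patterns
            return total

        for eps in (0.01, 0.05, 0.1):
            print(f"gate error {eps:.2f} -> circuit reliability {reliability(eps):.4f}")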

  11. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied, and how well a user can model systems with it is investigated. An important objective is to assess how user-friendly the program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course, no answer can be more accurate than the fidelity of the model; thus an Appendix is included that discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
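
    For readers unfamiliar with what tools in the HARP/SURE family compute, the sketch below builds the smallest useful example: a continuous-time Markov reliability model of a 2-of-3 (triple modular redundant) processor set without repair, solved numerically and checked against the textbook closed form. The failure rate and mission time are assumed values, and this is only an illustration of the kind of model such tools solve, not HARP itself:

        # Hedged sketch: tiny Markov reliability model of a 2-of-3 processor set.
        import numpy as np
        from scipy.linalg import expm

        lam = 1e-4          # per-processor failure rate, 1/hour (assumed)
        t = 1000.0          # mission time in hours (assumed)

        # States: 0 = three good, 1 = two good, 2 = system failed (absorbing).
        Q = np.array([[-3 * lam,  3 * lam,      0.0],
                      [     0.0, -2 * lam,  2 * lam],
                      [     0.0,      0.0,      0.0]])

        p0 = np.array([1.0, 0.0, 0.0])
        p_t = p0 @ expm(Q * t)                  # state probabilities at time t
        reliability_markov = p_t[0] + p_t[1]    # probability of not having failed

        R = np.exp(-lam * t)                    # single-processor reliability
        reliability_closed = 3 * R**2 - 2 * R**3

        print(f"Markov model: {reliability_markov:.6f}")
        print(f"Closed form : {reliability_closed:.6f}")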

  12. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
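
    The sketch below mimics the experiment's two main devices on a toy scale: repetitive runs over random inputs, with an n-version vote used as the error-detection oracle. The three "versions" are stand-ins with injected fault rates, not the radar-tracking implementations from the study:

        # Hedged sketch: n-version comparison plus repetitive-run rate estimation.
        import random

        random.seed(1)

        def make_version(fault_rate):
            """A toy program version that returns a wrong answer on a random
            fraction of runs, mimicking an implementation fault."""
            def version(x):
                if random.random() < fault_rate:
                    return x * x + 1          # faulty result
                return x * x
            return version

        versions = [make_version(r) for r in (0.001, 0.01, 0.05)]
        errors = [0] * len(versions)
        runs = 100_000

        for _ in range(runs):                  # repetitive-run modeling
            x = random.randint(0, 1000)
            answers = [v(x) for v in versions]
            majority = max(set(answers), key=answers.count)   # n-version vote
            for i, a in enumerate(answers):
                if a != majority:              # disagreement flags a likely error
                    errors[i] += 1

        for i, e in enumerate(errors):
            print(f"version {i}: estimated error rate {e / runs:.4f}")

    Note that when two versions fail on the same input with the same wrong answer, the vote itself is fooled; that is the kind of interacting-fault behavior the abstract raises as an open question.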

  13. Position-insensitive long range inductive power transfer

    NASA Astrophysics Data System (ADS)

    Kwan, Christopher H.; Lawson, James; Yates, David C.; Mitcheson, Paul D.

    2014-11-01

    This paper presents results of an improved inductive wireless power transfer system for reliable long range powering of sensors with milliwatt-level consumption. An ultra-low power flyback impedance emulator operating in open loop is used to present the optimal load to the receiver's resonant tank. Transmitter power modulation is implemented in order to maintain constant receiver power and to prevent damage to the receiver electronics caused by excessive received voltage. Received power is steady up to 3 m at around 30 mW. The receiver electronics and feedback system consumes 3.1 mW and so with a transmitter input power of 163.3 W the receiver becomes power neutral at 4.75 m. Such an IPT system can provide a reliable alternative to energy harvesters for supplying power concurrently to multiple remote sensors.

  14. Comparative performance analysis for computer aided lung nodule detection and segmentation on ultra-low-dose vs. standard-dose CT

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Rogalla, Patrik; Opfer, Roland; Ekin, Ahmet; Romano, Valentina; Bülow, Thomas

    2006-03-01

    The performance of computer aided lung nodule detection (CAD) and computer aided nodule volumetry is compared between standard-dose (70-100 mAs) and ultra-low-dose CT images (5-10 mAs). A direct quantitative performance comparison was possible, since for each patient both an ultra-low-dose and a standard-dose CT scan were acquired within the same examination session. The data sets were recorded with a multi-slice CT scanner at the Charité university hospital in Berlin with 1 mm slice thickness. Our computer aided nodule detection and segmentation algorithms were deployed on both ultra-low-dose and standard-dose CT data without any dose-specific fine-tuning or preprocessing. As a reference standard, 292 nodules from 20 patients were visually identified, each nodule in both the ultra-low-dose and the standard-dose data sets. The CAD performance was analyzed by means of multiple FROC curves for different lower thresholds of the nodule diameter. For nodules with a volume-equivalent diameter equal to or larger than 4 mm (149 nodule pairs), we observed a detection rate of 88% at a median false positive rate of 2 per patient in standard-dose images, and an 86% detection rate in ultra-low-dose images, also at 2 FPs per patient. Including even smaller nodules equal to or larger than 2 mm (272 nodule pairs), we observed a detection rate of 86% in standard-dose images, and an 84% detection rate in ultra-low-dose images, both at a rate of 5 FPs per patient. Moreover, we observed a correlation of 94% between the volume-equivalent nodule diameter as automatically measured on ultra-low-dose versus on standard-dose images, indicating that ultra-low-dose CT is also feasible for growth-rate assessment in follow-up examinations. The comparable performance of lung nodule CAD in ultra-low-dose and standard-dose images is of particular interest with respect to lung cancer screening of asymptomatic patients.

  15. Reliability techniques for computer executive programs

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Computer techniques for increasing the stability and reliability of executive and supervisory systems were studied. Program segmentation characteristics are discussed along with a validation system which is designed to retain the natural top down outlook in coding. An analysis of redundancy techniques and roll back procedures is included.

  16. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses, with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with few-cycle pulses and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and decays more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with a gradually changing phase good candidates for implementing quantum computation and coherent information processing schemes.

  17. Development of a 32 Inch Diameter Levitated Ducted Fan Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.; Gallo, Christopher A.; Solano, Paul A.; Thompson, William K.; Vrnak, Daniel R.

    2006-01-01

    The NASA John H. Glenn Research Center has developed a revolutionary 32 in. diameter Levitated Ducted Fan (LDF) conceptual design. The objective of this work is to develop a viable non-contact propulsion system utilizing Halbach arrays for all-electric flight, and many other applications. This concept will help to reduce harmful emissions, reduce the Nation's dependence on fossil fuels, and mitigate many of the concerns and limitations encountered in conventional aircraft propulsors. The physical layout consists of a ducted fan drum rotor with blades attached at the outer diameter and supported by a stress tuner ring at the inner diameter. The rotor is contained within a stator. This concept exploits the unique physical dimensions and large available surface area to optimize a custom, integrated, electromagnetic system that provides both the levitation and propulsion functions. The rotor is driven by modulated electromagnetic fields between the rotor and the stator. When set in motion, the time varying magnetic fields interact with passive coils in the stator assembly to produce repulsive forces between the stator and the rotor providing magnetic suspension. LDF can provide significant improvements in aviation efficiency, reliability, and safety, and has potential application in ultra-efficient motors, computers, and space power systems.

  18. 125Mbps ultra-wideband system evaluation for cortical implant devices.

    PubMed

    Luo, Yi; Winstead, Chris; Chiang, Patrick

    2012-01-01

    This paper evaluates the performance of a 125 Mbps Impulse Radio Ultra-Wideband (IR-UWB) system for cortical implant devices using a low-Q inductive coil link operating in the near-field domain. We examine design tradeoffs between transmitted signal amplitude, reliability, noise and clock jitter. The IR-UWB system is modeled using measured parameters from a reported UWB transceiver implemented in 90nm-CMOS technology. Non-optimized inductive coupling coils with low Q values for near-field data transmission are modeled in order to build a full channel from the transmitter (Tx) to the receiver (Rx). On-off keying (OOK) modulation is used together with a low-complexity convolutional error correcting code. The simulation results show that even though the low-Q coils decrease the amplitude of the received pulses, the UWB system can still achieve acceptable performance when error correction is used. These results predict that UWB is a good candidate for delivering high data rates in cortical implant devices.
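
    A minimal sketch of the amplitude-versus-noise tradeoff for OOK detection is given below: bits are sent as pulse/no-pulse, corrupted by additive Gaussian noise standing in for channel noise and jitter, and decided with a mid-point threshold. The amplitude and noise figures are assumptions, not the measured coil-link parameters:

        # Hedged sketch of OOK detection over an additive-noise channel.
        import numpy as np

        rng = np.random.default_rng(0)
        n_bits = 200_000
        bits = rng.integers(0, 2, n_bits)

        amplitude = 1.0          # received pulse amplitude for a "1" (assumed)
        noise_sigma = 0.35       # channel + jitter noise, std dev (assumed)

        received = bits * amplitude + rng.normal(0.0, noise_sigma, n_bits)
        decided = (received > amplitude / 2).astype(int)   # mid-point threshold

        ber = np.mean(decided != bits)
        print(f"raw BER ~ {ber:.4f}")

    The raw bit error rate found this way is what the low-complexity convolutional code would then be expected to reduce further.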

  19. An assessment of the real-time application capabilities of the SIFT computer system

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  20. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  1. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.

  2. PICSiP: new system-in-package technology using a high bandwidth photonic interconnection layer for converged microsystems

    NASA Astrophysics Data System (ADS)

    Tekin, Tolga; Töpper, Michael; Reichl, Herbert

    2009-05-01

    Technological frontiers between semiconductor technology, packaging, and system design are disappearing. Scaling down geometries [1] alone does not provide improvement of performance, less power, smaller size, and lower cost. It will require "More than Moore" [2] through the tighter integration of system level components at the package level. System-in-Package (SiP) will deliver the efficient use of three dimensions (3D) through innovation in packaging and interconnect technology. A key bottleneck to the implementation of high-performance microelectronic systems, including SiP, is the lack of low-latency, high-bandwidth, and high density off-chip interconnects. Some of the challenges in achieving high-bandwidth chip-to-chip communication using electrical interconnects include the high losses in the substrate dielectric, reflections and impedance discontinuities, and susceptibility to crosstalk [3]. Obviously, the incentive for the use of photonics to overcome these challenges and leverage low-latency and high-bandwidth communication will enable the vision of optical computing within next generation architectures. Supercomputers of today offer sustained performance of more than petaflops, which can be increased by utilizing optical interconnects. Next generation computing architectures with ultra-low power consumption and ultra-high performance are needed, enabled by novel interconnection technologies. In this paper we discuss a CMOS compatible underlying technology to enable next generation optical computing architectures. By introducing a new optical layer within the 3D SiP, the development of converged microsystems and their deployment in next generation optical computing architectures will be enabled.

  3. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    NASA Technical Reports Server (NTRS)

    Migneault, Gerard E.

    1987-01-01

    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  4. GLS-Finder: An Automated Data-Mining System for Fast Profiling Glucosinolates and its Application in Brassica Vegetables

    USDA-ARS?s Scientific Manuscript database

    A rapid computer-aided program for profiling glucosinolates, "GLS-Finder", was developed. GLS-Finder is a Matlab-script-based expert system capable of qualitative and semi-quantitative analysis of glucosinolates in samples using data generated by ultra-high performance liquid chromatograph...

  5. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  6. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
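
    As a rough illustration of the block/thread configuration step that the TLPOM performs, the sketch below picks the block size that maximizes resident threads per streaming multiprocessor under register and shared-memory limits. The device limits and kernel requirements are invented for illustration, and this simple occupancy heuristic is far cruder than the paper's model:

        # Hedged sketch: choose a block size under per-SM resource constraints.
        DEVICE = {"max_threads_per_sm": 2048, "max_blocks_per_sm": 32,
                  "registers_per_sm": 65536, "shared_mem_per_sm": 98304}
        KERNEL = {"registers_per_thread": 40, "shared_mem_per_block": 8192}

        def resident_threads(block_size):
            by_threads = DEVICE["max_threads_per_sm"] // block_size
            by_blocks = DEVICE["max_blocks_per_sm"]
            by_regs = DEVICE["registers_per_sm"] // (KERNEL["registers_per_thread"] * block_size)
            by_smem = DEVICE["shared_mem_per_sm"] // KERNEL["shared_mem_per_block"]
            blocks = min(by_threads, by_blocks, by_regs, by_smem)
            return blocks * block_size      # resident threads per SM

        candidates = range(64, 1025, 32)
        best = max(candidates, key=resident_threads)
        print(f"best block size: {best} threads "
              f"({resident_threads(best)} resident threads per SM)")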

  7. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
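
    The sketch below shows the general shape of an architecture-based (Cheung-style) reliability estimate, with the tool decomposed into components whose control-flow transitions form a Markov chain. All component names, transition probabilities and per-component reliabilities are invented for illustration and are not taken from the case study:

        # Hedged sketch: Markov-chain, architecture-based reliability estimate.
        import numpy as np

        components = ["acquire", "parse", "index", "report"]
        R = {"acquire": 0.999, "parse": 0.995, "index": 0.990, "report": 0.998}

        # Control-flow transition probabilities between components
        # ("report" transfers control to the exit state with probability 1).
        P = np.array([[0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.8, 0.2],
                      [0.0, 0.0, 0.0, 1.0],
                      [0.0, 0.0, 0.0, 0.0]])

        # Scale each transition by the source component's reliability; the tool
        # reliability is the probability of reaching the exit without a failure.
        Rvec = np.array([R[c] for c in components])
        P_hat = P * Rvec[:, None]
        exit_prob = 1.0 - P.sum(axis=1)                       # exit from each component
        N = np.linalg.inv(np.eye(len(components)) - P_hat)    # expected-visit matrix
        tool_reliability = N[0] @ (Rvec * exit_prob)          # start in "acquire"

        print(f"estimated tool reliability ~ {tool_reliability:.4f}")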

  8. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  9. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
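
    As a pocket-sized illustration of what a probabilistic structural response calculation produces, the sketch below runs a generic stress-strength Monte Carlo for the limit state g = R - S with assumed distributions. It shows only the kind of quantity such codes compute, not the NESSUS algorithms themselves (which also include fast probability integration):

        # Hedged sketch: Monte Carlo probability of failure for g = R - S.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000

        strength = rng.normal(500.0, 40.0, n)             # resistance R, MPa (assumed)
        stress = rng.lognormal(np.log(300.0), 0.15, n)    # load effect S, MPa (assumed)

        g = strength - stress                             # limit state g = R - S
        pf = np.mean(g < 0.0)                             # probability of failure
        print(f"P(failure) ~ {pf:.2e},  reliability ~ {1 - pf:.6f}")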

  10. Co-detection: ultra-reliable nanoparticle-based electrical detection of biomolecules in the presence of large background interference.

    PubMed

    Liu, Yang; Gu, Ming; Alocilja, Evangelyn C; Chakrabartty, Shantanu

    2010-11-15

    An ultra-reliable technique for detecting trace quantities of biomolecules is reported. The technique called "co-detection" exploits the non-linear redundancy amongst synthetically patterned biomolecular logic circuits for deciphering the presence or absence of target biomolecules in a sample. In this paper, we verify the "co-detection" principle on gold-nanoparticle-based conductimetric soft-logic circuits which use a silver-enhancement technique for signal amplification. Using co-detection, we have been able to demonstrate a great improvement in the reliability of detecting mouse IgG at concentration levels that are 10^5 lower than the concentration of rabbit IgG which serves as background interference. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. High Resolution X-Ray Micro-CT of Ultra-Thin Wall Space Components

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, R. W.; Bowman, Randy R.; Bonacuse, Peter; Martin, Richard E.; Locci, I. E.; Kelley, M.

    2012-01-01

    A high resolution micro-CT system has been assembled and is being used to provide optimal characterization for ultra-thin wall space components. The Glenn Research Center NDE Sciences Team, using this CT system, has assumed the role of inspection vendor for the Advanced Stirling Convertor (ASC) project at NASA. This article will discuss many aspects of the development of the CT scanning for this type of component, including CT system overview; inspection requirements; process development, software utilized and developed to visualize, process, and analyze results; calibration sample development; results on actual samples; correlation with optical/SEM characterization; CT modeling; and development of automatic flaw recognition software. Keywords: Nondestructive Evaluation, NDE, Computed Tomography, Imaging, X-ray, Metallic Components, Thin Wall Inspection

  12. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing fault tolerance in the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a central graphical user interface.

  13. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on its quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time, PASS's development history, and other data that point to the reliability of the system's development. The reliability of the system is also compared to its predicted reliability.

  14. Formal design and verification of a reliable computing platform for real-time control. Phase 2: Results

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.

    1992-01-01

    The design and formal verification of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, is presented. The RCP uses N-Modular Redundant (NMR) style redundancy to mask faults and internal majority voting to flush the effects of transient faults. The system is formally specified and verified using the Ehdm verification system. A major goal of this work is to provide the system with significant capability to withstand the effects of High Intensity Radiated Fields (HIRF).
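
    A minimal sketch of the fault-masking idea behind NMR voting follows: an exact-match majority vote over redundant channel outputs, so a single corrupted value is outvoted. It is a generic illustration, not the formally verified RCP voter:

        # Hedged sketch of NMR-style fault masking with an exact-match majority vote.
        from collections import Counter

        def majority_vote(values):
            """Return the majority value, or None if no strict majority exists."""
            value, count = Counter(values).most_common(1)[0]
            return value if count > len(values) // 2 else None

        # Three redundant flight-control channels; one delivers a corrupted value.
        channels = [42.0, 42.0, 17.3]
        print(majority_vote(channels))        # -> 42.0 (the transient is masked)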

  15. Circuit design advances for ultra-low power sensing platforms

    NASA Astrophysics Data System (ADS)

    Wieckowski, Michael; Dreslinski, Ronald G.; Mudge, Trevor; Blaauw, David; Sylvester, Dennis

    2010-04-01

    This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained computing segments including energy constrained sensing platforms. This paper explores the barriers to the adoption of NTC and describes current work aimed at overcoming these obstacles in the circuit design space.
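
    The first-order argument for near-threshold operation is that dynamic switching energy scales roughly with C·V², so lowering the supply toward the threshold voltage cuts switching energy sharply. The sketch below works that arithmetic with assumed, illustrative values; the performance and variability penalties the abstract mentions are exactly what this simple model leaves out:

        # Hedged sketch of the C*V^2 dynamic-energy argument for NTC.
        C_eff = 20e-12        # effective switched capacitance per operation, F (assumed)
        V_nominal = 1.1       # nominal supply voltage, V (assumed)
        V_ntc = 0.5           # near-threshold supply voltage, V (assumed)

        E_nominal = C_eff * V_nominal ** 2
        E_ntc = C_eff * V_ntc ** 2

        print(f"energy/op at {V_nominal} V : {E_nominal * 1e12:.1f} pJ")
        print(f"energy/op at {V_ntc} V : {E_ntc * 1e12:.1f} pJ")
        print(f"dynamic-energy saving  : {100 * (1 - E_ntc / E_nominal):.0f}%")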

  16. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.

  17. A Novel Approach to Photonic Generation and Modulation of Ultra-Wideband Pulses

    NASA Astrophysics Data System (ADS)

    Xiang, Peng; Guo, Hao; Chen, Dalei; Zhu, Huatao

    2016-01-01

    A novel approach to photonic generation of ultra-wideband (UWB) signals is proposed in this paper. The proposed signal generator is capable of generating UWB doublet pulses with flexible reconfigurability, and many different pulse modulation formats, including the commonly used pulse-position modulation (PPM) and bi-phase modulation (BPM) can be realized. Moreover, the photonic UWB pulse generator is capable of generating UWB signals with a tunable spectral notch-band, which is desirable to realize the interference avoidance between UWB and other narrow band systems, such as Wi-Fi. A mathematical model describing the proposed system is developed and the generation of UWB signals with different modulation formats is demonstrated via computer simulations.

  18. Virtual Colonoscopy Screening With Ultra Low-Dose CT and Less-Stressful Bowel Preparation: A Computer Simulation Study

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wang, Su; Li, Lihong; Fan, Yi; Lu, Hongbing; Liang, Zhengrong

    2008-10-01

    Computed tomography colonography (CTC) or CT-based virtual colonoscopy (VC) is an emerging tool for detection of colonic polyps. Compared to conventional fiber-optic colonoscopy, VC has demonstrated the potential to become a mass screening modality in terms of safety, cost, and patient compliance. However, current CTC delivers excessive X-ray radiation to the patient during data acquisition. The radiation is a major concern for screening applications of CTC. In this work, we performed a simulation study to demonstrate a possible ultra-low-dose CT technique for VC. The ultra-low-dose abdominal CT images were simulated by adding noise to the sinograms of the patient CTC images acquired with normal-dose scans at the 100 mAs level. The simulated noisy sinogram or projection data were first processed by a Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) restoration method and then reconstructed by a filtered backprojection algorithm for the ultra-low-dose CT images. The patient-specific virtual colon lumen was constructed and navigated by a VC system after electronic colon cleansing of the orally-tagged residue stool and fluid. With the KL-PWLS noise reduction, the colon lumen can successfully be constructed and colonic polyps can be detected at an ultra-low-dose level below 50 mAs. Polyps can be detected more easily with the KL-PWLS noise reduction than with conventional noise filters, such as the Hanning filter. These promising results indicate the feasibility of an ultra-low-dose CTC pipeline for colon screening with less-stressful bowel preparation by fecal tagging with oral contrast.
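
    One common way to obtain simulated low-dose data from normal-dose sinograms is to convert the line integrals to expected photon counts at a reduced incident flux, add Poisson and electronic noise, and convert back. The sketch below shows that generic recipe with placeholder data and assumed parameters; it is not the specific noise model or the KL-PWLS restoration used in the study:

        # Hedged sketch: simulate low-dose projections from normal-dose sinograms.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_low_dose(sinogram, I0_low=2.0e4, electronic_sigma=10.0):
            """sinogram: array of line integrals (unitless attenuation)."""
            counts = I0_low * np.exp(-sinogram)                  # expected photon counts
            noisy = rng.poisson(counts) + rng.normal(0.0, electronic_sigma, sinogram.shape)
            noisy = np.clip(noisy, 1.0, None)                    # avoid log of <= 0
            return np.log(I0_low / noisy)                        # back to line integrals

        normal_dose_sino = rng.uniform(0.0, 4.0, size=(180, 256))  # placeholder data
        low_dose_sino = simulate_low_dose(normal_dose_sino)
        print("added noise std:", float(np.std(low_dose_sino - normal_dose_sino)))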

  19. Novel Ruggedized Packaging Technology for VCSELs

    DTIC Science & Technology

    2017-03-01

    Novel Ruggedized Packaging Technology for VCSELs. Charlie Kuznia (ckuznia@ultracomm-inc.com), Ultra Communications, Inc., Vista, CA, USA, 92081. ... achieve low-power, EMI-immune links within high-performance military computing and sensor systems. Figure 1. Chip-scale packaging of ...

  20. Verification Methodology of Fault-tolerant, Fail-safe Computers Applied to MAGLEV Control Computer Systems

    DOT National Transportation Integrated Search

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...

  1. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal-to-electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

  2. The 747 primary flight control systems reliability and maintenance study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and cost will provide a baseline for use in trade studies of future flight control system design.

  3. Electronic structure and optical properties of metal doped tetraphenylporphyrins

    NASA Astrophysics Data System (ADS)

    Shah, Esha V.; Roy, Debesh R.

    2018-05-01

    A density functional study of the structure, electronic and optical properties of metal doped tetraphenylporphyrins MTPP (M=Fe, Co, Ni) is performed. The structural stability of the molecules is evaluated based on electronic parameters such as the HOMO-LUMO gap (HLG), chemical hardness (η) and the binding energy of the central metal atom to the molecular frame. The computed ultraviolet-visible (UV-Vis) optical absorption spectra for all the compounds are also compared. The molecular structures reported are the lowest energy configurations. The entire set of calculations is carried out with a widely used and reliable functional, viz. B3LYP, with a popular basis set which includes a scalar relativistic effect, viz. LANL2DZ.

  4. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  5. [An ultra-low power, wearable, long-term ECG monitoring system with mass storage].

    PubMed

    Liu, Na; Chen, Yingmin; Zhang, Wenzan; Luo, Zhangyuan; Jin, Xun; Ying, Weihai

    2012-01-01

    In this paper, we describe an ultra-low power, wearable ECG system capable of long-term monitoring with mass storage. The system is based on the Microchip PIC18F27J13, chosen for its high level of integration and low power consumption. Communication with the micro-SD card is achieved through the SPI bus. Through USB, the system can be connected to a computer for replay and disease diagnosis. Given its low power consumption, lithium cells can support continuous ECG acquisition and storage for up to 15 days. Meanwhile, the wearable electrodes avoid the pain and possible risks of implantation. Besides, the small size of the system makes long-term wear possible for patients and meets the needs of long-term dynamic monitoring and mass storage.

  6. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  7. Comparison of a Commercial Accelerometer with Polysomnography and Actigraphy in Children and Adolescents

    PubMed Central

    Meltzer, Lisa J.; Hiruma, Laura S.; Avis, Kristin; Montgomery-Downs, Hawley; Valentin, Judith

    2015-01-01

    Study Objectives: To evaluate the reliability and validity of the commercially available Fitbit Ultra (2012) accelerometer compared to polysomnography (PSG) and two different actigraphs in a pediatric sample. Design and Setting: All subjects wore the Fitbit Ultra while undergoing overnight clinical polysomnography in a sleep laboratory; a randomly selected subset of participants also wore either the Ambulatory Monitoring Inc. Motionlogger Sleep Watch (AMI) or Phillips-Respironics Mini-Mitter Spectrum (PRMM). Participants: 63 youth (32 females, 31 males), ages 3–17 years (mean 9.7 years, SD 4.6 years). Measurements: Both “Normal” and “Sensitive” sleep-recording Fitbit Ultra modes were examined. Outcome variables included total sleep time (TST), wake after sleep onset (WASO), and sleep efficiency (SE). Primary analyses examined the differences between Fitbit Ultra and PSG using repeated-measures ANCOVA, with epoch-by-epoch comparisons between Fitbit Ultra and PSG used to determine sensitivity, specificity, and accuracy. Intra-device reliability, differences between Fitbit Ultra and actigraphy, and differences by both developmental age group and sleep disordered breathing (SDB) status were also examined. Results: Compared to PSG, the Normal Fitbit Ultra mode demonstrated good sensitivity (0.86) and accuracy (0.84), but poor specificity (0.52); conversely, the Sensitive Fitbit Ultra mode demonstrated adequate specificity (0.79), but inadequate sensitivity (0.70) and accuracy (0.71). Compared to PSG, the Fitbit Ultra significantly overestimated TST (41 min) and SE (8%) in Normal mode, and underestimated TST (105 min) and SE (21%) in Sensitive mode. Similar differences were found between Fitbit Ultra (both modes) and both brands of actigraphs. Conclusions: Despite its low cost and ease of use for consumers, neither sleep-recording mode of the Fitbit Ultra accelerometer provided clinically comparable results to PSG. Further, pediatric sleep researchers and clinicians should be cautious about substituting these devices for validated actigraphs, with a significant risk of either overestimating or underestimating outcome data including total sleep time and sleep efficiency. Citation: Meltzer LJ, Hiruma LS, Avis K, Montgomery-Downs H, Valentin J. Comparison of a commercial accelerometer with polysomnography and actigraphy in children and adolescents. SLEEP 2015;38(8):1323–1330. PMID:26118555
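
    The epoch-by-epoch comparison behind the sensitivity, specificity and accuracy figures reduces to counting agreements between device and PSG sleep/wake scores. The sketch below shows the computation on a toy sequence of ten 30-second epochs (made-up values, not the study data):

        # Hedged sketch: epoch-by-epoch sensitivity, specificity and accuracy.
        # 1 = sleep, 0 = wake, scored in 30-second epochs.
        psg    = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]   # reference (PSG)
        device = [1, 1, 1, 1, 1, 0, 0, 1, 1, 1]   # accelerometer

        tp = sum(p == 1 and d == 1 for p, d in zip(psg, device))
        tn = sum(p == 0 and d == 0 for p, d in zip(psg, device))
        fp = sum(p == 0 and d == 1 for p, d in zip(psg, device))
        fn = sum(p == 1 and d == 0 for p, d in zip(psg, device))

        sensitivity = tp / (tp + fn)        # sleep correctly detected as sleep
        specificity = tn / (tn + fp)        # wake correctly detected as wake
        accuracy = (tp + tn) / len(psg)

        print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
              f"accuracy={accuracy:.2f}")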

  8. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  9. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  10. Reliability Considerations for Ultra- Low Power Space Applications

    NASA Technical Reports Server (NTRS)

    White, Mark; Johnston, Allan

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region and ULP devices are sought after. Technology trends, ULP microelectronics, scaling and performance tradeoffs, reliability considerations, and spacecraft environments will be presented from a ULP perspective for space applications.

  11. COTS-Based Fault Tolerance in Deep Space: Qualitative and Quantitative Analyses of a Bus Network Architecture

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalai, Leon

    2000-01-01

    Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to the long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of the system comprising COTS products and standards that are not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not be originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.

  12. Hierarchical specification of the SIFT fault tolerant flight control system

    NASA Technical Reports Server (NTRS)

    Melliar-Smith, P. M.; Schwartz, R. L.

    1981-01-01

    The specification and mechanical verification of the Software Implemented Fault Tolerance (SIFT) flight control system is described. The methodology employed in the verification effort is discussed, and a description of the hierarchical models of the SIFT system is given. To meet NASA's objective for the reliability of safety-critical flight control systems, the SIFT computer must achieve a reliability well beyond the levels at which reliability can actually be measured. The methodology employed to demonstrate rigorously that the SIFT computer meets its reliability requirements is described. The hierarchy of design specifications, from very abstract descriptions of system function down to the actual implementation, is explained. The most abstract design specifications can be used to verify that the system functions correctly and with the desired reliability, since almost all details of the realization were abstracted out. A succession of lower level models refines these specifications to the level of the actual implementation, and can be used to demonstrate that the implementation has the properties claimed of the abstract design specifications.

  13. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities; these techniques are therefore approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
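
    To make the allocation problem concrete, the sketch below uses a simple greedy pairwise-comparison heuristic: repeatedly add a spare to whichever subsystem buys the most reliability per unit cost until the budget is exhausted. The subsystems, costs and budget are invented, and a greedy pass only illustrates the idea; it does not reproduce the referenced program's exact algorithm, which guarantees the true optimum:

        # Hedged sketch: greedy redundancy allocation under a cost budget.
        import math

        subsystems = {          # per-unit reliability and cost (assumed values)
            "power":   {"r": 0.95, "cost": 4.0, "units": 1},
            "control": {"r": 0.90, "cost": 6.0, "units": 1},
            "comms":   {"r": 0.98, "cost": 3.0, "units": 1},
        }
        budget = 20.0

        def subsystem_reliability(s):
            # Parallel redundancy: the subsystem fails only if every unit fails.
            return 1.0 - (1.0 - s["r"]) ** s["units"]

        def system_reliability():
            return math.prod(subsystem_reliability(s) for s in subsystems.values())

        spent = sum(s["cost"] * s["units"] for s in subsystems.values())
        while True:
            best_name, best_gain = None, 0.0
            base = system_reliability()
            for name, s in subsystems.items():
                if spent + s["cost"] > budget:
                    continue
                s["units"] += 1
                gain = (system_reliability() - base) / s["cost"]
                s["units"] -= 1
                if gain > best_gain:
                    best_name, best_gain = name, gain
            if best_name is None:
                break
            subsystems[best_name]["units"] += 1
            spent += subsystems[best_name]["cost"]

        print({k: v["units"] for k, v in subsystems.items()},
              f"reliability={system_reliability():.4f}, cost={spent}")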

  14. Design Strategy for a Formally Verified Reliable Computing Platform

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.

    1991-01-01

    This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.
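
    As a minimal illustration of the reliability-versus-voting tradeoff discussed above, the sketch below shows a plain majority voter over replicated channel outputs. It is not the paper's formally verified design, which additionally handles synchronization, transient recovery, and the correctness models mentioned in the abstract.

```python
# Minimal majority voter over redundant channels (illustration only).
from collections import Counter

def majority_vote(values):
    """Return the value reported by a strict majority of replicas, else None."""
    value, votes = Counter(values).most_common(1)[0]
    return value if votes > len(values) // 2 else None

print(majority_vote([42, 42, 41]))   # 42: one faulty replica is out-voted
print(majority_vote([1, 2, 3]))      # None: no majority, flagged for recovery
```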

  15. Structural Mechanics and Dynamics Branch

    NASA Technical Reports Server (NTRS)

    Stefko, George

    2003-01-01

    The 2002 annual report of the Structural Mechanics and Dynamics Branch reflects the majority of the work performed by the branch staff during the 2002 calendar year. Its purpose is to give a brief review of the branch's technical accomplishments. The Structural Mechanics and Dynamics Branch develops innovative computational tools, benchmark experimental data, and solutions to long-term barrier problems in the areas of propulsion aeroelasticity, active and passive damping, engine vibration control, rotor dynamics, magnetic suspension, structural mechanics, probabilistics, smart structures, engine system dynamics, and engine containment. Furthermore, the branch is developing a compact, nonpolluting, bearingless electric machine with electric power supplied by fuel cells for future "more electric" aircraft. An ultra-high-power-density machine that can generate projected power densities of 50 hp/lb or more, in comparison to conventional electric machines, which typically generate about 0.2 hp/lb, is under development for application to electric drives for propulsive fans or propellers. In the future, propulsion and power systems will need to be lighter, to operate at higher temperatures, and to be more reliable in order to achieve higher performance and economic viability. The Structural Mechanics and Dynamics Branch is working to achieve these complex, challenging goals.

  16. Reliability Evaluation of Computer Systems

    DTIC Science & Technology

    1979-04-01

    detection mechanisms. The model provided values for the system availability, mean time before failure (MTBF), and the proportion of time that the system...Stanford University Computer Science 311 (also Electrical Engineering 482), Advanced Computer Organization. Graduate course in computer architecture

  17. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation), which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially, CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and evaluated to generate values for the reliability-theoretic functions applied to the model.
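
    The following sketch shows the kind of closed-form redundancy equation a tool like CARE evaluates, here a generic k-of-n model with an assumed exponential failure law. It is an illustration only, not CARE's actual equation repository, and the failure rate and mission time are hypothetical.

```python
# Generic k-of-n redundancy equation with independent, identical units and
# an exponential failure law R(t) = exp(-lambda * t).
from math import comb, exp

def unit_reliability(lam, t):
    return exp(-lam * t)

def k_of_n_reliability(k, n, r):
    """System survives if at least k of n identical units survive."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = unit_reliability(lam=1e-4, t=1000.0)       # hypothetical failure rate, mission time
print("simplex:", r)
print("TMR (2-of-3):", k_of_n_reliability(2, 3, r))
```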

  18. Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing

    DTIC Science & Technology

    1994-07-01

    implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning The...nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing

  19. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission-critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of an HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
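
    A back-of-the-envelope calculation with hypothetical numbers illustrates the argument: availability depends on the MTTR/MTTF ratio, so a cheaper, less reliable node paired with a redundant partner can still be highly available.

```python
# Hypothetical numbers, not from the paper: a node with modest MTTF is paired
# with an identical redundant partner; the pair is unavailable only when both
# nodes are down at the same time.

def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

a_single = availability(mttf_hours=5_000, mttr_hours=2)   # cheap, less reliable node
a_dmr    = 1 - (1 - a_single) ** 2                         # dual modular redundant pair
print(f"single node: {a_single:.6f}, redundant pair: {a_dmr:.9f}")
```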

  20. Life Prediction Issues in Thermal/Environmental Barrier Coatings in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Brewer, David N.; Murthy, Pappu L. N.

    2001-01-01

    Issues and design requirements for the environmental barrier coating (EBC)/thermal barrier coating (TBC) life that are general and those specific to the NASA Ultra-Efficient Engine Technology (UEET) development program have been described. The current state and trend of the research, methods in vogue related to the failure analysis, and long-term behavior and life prediction of EBC/TBC systems are reported. Also, the perceived failure mechanisms, variables, and related uncertainties governing the EBC/TBC system life are summarized. A combined heat transfer and structural analysis approach based on the oxidation kinetics using the Arrhenius theory is proposed to develop a life prediction model for the EBC/TBC systems. A stochastic process-based reliability approach that includes the physical variables such as gas pressure, temperature, velocity, moisture content, crack density, oxygen content, etc., is suggested. Benefits of the reliability-based approach are also discussed in the report.

  1. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, upstates may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
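
    As a minimal sketch of such a model (states, rates, and rewards here are hypothetical, not taken from the report), consider a two-unit repairable system whose reward rate is the computational capacity available in each state; the expected steady-state reward rate follows from the stationary distribution of the generator matrix.

```python
# Minimal Markov reward model: a two-unit repairable system with states
# "2 up", "1 up", "0 up" and reward rate equal to available capacity.
import numpy as np

lam, mu = 1e-3, 1e-1          # hypothetical per-unit failure and repair rates
Q = np.array([
    [-2*lam,      2*lam,   0.0],
    [    mu, -(mu+lam),   lam],
    [   0.0,        mu,   -mu],
])
rewards = np.array([1.0, 0.5, 0.0])   # capacity in each state

# Steady-state probabilities: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi)
print("expected steady-state reward rate:", pi @ rewards)
```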

  2. Multiferroic nanomagnetic logic: Hybrid spintronics-straintronic paradigm for ultra-low energy computing

    NASA Astrophysics Data System (ADS)

    Salehi Fashami, Mohammad

    Excessive energy dissipation in CMOS devices during switching is the primary threat to continued downscaling of computing devices in accordance with Moore's law. In the quest for alternatives to traditional transistor based electronics, nanomagnet-based computing [1, 2] is emerging as an attractive alternative since: (i) nanomagnets are intrinsically more energy-efficient than transistors due to the correlated switching of spins [3], and (ii) unlike transistors, magnets have no leakage and hence have no standby power dissipation. However, large energy dissipation in the clocking circuit appears to be a barrier to the realization of ultra low power logic devices with such nanomagnets. To alleviate this issue, we propose the use of a hybrid spintronics-straintronics or straintronic nanomagnetic logic (SML) paradigm. This uses a piezoelectric layer elastically coupled to an elliptically shaped magnetostrictive nanomagnetic layer for both logic [4-6] and memory [7-8] and other information processing [9-10] applications that could potentially be 2-3 orders of magnitude more energy efficient than current CMOS based devices. This dissertation focuses on studying the feasibility, performance and reliability of such nanomagnetic logic circuits by simulating the nanoscale magnetization dynamics of dipole coupled nanomagnets clocked by stress. Specifically, the topics addressed are: 1. Theoretical study of multiferroic nanomagnetic arrays laid out in specific geometric patterns to implement a "logic wire" for unidirectional information propagation and a universal logic gate [4-6]. 2. Monte Carlo simulations of the magnetization trajectories in a simple system of dipole coupled nanomagnets and NAND gate described by the Landau-Lifshitz-Gilbert (LLG) equations simulated in the presence of random thermal noise to understand the dynamics switching error [11, 12] in such devices. 3. Arriving at a lower bound for energy dissipation as a function of switching error [13] for a practical nanomagnetic logic scheme. 4. Clocking of nanomagnetic logic with surface acoustic waves (SAW) to drastically decrease the lithographic burden needed to contact each multiferroic nanomagnet while maintaining pipelined information processing. 5. Nanomagnets with four (or higher states) implemented with shape engineering. Two types of magnet that encode four states: (i) diamond, and (ii) concave nanomagnets are studied for coherence of the switching process.
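
    For reference, the magnetization dynamics referred to in item 2 above are governed by the Landau-Lifshitz-Gilbert equation; in its standard form (the dissertation's stress anisotropy, dipole coupling, and thermal-noise terms all enter through the effective field) it reads

    \[ \frac{d\mathbf{m}}{dt} = -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} + \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt}, \]

    where \(\mathbf{m}\) is the unit magnetization vector, \(\gamma\) the gyromagnetic ratio, \(\alpha\) the Gilbert damping constant, and \(\mathbf{H}_{\mathrm{eff}}\) the effective field (shape and stress anisotropy, dipole coupling, and a random thermal field).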

  3. Flexible superconducting Nb transmission lines on thin film polyimide for quantum computing applications

    NASA Astrophysics Data System (ADS)

    Tuckerman, David B.; Hamilton, Michael C.; Reilly, David J.; Bai, Rujun; Hernandez, George A.; Hornibrook, John M.; Sellers, John A.; Ellis, Charles D.

    2016-08-01

    We describe progress and initial results achieved towards the goal of developing integrated multi-conductor arrays of shielded controlled-impedance flexible superconducting transmission lines with ultra-miniature cross sections and wide bandwidths (dc to >10 GHz) over meter-scale lengths. Intended primarily for use in future scaled-up quantum computing systems, such flexible thin-film niobium/polyimide ribbon cables could provide a physically compact and ultra-low thermal conductance alternative to the rapidly increasing number of discrete coaxial cables that are currently used by quantum computing experimentalists to transmit signals between the several low-temperature stages (from ˜4 K down to ˜20 mK) of a dilution refrigerator. We have concluded that these structures are technically feasible to fabricate, and so far they have exhibited acceptable thermo-mechanical reliability. S-parameter results are presented for individual 2-metal layer Nb microstrip structures having 50 Ω characteristic impedance; lengths ranging from 50 to 550 mm were successfully fabricated. Solderable pads at the end terminations allowed testing using conventional rf connectors. Weakly coupled open-circuit microstrip resonators provided a sensitive measure of the overall transmission line loss as a function of frequency, temperature, and power. Two common microelectronic-grade polyimide dielectrics, one conventional and the other photo-definable (PI-2611 and HD-4100, respectively) were compared. Our most striking result, not previously reported to our knowledge, was that the dielectric loss tangents of both polyimides, over frequencies from 1 to 20 GHz, are remarkably low at deep cryogenic temperatures, typically 100× smaller than corresponding room temperature values. This enables fairly long-distance (meter-scale) transmission of microwave signals without excessive attenuation, and also permits usefully high rf power levels to be transmitted without creating excessive dielectric heating. We observed loss tangents as low as 2.2 × 10-5 at 20 mK, although losses increased somewhat at very low rf power levels, similar to the well-known behavior of amorphous inorganic dielectrics such as SiO2. Our fabrication techniques could be extended to more complex structures such as multiconductor cables, embedded microstrip, 3-metal layer stripline or rectangular coax, and integrated attenuators and thermalization structures.

  4. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    PubMed

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused solely on mimicking either the neuron or the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low-resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit and system level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  5. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    PubMed

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  6. Enhanced fluorescence microscope and its application

    NASA Astrophysics Data System (ADS)

    Wang, Susheng; Li, Qin; Yu, Xin

    1997-12-01

    A high-gain fluorescence microscope has been developed to meet the needs of medical and biological research. With the help of an image intensifier with a luminance gain of 4 × 10^4, the sensitivity of the system can reach the 10^-6 lx level, 10^4 times higher than that of an ordinary fluorescence microscope, so ultra-weak fluorescence images can be detected. The concentration of fluorescent label and the emitted light intensity of the system are decreased as much as possible; therefore, the natural environment of the detected cell can be preserved. The computer-controlled CCD image acquisition set-up obtains quantitative data for each point according to its gray scale. The relation between luminous intensity and CCD output is obtained using a wide-range weak-light photometry method, so the system not only shows the image of the ultra-weak fluorescence distribution but also gives the fluorescence intensity at each point. Using this system, we obtained images of the distribution of hypocrellin A (HA) in HeLa cells and images of HeLa cells being protected by the antioxidant reagents Vit. E, SF, and BHT. The images show that the digitized ultra-sensitive fluorescence microscope is a useful tool for medical and biological research.

  7. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
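
    For readers unfamiliar with the underlying quantity, the sketch below computes LOLP the textbook way, by convolving independent two-state unit outage models into a capacity outage probability table; the unit data and load are hypothetical, and the paper's simplified formulations are not reproduced here.

```python
# Capacity outage probability table built by convolution over independent
# two-state units; LOLP is the probability that available capacity < load.
from collections import defaultdict

units = [(100, 0.02), (100, 0.02), (50, 0.05)]   # (capacity MW, forced outage rate)

dist = {0.0: 1.0}                                # P(total available capacity)
for cap, forced_outage in units:
    new = defaultdict(float)
    for avail, p in dist.items():
        new[avail + cap] += p * (1 - forced_outage)   # unit in service
        new[avail]       += p * forced_outage          # unit on forced outage
    dist = dict(new)

load = 180.0                                     # MW demand in the period considered
lolp = sum(p for avail, p in dist.items() if avail < load)
print(f"LOLP = {lolp:.6f}")
```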

  8. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1993-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.

  9. Probabilistic design of fibre concrete structures

    NASA Astrophysics Data System (ADS)

    Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.

    2017-09-01

    Advanced computer simulation has recently become a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing, or shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, with or without prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as the shape of the tensile softening branch, high toughness, and ductility are described in the paper. Since the variability of the fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and evaluation of structural performance, reliability, and safety. The presented combination of the nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by failure probability or by a reliability index, respectively. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the utilized approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty of the material properties, or their randomness obtained from material tests, is accounted for in the random distributions. Furthermore, degradation of the reinforced concrete materials, such as carbonation of concrete, corrosion of reinforcement, etc., can be accounted for in order to analyze life-cycle structural performance and to enable prediction of the structural reliability and safety over time. The results can serve as a rational basis for design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated on results from two probabilistic studies with different types of concrete structures related to practical applications and made from various materials (with the parameters obtained from real material tests).
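
    A minimal sketch of this probabilistic format is shown below, with a deliberately trivial limit-state function standing in for the nonlinear finite element model and hypothetical input distributions; the failure probability is estimated by Monte Carlo and converted to a reliability index.

```python
# Monte Carlo estimate of failure probability and Cornell reliability index
# for a toy resistance-vs-load limit state; in practice each sample would
# drive a nonlinear finite element analysis of the structural model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000
resistance = rng.lognormal(mean=np.log(30.0), sigma=0.10, size=n)  # kN, hypothetical
load       = rng.normal(loc=20.0, scale=3.0, size=n)               # kN, hypothetical

p_f  = np.mean(resistance - load < 0.0)        # failure probability
beta = -norm.ppf(p_f)                          # reliability index
print(f"P_f ~ {p_f:.2e}, beta ~ {beta:.2f}")
```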

  10. Oak Ridge Leadership Computing Facility Position Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oral, H Sarp; Hill, Jason J; Thach, Kevin G

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in the architecture and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  11. Achieving reliability - The evolution of redundancy in American manned spacecraft computers

    NASA Technical Reports Server (NTRS)

    Tomayko, J. E.

    1985-01-01

    The Shuttle is the first launch system deployed by NASA with full redundancy in the on-board computer systems. Fault-tolerance, i.e., restoring to a backup with fewer capabilities, was the method selected for Apollo. The Gemini capsule was the first to carry a computer, which also served as backup for Titan launch vehicle guidance. Failure of the Gemini computer resulted in manual control of the spacecraft. The Apollo system served vehicle flight control and navigation functions. The redundant computer on Skylab provided attitude control only in support of solar telescope pointing. The STS digital, fly-by-wire avionics system requires 100 percent reliability. The Orbiter carries five general-purpose computers, four being fully redundant and the fifth being solely an ascent-descent tool. The computers are synchronized at input and output points at a rate of about six times a second. The system is projected to cause a loss of an Orbiter only four times in a billion flights.

  12. Ultra-smooth finishing of aspheric surfaces using CAST technology

    NASA Astrophysics Data System (ADS)

    Kong, John; Young, Kevin

    2014-06-01

    Growing applications for astronomical ground-based adaptive systems and airborne telescope systems demand complex optical surface designs combined with ultra-smooth finishing. The use of more sophisticated and accurate optics, especially aspheric ones, allows for shorter optical trains with smaller sizes and a reduced number of components. This in turn reduces fabrication and alignment time and costs. These aspheric components include the following: steep surfaces with large aspheric departures; more complex surface feature designs like stand-alone off-axis parabolas (OAPs) and free-form optics that combine surface complexity with a requirement for ultra-high smoothness; as well as special optic materials such as lightweight silicon carbide (SiC) for airborne systems. Various fabrication technologies for finishing ultra-smooth aspheric surfaces are progressing to meet these growing and demanding challenges, especially Magnetorheological Finishing (MRF) and ion-milling. These methods have demonstrated good success as well as certain limitations. Amongst them, computer-controlled asphere surface-finishing technology (CAST), developed by Precision Asphere Inc. (PAI), plays an important role in a cost-effective manufacturing environment and has successfully delivered numerous products for the applications mentioned above. One of the most recent successes is the Gemini Planet Imager (GPI), the world's most powerful planet-hunting instrument, with critical aspheric components (seven OAPs and free-form optics) made using CAST technology. GPI showed off its first images in a press release on January 7, 2014. This paper reviews features of today's technologies in handling ultra-smooth aspheric optics, especially the capabilities of CAST on these challenging products. As examples, three groups of aspheres deployed in astronomical optics systems, both polished and finished using CAST, will be discussed in detail.

  13. RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This conference deals with computer systems that control systems whose failure to operate correctly could produce the loss of life and/or property, that is, mission- and safety-critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge-based expert systems, and computer and telecommunication protocols.

  14. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, an organization creates space for competitive prices that minimize the burden on its spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately, and the results are combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
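
    The abstract does not give the combination rules; a minimal illustration, under the common assumption that the layers of one deployment path act in series and independent providers act in parallel, might look like the following (the reliability values are hypothetical).

```python
# Illustrative layered reliability combination, not the paper's algorithm.

def path_reliability(r_app, r_virt, r_server):
    """One deployment path: all three layers must function (series)."""
    return r_app * r_virt * r_server

def multi_cloud_reliability(path_rels):
    """Application survives if at least one provider's path survives (parallel)."""
    prod_fail = 1.0
    for r in path_rels:
        prod_fail *= (1.0 - r)
    return 1.0 - prod_fail

paths = [path_reliability(0.999, 0.995, 0.990) for _ in range(2)]  # hypothetical values
print(multi_cloud_reliability(paths))
```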

  15. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts an interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  16. Real-time emergency forecasting technique for situation management systems

    NASA Astrophysics Data System (ADS)

    Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.

    2018-05-01

    The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasting results of any emergency computational model applied for decision making in situation management systems. Computational models are improved by the Improved Brown's method, which applies fractal dimension to forecast short time series received from sensors and control systems. Reliability of the emergency forecasting results is ensured by filtering out invalid sensor data using correlation analysis methods.
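
    For orientation, the sketch below implements standard Brown's double exponential smoothing for a short series of sensor readings; the article's improvement, which selects the smoothing behavior via the fractal dimension of the series, and its invalid-data filtering are not reproduced, and the data are hypothetical.

```python
# Standard Brown's double exponential smoothing: forecast = level + trend * horizon.

def brown_forecast(series, alpha=0.5, horizon=1):
    s1 = s2 = series[0]
    for y in series[1:]:
        s1 = alpha * y  + (1 - alpha) * s1     # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend * horizon

sensor_values = [10.2, 10.5, 10.9, 11.6, 12.8]   # hypothetical sensor readings
print(brown_forecast(sensor_values, alpha=0.6, horizon=1))
```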

  17. New electrostatic coal cleaning method cuts sulfur content by 40%

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-12-01

    An emission control system that electrically charges pollutants and coal particles promises to reduce sulfur by 40% at half the cost. The dry coal cleaning processes offer superior performance and better economics than conventional flotation cleaning. Advanced Energy Dynamics, Inc. (AED) is developing both fine and ultra-fine processes which increase combustion efficiency and boiler reliability and reduce operating costs. The article gives details from the performance tests and comparisons and summarizes the economic analyses. 4 tables.

  18. The active modulation of drug release by an ionic field effect transistor for an ultra-low power implantable nanofluidic system.

    PubMed

    Bruno, Giacomo; Canavese, Giancarlo; Liu, Xuewu; Filgueira, Carly S; Sacco, Adriano; Demarchi, Danilo; Ferrari, Mauro; Grattoni, Alessandro

    2016-11-10

    We report an electro-nanofluidic membrane for tunable, ultra-low power drug delivery employing an ionic field effect transistor. Therapeutic release from a drug reservoir was successfully modulated, with high energy efficiency, by actively adjusting the surface charge of slit-nanochannels 50, 110, and 160 nm in size, by the polarization of a buried gate electrode and the consequent variation of the electrical double layer in the nanochannel. We demonstrated control over the transport of ionic species, including two relevant hypertension drugs, atenolol and perindopril, that could benefit from such modulation. By leveraging concentration-driven diffusion, we achieve a 2 to 3 order of magnitude reduction in power consumption as compared to other electrokinetic phenomena. The application of a small gate potential (±5 V) in close proximity (150 nm) of 50 nm nanochannels generated a sufficiently strong electric field, which doubled or blocked the ionic flux depending on the polarity of the voltage applied. These compelling findings can lead to next generation, more reliable, smaller, and longer lasting drug delivery implants with ultra-low power consumption.

  19. Improving nondestructive characterization of dual phase steels using data fusion

    NASA Astrophysics Data System (ADS)

    Kahrobaee, Saeed; Haghighi, Mehdi Salkhordeh; Akhlaghi, Iman Ahadi

    2018-07-01

    The aim of this paper is to introduce a novel methodology for nondestructive determination of microstructural and mechanical properties (due to the various heat treatments), as well as thickness variations (as a result of corrosion) of dual phase steels. The characterizations are based on the variations in the electromagnetic properties extracted from magnetic hysteresis loop and eddy current methods, which are coupled with a data fusion system. This study was conducted on six groups of samples (with different thicknesses, from 1 mm to 4 mm) subjected to various intercritical annealing processes to produce different fractions of martensite/ferrite phases and, consequently, changes in hardness, yield strength, and ultimate tensile strength (UTS). The study proposes a novel soft computing technique to increase the accuracy of nondestructive measurements and resolve overlapping NDE outputs related to the various samples. The empirical results indicate that applying the proposed data fusion technique to the two electromagnetic NDE data sets nondestructively increases the accuracy and reliability of determining material features including ferrite fraction, hardness, yield strength, UTS, as well as thickness variations.

  20. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
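
    A compact statement of the method's central relation, as it is commonly written, is

    \[ -T\Delta S_{\mathrm{int}} = k_{B}T \ln\left\langle e^{\beta\,\Delta E_{\mathrm{int}}} \right\rangle, \qquad \Delta E_{\mathrm{int}} = E_{\mathrm{int}} - \langle E_{\mathrm{int}}\rangle, \quad \beta = \frac{1}{k_{B}T}, \]

    where the angle brackets denote averages over the molecular dynamics trajectory of the bound complex. Because only the instantaneous protein-ligand interaction energies already collected during the simulation are needed, the entropic term is obtained at essentially no extra computational cost, which is the point emphasized in the abstract.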

  1. Picosecond and femtosecond lasers for industrial material processing

    NASA Astrophysics Data System (ADS)

    Mayerhofer, R.; Serbin, J.; Deeg, F. W.

    2016-03-01

    Cold laser materials processing using ultra-short pulsed lasers has become one of the most promising new technologies for high-precision cutting, ablation, drilling, and marking of almost all types of material, without causing unwanted thermal damage to the part. These characteristics have opened up new application areas and materials for laser processing, allowing previously impossible features to be created and also reducing the amount of post-processing required to an absolute minimum, saving time and cost. However, short pulse widths are only one part of the story for industrial manufacturing processes, which focus on total costs and maximum productivity and production yield. Like every other production tool, ultra-short pulse lasers have to provide high-quality results with maximum reliability. Robustness and global on-site support are vital factors, as is easy system integration.

  2. Anomaly Trends for Missions to Mars: Mars Global Surveyor and Mars Odyssey

    NASA Technical Reports Server (NTRS)

    Green, Nelson W.; Hoffman, Alan R.

    2008-01-01

    Conducted as part of the NASA Ultra-Reliability effort, whose goal is to design for increased reliability in all NASA missions; the desire is to increase reliability by a factor of 10. The study provides a baseline for current technology by analyzing anomalies for long-lived spacecraft orbiting Mars, compares them with current rover missions and past orbiters, and looks for trends to assist the design of future missions.

  3. Characterization of Polyimide Foams for Ultra-Lightweight Space Structures

    NASA Technical Reports Server (NTRS)

    Meador, Michael (Technical Monitor); Hillman, Keithan; Veazie, David R.

    2003-01-01

    Ultra-lightweight materials have played a significant role in nearly every area of human activity ranging from magnetic tapes and artificial organs to atmospheric balloons and space inflatables. The application range of ultra-lightweight materials in past decades has expanded dramatically due to their unsurpassed efficiency in terms of low weight and high compliance properties. A new generation of ultra-lightweight materials involving advanced polymeric materials, such as TEEK (TM) polyimide foams, is beginning to emerge to produce novel performance from ultra-lightweight systems for space applications. As a result, they require that special conditions be fulfilled to ensure adequate structural performance, shape retention, and thermal stability. It is therefore essential to develop methodologies for predicting the complex properties of ultra-lightweight foams. To support NASA programs such as the Reusable Launch Vehicle (RLV), Clark Atlanta University, along with SORDAL, Inc., has initiated projects for commercial process development of polyimide foams for the proposed cryogenic tank integrated structure (see figure 1). Fabrication and characterization of high-temperature, advanced aerospace-grade polyimide foams and filled foam sandwich composites for specified lifetimes in NASA space applications, as well as quantifying the lifetime of components, are immensely attractive goals. In order to improve the development, durability, safety, and life-cycle performance of ultra-lightweight polymeric foams, test methods for the properties are constant concerns in terms of timeliness, reliability, and cost. A major challenge is to identify the mechanisms of failures (i.e., core failure, interfacial debonding, and crack development) that are reflected in the measured properties. The long-term goal of this research is to develop the tools and capabilities necessary to successfully engineer ultra-lightweight polymeric foams. The desire is to reduce density at the material and structural levels, while at the same time maintaining or increasing mechanical and other properties.

  4. Bringing Superconductor Digital Technology to the Market Place

    NASA Astrophysics Data System (ADS)

    Nisenoff, Martin

    The unique properties of superconductivity can be exploited to provide the ultimate in electronic technology for systems such as ultra-precise analogue-to-digital and digital-to-analogue converters, precise DC and AC voltage standards, ultra high speed logic circuits and systems (both digital and hybrid analogue-digital systems), and very high throughput network routers and supercomputers which would have superior electrical performance at lower overall electrical power consumption compared to systems with comparable performance which are fabricated using conventional room temperature technologies. This potential for high performance electronics with reduced power consumption would have a positive impact on slowing the increase in the demand for electrical utility power by the information technology community on the overall electrical power grid. However, before this technology can be successfully brought to the commercial market place, there must be an aggressive investment of resources and funding to develop the required infrastructure needed to yield these high performance superconductor systems, which will be reliable and available at low cost. The author proposes that it will require a concerted effort by the superconductor and cryogenic communities to bring this technology to the commercial market place or make it available for widespread use in scientific instrumentation.

  5. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  6. Early study on the application of Nexcera ultra low thermal expansion ceramic to space telescopes

    NASA Astrophysics Data System (ADS)

    Kamiya, Tomohiro; Sugawara, Jun; Mizutani, Tadahito; Yasuda, Susumu; Kitamoto, Kazuya

    2017-09-01

    Optical mirrors for space telescopes, which require high precision and high thermal stability, have commonly been made of glass materials such as ultra low expansion glass (e.g. ULE®) or extremely low expansion glass-ceramic (e.g. ZERODUR® or CLEARCERAM®). These materials are well known for their reliability due to their long history of achievements in many space applications.

  7. Work-a-day world of NPRDS: what makes it tick

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Nuclear Plant Reliability Data System (NPRDS) is a computer-based data bank of reliability information on safety-related nuclear-power-plant systems and components. Until January 1982, the system was administered by the American Nuclear Society 58.20 Subcommittee. The data base was maintained by Southwest Research Institute in San Antonio, Texas. In October 1982, it was decided that the Institute of Nuclear Power Operations (INPO) would maintain the data base on its own computer. The transition is currently in progress.

  8. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  9. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  10. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
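
    A rough sketch in the spirit of this study is shown below: generate a Barabási-Albert topology and estimate the surviving fraction of the network after random node failures. This is an illustrative stand-in, not the specific reliability index or failure propagation model used in the paper, and the parameters are hypothetical.

```python
# Barabasi-Albert topology with random node failures; the "reliability"
# proxy here is simply the fraction of nodes remaining in the largest
# connected component, averaged over trials.
import random
import networkx as nx

def surviving_fraction(n_nodes=1000, m_links=2, failure_prob=0.01, trials=50):
    total = 0.0
    for _ in range(trials):
        g = nx.barabasi_albert_graph(n_nodes, m_links)
        failed = [v for v in g.nodes if random.random() < failure_prob]
        g.remove_nodes_from(failed)
        giant = max(nx.connected_components(g), key=len) if g.number_of_nodes() else set()
        total += len(giant) / n_nodes
    return total / trials

print(surviving_fraction())
```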

  11. Researcher Biographies

    Science.gov Websites

    interest: mechanical system design sensitivity analysis and optimization of linear and nonlinear structural systems, reliability analysis and reliability-based design optimization, computational methods in committee member, ISSMO; Associate Editor, Mechanics Based Design of Structures and Machines; Associate

  12. The Aviation Paradox: Why We Can 'Know' Jetliners But Not Reactors.

    PubMed

    Downer, John

    2017-01-01

    Publics and policymakers increasingly have to contend with the risks of complex, safety-critical technologies, such as airframes and reactors. As such, 'technological risk' has become an important object of modern governance, with state regulators as core agents, and 'reliability assessment' as the most essential metric. The Science and Technology Studies (STS) literature casts doubt on whether or not we should place our faith in these assessments because predictively calculating the ultra-high reliability required of such systems poses seemingly insurmountable epistemological problems. This paper argues that these misgivings are warranted in the nuclear sphere, despite evidence from the aviation sphere suggesting that such calculations can be accurate. It explains why regulatory calculations that predict the reliability of new airframes cannot work in principle, and then it explains why those calculations work in practice. It then builds on this explanation to argue that the means by which engineers manage reliability in aviation is highly domain-specific, and to suggest how a more nuanced understanding of jetliners could inform debates about nuclear energy.

  13. Cochlear Implant Electrode Localization Using an Ultra-High Resolution Scan Mode on Conventional 64-Slice and New Generation 192-Slice Multi-Detector Computed Tomography.

    PubMed

    Carlson, Matthew L; Leng, Shuai; Diehn, Felix E; Witte, Robert J; Krecke, Karl N; Grimes, Josh; Koeller, Kelly K; Bruesewitz, Michael R; McCollough, Cynthia H; Lane, John I

    2017-08-01

    A new generation 192-slice multi-detector computed tomography (MDCT) clinical scanner provides enhanced image quality and superior electrode localization over conventional MDCT. Currently, accurate and reliable cochlear implant electrode localization using conventional MDCT scanners remains elusive. Eight fresh-frozen cadaveric temporal bones were implanted with full-length cochlear implant electrodes. Specimens were subsequently scanned with conventional 64-slice and new generation 192-slice MDCT scanners utilizing ultra-high resolution modes. Additionally, all specimens were scanned with micro-CT to provide a reference criterion for electrode position. Images were reconstructed according to routine temporal bone clinical protocols. Three neuroradiologists, blinded to scanner type, reviewed images independently to assess resolution of individual electrodes, scalar localization, and severity of image artifact. Serving as the reference standard, micro-CT identified scalar crossover in one specimen; imaging of all remaining cochleae demonstrated complete scala tympani insertions. The 192-slice MDCT scanner exhibited improved resolution of individual electrodes (p < 0.01), superior scalar localization (p < 0.01), and reduced blooming artifact (p < 0.05), compared with conventional 64-slice MDCT. There was no significant difference between platforms when comparing streak or ring artifact. The new generation 192-slice MDCT scanner offers several notable advantages for cochlear implant imaging compared with conventional MDCT. This technology provides important feedback regarding electrode position and course, which may help in future optimization of surgical technique and electrode design.

  14. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  15. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate, or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  16. Fast algorithm for radio propagation modeling in realistic 3-D urban environment

    NASA Astrophysics Data System (ADS)

    Rauch, A.; Lianghai, J.; Klein, A.; Schotten, H. D.

    2015-11-01

    Next-generation wireless communication systems will consist of a large number of mobile or static terminals and should be able to fulfill multiple requirements depending on the current situation. Low latency and high packet delivery success rates are key requirements in this context and can be summarized as ultra-reliable communications (URC). Especially for domains like mobile gaming and mobile video services, but also for safety-relevant scenarios like traffic safety, traffic control systems, and emergency management, URC will increasingly be required to guarantee working communication between terminals at all times.

  17. Processing and Properties Of Refractory Zirconium Diboride Composites For Use In High Temperature Applications

    NASA Technical Reports Server (NTRS)

    Stackpoole, Margaret; Gusman, M.; Ellerby, D.; Johnson, S. M.; Arnold, Jim (Technical Monitor)

    2001-01-01

    The Thermal Protection Materials and Systems Branch at NASA Ames Research Center is involved in the development of a class of refractory oxidation-resistant diboride composites termed Ultra High Temperature Ceramics or UHTCs. These composites have good high temperature properties making them candidate materials for thermal protection system (TPS) applications. The current research focuses on improving processing methods to develop more reliable composites with enhanced thermal and mechanical properties. This presentation will concentrate on the processing of ZrB2/SiC composites. Some preliminary mechanical properties and oxidation data will also be presented.

  18. Inertial confinement fusion quarterly report, October--December 1992. Volume 3, No. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixit, S.N.

    1992-12-31

    This report contains papers on the following topics: The Beamlet Front End: prototype of a new pulse generation system; imaging biological objects with x-ray lasers; coherent XUV generation via high-order harmonic generation in rare gases; theory of high-order harmonic generation; two-dimensional computer simulations of ultra-intense, short-pulse laser-plasma interactions; neutron detectors for measuring the fusion burn history of ICF targets; the recirculator; and LASNEX evolves to exploit computer industry advances.

  19. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
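
    A toy example of the fault-tree evaluation that underlies such tools, under an independence assumption, is given below; sequence-dependency gates, which HARP handles by conversion to a Markov chain, are deliberately omitted, and the basic-event probabilities are hypothetical.

```python
# Toy static fault-tree evaluation assuming independent basic events.

def or_gate(probs):          # output event occurs if any input event occurs
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):         # output event occurs only if all input events occur
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical basic-event probabilities: a dual-redundant channel backed by
# a shared power supply.
p_channel_pair = and_gate([1e-3, 1e-3])          # both channels fail
p_system       = or_gate([p_channel_pair, 1e-5]) # or the power supply fails
print(f"P(system failure) = {p_system:.3e}")
```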

  20. A technique for computation of noise temperature due to a beam waveguide shroud

    NASA Technical Reports Server (NTRS)

    Veruttipong, W.; Franco, M. M.

    1993-01-01

    Direct analytical computation of the noise temperature of real beam waveguide (BWG) systems, including all mirrors and the surrounding shroud, is an extremely complex problem and virtually impossible to achieve. Yet the DSN antennas are required to be ultra low-noise in order to be effective, and a reasonably accurate prediction is essential. This article presents a relatively simple technique to compute a real BWG system noise temperature by combining analytical techniques with data from experimental tests. Specific expressions and parameters for X-band (8.45-GHz) BWG noise computation are obtained for DSS 13 and DSS 24, now under construction. These expressions are also valid for various conditions of the BWG feed systems, including horn sizes and positions, and mirror sizes, curvatures, and positions. Parameters for S- and Ka-bands (2.3 and 32.0 GHz) have not been determined; however, those can be obtained following the same procedure as for X-band.

  1. Care 3, phase 1, volume 2

    NASA Technical Reports Server (NTRS)

    Stiffler, J. J.; Bryant, L. A.; Guccione, L.

    1979-01-01

    A computer program was developed as a general purpose reliability tool for fault tolerant avionics systems. The computer program requirements, together with several appendices containing computer printouts are presented.

  2. Guide to Camouflage for DARCOM Equipment Developers

    DTIC Science & Technology

    1978-04-29

    the trails by dragging devices, etc., can delay recognition of a tracked-vehicle trail. A missile system, having fixed physical characteristics which...systems applicable to surface-to-air, air-to-air, and air-to-surface missiles. Sensors in the 0.2 to 0.4 and 1.0 to 5.0 micron bands are hybrid ...a wide variety of ultraviolet, visible and near-infrared sensor systems. Actual sensors are hybrid computer controlled in six degrees of freedom

  3. Review of ultraresolution (10-100 megapixel) visualization systems built by tiling commercial display components

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Haralson, David G.; Simpson, Matthew A.; Longo, Sam J.

    2002-08-01

    Ultra-resolution visualization systems are achieved by the technique of tiling many direct-view or projection displays. During the past few years, several such systems have been built from commercial electronics components (displays, computers, image generators, networks, communication links, and software). Civil applications driving this development have independently determined that they require images at 10-100 megapixel (Mpx) resolution to enable state-of-the-art research, engineering, design, stock exchanges, flight simulators, business information and enterprise control centers, education, art and entertainment. Military applications also press the art of the possible to improve the productivity of warfighters and lower the cost of providing for the national defense. The environment in some 80% of defense applications can be addressed by ruggedization of commercial components. This paper reviews the status of ultra-resolution systems based on commercial components and describes a vision for their integration into advanced yet affordable military command centers, simulator/trainers, and, eventually, crew stations in air, land, sea and space systems.

  4. Techniques for modeling the reliability of fault-tolerant systems with the Markov state-space approach

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1995-01-01

    This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
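
    To make the modeling style concrete, the sketch below numerically solves a minimal Markov model of a triad with imperfect fault coverage by integrating the Kolmogorov forward equations. It is only an illustration of the kind of model the tutorial treats, not the SURE, ASSIST, STEM, or PAWS programs themselves; the failure rate, coverage value, and mission time are assumed for the example.

    ```python
    # A minimal sketch: a 3-state Markov model of a triad with imperfect
    # coverage, solved by integrating the Kolmogorov forward equations
    # dP/dt = P Q over the mission time.
    import numpy as np
    from scipy.integrate import solve_ivp

    lam = 1e-4   # per-processor failure rate (1/hour), illustrative value
    c = 0.999    # probability that a first fault is covered (successful reconfiguration)
    T = 10.0     # mission time in hours

    # States: 0 = three good processors, 1 = two good (after reconfiguration), 2 = system failed.
    Q = np.array([
        [-3*lam,  3*lam*c, 3*lam*(1-c)],
        [0.0,    -2*lam,   2*lam      ],
        [0.0,     0.0,     0.0        ],
    ])

    def kolmogorov(t, p):
        # Forward equations for the state-probability row vector.
        return p @ Q

    p0 = np.array([1.0, 0.0, 0.0])
    sol = solve_ivp(kolmogorov, (0.0, T), p0, rtol=1e-10, atol=1e-14)
    print(f"P(system failure by {T} h) ~ {sol.y[2, -1]:.3e}")
    ```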

  5. Non-functional Avionics Requirements

    NASA Astrophysics Data System (ADS)

    Paulitsch, Michael; Ruess, Harald; Sorea, Maria

    Embedded systems in aerospace become more and more integrated in order to reduce weight, volume/size, and power of hardware for more fuel-efficiency. Such integration tendencies change architectural approaches of system architectures, which subsequently change non-functional requirements for platforms. This paper provides some insight into state-of-the-practice of non-functional requirements for developing ultra-critical embedded systems in the aerospace industry, including recent changes and trends. In particular, formal requirement capture and formal analysis of non-functional requirements of avionic systems - including hard real-time, fault-tolerance, reliability, and performance - are exemplified by means of recent developments in SAL and HiLiTE.

  6. Integrated System Test of the Advanced Instructional System (AIS). Final Report.

    ERIC Educational Resources Information Center

    Lintz, Larry M.; And Others

    The integrated system test for the Advanced Instructional System (AIS) was designed to provide quantitative information regarding training time reductions resulting from certain computer managed instruction features. The reliabilities of these features and of support systems were also investigated. Basic computer managed instruction reduced…

  7. Hybrid automated reliability predictor integrated work station (HiREL)

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1991-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.

  8. Optical technologies for space sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Liu, Jie; Xue, Yaoke; Liu, Yang; Liu, Meiying; Wang, Lingguang; Yang, Shaodong; Lin, Shangmin; Chen, Su; Luo, Jianjun

    2015-10-01

    Space sensors are used for navigation. The sun, the earth, the moon, and other planets serve as frames of reference to obtain stellar position coordinates, which are then used to control the attitude of the aircraft. As the "eyes" of a space sensor, the optical sensor system images distant stars and other celestial bodies; it directly affects the measurement accuracy of the space sensor and indirectly affects the data update rate. Star sensor technology leads the field, and increasing attention is being paid to all-day star sensor technology: by measuring the stars both day and night, the aircraft's attitude in the inertial coordinate system can be provided. Given the requirements of ultra-high precision, large field of view, wide spectral range, long life, high reliability, and multi-functional optics, highly integrated optical sensors will be a future trend in space technology. At the same time, research on optical technologies for space sensors drives the development of ultra-precision optical fabrication and precision optical test and alignment technology, and promotes the development of long-life optical materials and their applications. We have achieved an absolute distortion better than ±1 μm and a space lifetime of at least 15 years for the space-sensor optical system.

  9. Probabilistic resource allocation system with self-adaptive capability

    NASA Technical Reports Server (NTRS)

    Yufik, Yan M. (Inventor)

    1996-01-01

    A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and directed links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Reliability values are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.

  10. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc). Quantification of involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, then the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for the system modeling, the lambda-tau method is utilized to formulate mathematical expressions for failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow an exponential distribution, i.e., have constant failure rates. Sensitivity analysis is also performed and the impact on system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
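
    The sketch below illustrates only the data-fuzzification step and the lambda-tau OR-gate expression (failure rates of series elements add), using triangular fuzzy numbers evaluated by alpha-cut interval arithmetic. The failure rates and spreads are hypothetical, and the fault-tree modeling and genetic-algorithm optimization of the hybridized technique are not reproduced.

    ```python
    # Data fuzzification with triangular fuzzy numbers (TFNs) and a lambda-tau
    # OR-gate evaluation via alpha-cut interval arithmetic.  Rates are illustrative.
    import numpy as np

    def alpha_cut(tfn, alpha):
        """Interval [lo, hi] of a triangular fuzzy number (l, m, r) at level alpha."""
        l, m, r = tfn
        return np.array([l + alpha * (m - l), r - alpha * (r - m)])

    # Hypothetical failure rates (per hour) of three robot subsystems, +/- 15% spread.
    lambdas = [(8.5e-5, 1.0e-4, 1.15e-4),
               (1.7e-5, 2.0e-5, 2.3e-5),
               (4.25e-6, 5.0e-6, 5.75e-6)]

    for alpha in (0.0, 0.5, 1.0):
        # OR gate: the system fails if any subsystem fails, so the rates add.
        lo = sum(alpha_cut(tfn, alpha)[0] for tfn in lambdas)
        hi = sum(alpha_cut(tfn, alpha)[1] for tfn in lambdas)
        # MTBF = 1 / lambda; for positive intervals the bounds simply swap.
        print(f"alpha={alpha:.1f}: lambda_sys in [{lo:.2e}, {hi:.2e}] /h, "
              f"MTBF in [{1/hi:.0f}, {1/lo:.0f}] h")
    ```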

  11. TIGER reliability analysis in the DSN

    NASA Technical Reports Server (NTRS)

    Gunn, J. M.

    1982-01-01

    The TIGER algorithm, the inputs to the program and the output are described. TIGER is a computer program designed to simulate a system over a period of time to evaluate system reliability and availability. Results can be used in the Deep Space Network for initial spares provisioning and system evaluation.

  12. NREL, Hewlett-Packard Developed Ultra-Efficient, High-Performance Computing

    Science.gov Websites

    and allows the heat captured from the supercomputer to provide all the heating needs for the Energy Systems Integration Facility. And there's even enough heat left over to melt snow outside on sidewalks during the winter. During the summer, the unused heat can be rejected via cooling towers.

  13. Nanomagnetic Logic

    NASA Astrophysics Data System (ADS)

    Carlton, David Bryan

    The exponential improvements in speed, energy efficiency, and cost that the computer industry has relied on for growth during the last 50 years are in danger of ending within the decade. These improvements all have relied on scaling the size of the silicon-based transistor that is at the heart of every modern CPU down to smaller and smaller length scales. However, as the size of the transistor reaches scales that are measured in the number of atoms that make it up, it is clear that this scaling cannot continue forever. As a result of this, there has been a great deal of research effort directed at the search for the next device that will continue to power the growth of the computer industry. However, due to the billions of dollars of investment that conventional silicon transistors have received over the years, it is unlikely that a technology will emerge that will be able to beat it outright in every performance category. More likely, different devices will possess advantages over conventional transistors for certain applications and uses. One of these emerging computing platforms is nanomagnetic logic (NML). NML-based circuits process information by manipulating the magnetization states of single-domain nanomagnets coupled to their nearest neighbors through magnetic dipole interactions. The state variable is magnetization direction and computations can take place without passing an electric current. This makes them extremely attractive as a replacement for conventional transistor-based computing architectures for certain ultra-low power applications. In most work to date, nanomagnetic logic circuits have used an external magnetic clocking field to reset the system between computations. The clocking field is then subsequently removed very slowly relative to the magnetization dynamics, guiding the nanomagnetic logic circuit adiabatically into its magnetic ground state. In this dissertation, I will discuss the dynamics behind this process and show that it is greatly influenced by thermal fluctuations. The magnetic ground state containing the answer to the computation is reached by a stochastic process very similar to the thermal annealing of crystalline materials. We will discuss how these dynamics affect the expected reliability, speed, and energy dissipation of NML systems operating under these conditions. Next I will show how a slight change in the properties of the nanomagnets that make up a NML circuit can completely alter the dynamics by which computations take place. The addition of biaxial anisotropy to the magnetic energy landscape creates a metastable state along the hard axis of the nanomagnet. This metastability can be used to remove the stochastic nature of the computation and has large implications for reliability, speed, and energy dissipation which will all be discussed. The changes to NML operation by the addition of biaxial anisotropy introduce new challenges to realizing a commercially viable logic architecture. In the final chapter, I will discuss these challenges and talk about the architectural changes that are necessary to make a working NML circuit based on nanomagnets with biaxial anisotropy.

  14. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
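
    As a hedged illustration of the reliability statistic reported above, the sketch below computes a two-way random-effects, absolute-agreement, single-measure ICC(2,1) from a small hypothetical table of alignment readings; the study's own ICC model and software are not restated here.

    ```python
    # ICC(2,1) from a tiny hypothetical table of alignment angles
    # (rows = limbs, columns = readers), computed from ANOVA mean squares.
    import numpy as np

    ratings = np.array([     # hip-knee-ankle angle in degrees, illustrative values
        [178.2, 178.4, 178.1],
        [185.0, 184.7, 184.9],
        [172.6, 172.9, 172.5],
        [180.1, 180.0, 180.3],
        [176.4, 176.8, 176.5],
    ])
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    print(f"ICC(2,1) = {icc_2_1:.3f}")
    ```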

  15. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  16. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
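
    SURE's algebraic bounding method is not reproduced here; as a rough illustration of the death-state probabilities it bounds, the sketch below estimates the failure probability of a tiny semi-Markov reconfiguration model (exponential faults, non-exponential recovery) by Monte Carlo simulation. The failure and recovery parameters are deliberately inflated so the simulation yields nonzero counts; for genuinely ultra-reliable rates, rare-event simulation becomes impractical, which is precisely why bounding tools of this kind are useful.

    ```python
    # Monte Carlo estimate of the death-state probability of a small
    # semi-Markov model: a triad degrades through reconfigurations, and a
    # second fault arriving during a (uniformly distributed) recovery window
    # is fatal.  Parameters are illustrative, not ultra-reliable values.
    import numpy as np

    rng = np.random.default_rng(1)
    lam = 1e-3                 # per-unit failure rate, 1/h (inflated for the demo)
    recovery = (0.5, 1.0)      # reconfiguration time, uniform on this interval, h
    T = 100.0                  # mission time, h
    trials = 200_000
    failures = 0

    for _ in range(trials):
        t = rng.exponential(1.0 / (3 * lam))        # first fault among three units
        good = 2                                    # units still working
        failed = False
        while t <= T:
            if good == 0:
                failed = True                       # redundancy exhausted inside mission
                break
            d = rng.uniform(*recovery)              # reconfiguration window after a fault
            g = rng.exponential(1.0 / (good * lam)) # gap to the next fault
            if g < d and t + g <= T:
                failed = True                       # near-coincident fault is not covered
                break
            t += g
            good -= 1
        if failed:
            failures += 1

    print(f"Estimated P(system failure by {T} h) ~ {failures / trials:.2e}")
    ```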

  17. Care 3 phase 2 report, maintenance manual

    NASA Technical Reports Server (NTRS)

    Bryant, L. A.; Stiffler, J. J.

    1982-01-01

    CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.

  18. Plasma-Enhanced Pulsed Laser Deposition of Wide Bandgap Nitrides for Space Power Applications

    NASA Technical Reports Server (NTRS)

    Triplett, G. E., Jr.; Durbin, S. M.

    2004-01-01

    The need for a reliable, inexpensive technology for small-scale space power applications where photovoltaic or chemical battery approaches are not feasible has prompted renewed interest in radioisotope-based energy conversion devices. Although a number of devices have been developed using a variety of semiconductors, the single most limiting factor remains the overall lifetime of the radioisotope battery. Recent advances in growth techniques for ultra-wide bandgap III-nitride semiconductors provide the means to explore a new group of materials with the promise of significant radiation resistance. Additional benefits resulting from the use of ultra-wide bandgap materials include a reduction in leakage current and higher operating voltage without a loss of energy transfer efficiency. This paper describes the development of a novel plasma-enhanced pulsed laser deposition system for the growth of cubic boron nitride semiconducting thin films, which will be used to construct pn junction devices for alphavoltaic applications.

  19. Multi-constituent determination and fingerprint analysis of Scutellaria indica L. using ultra high performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry.

    PubMed

    Liang, Xianrui; Zhao, Cui; Su, Weike

    2015-11-01

    An ultra-performance liquid chromatography method coupled with quadrupole time-of-flight mass spectrometry, integrating multi-constituent determination and fingerprint analysis, has been established for quality assessment and control of Scutellaria indica L. The optimized method is fast and efficient, allowing multi-constituent determination and fingerprint analysis in a single chromatographic run within 11 min. Thirty-six compounds were detected, and 23 of them were unequivocally identified or tentatively assigned. The established fingerprint method was applied to the analysis of ten S. indica samples from different geographic locations. The quality assessment was achieved by using principal component analysis. The proposed method is useful and reliable for the characterization of multi-constituents in a complex chemical system and the overall quality assessment of S. indica. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  1. A Manganin Thin Film Ultra-High Pressure Sensor for Microscale Detonation Pressure Measurement

    PubMed Central

    Zhang, Guodong; Zhao, Yulong; Zhao, Yun; Wang, Xinchen; Ren, Wei; Li, Hui; Zhao, You

    2018-01-01

    With the development of energetic materials (EMs) and microelectromechanical systems (MEMS) initiating explosive devices, the measurement of detonation pressure generated by EMs at the microscale has become a pressing need. This paper develops a manganin thin film ultra-high pressure sensor based on MEMS technology for measuring the output pressure from a micro-detonator. A reliability coefficient is proposed to guide the design of the sensor's sensitive element. The sensor employs a sandwich structure: the substrate uses a 0.5 mm thick alumina ceramic, the manganin sensitive element with a size of 0.2 mm × 0.1 mm × 2 μm and copper electrodes of 2 μm thick are sputtered sequentially on the substrate, and a 25 μm thick insulating layer of polyimide is wrapped on the sensitive element. The static test shows that the piezoresistive coefficient of manganin thin film is 0.0125 GPa−1. The dynamic experiment indicates that the detonation pressure of the micro-detonator is 12.66 GPa, and the response time of the sensor is 37 ns. In summary, the sensor developed in this study is suitable for measuring ultra-high pressure at the microscale and has a shorter response time than that of foil-like manganin gauges. This study could also benefit research on ultra-high-pressure sensors of smaller size. PMID:29494519
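
    Assuming the simple linear piezoresistive relation ΔR/R = k·P with the coefficient reported above (k = 0.0125 GPa⁻¹), the sketch below converts a gauge's fractional resistance change into pressure. The resistance values are hypothetical, and the paper's actual calibration and signal conditioning may be more involved.

    ```python
    # Converting a gauge's fractional resistance change into pressure with the
    # linear piezoresistive relation dR/R = k * P, using the coefficient from
    # the abstract (k = 0.0125 / GPa).
    K_MANGANIN = 0.0125          # piezoresistive coefficient, 1/GPa (from the abstract)

    def pressure_from_resistance(r_baseline_ohm: float, r_shocked_ohm: float) -> float:
        """Return peak pressure in GPa from pre-shock and during-shock gauge resistance."""
        delta_ratio = (r_shocked_ohm - r_baseline_ohm) / r_baseline_ohm
        return delta_ratio / K_MANGANIN

    # Hypothetical readings: a 50 ohm element rising to 57.9 ohm under shock.
    print(f"Peak pressure ~ {pressure_from_resistance(50.0, 57.9):.2f} GPa")
    ```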

  2. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  3. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A Central Control Element (CCE) module which controls the Automatically Reconfigurable Modular System (ARMS) and allows both redundant processing and multi-computing in the same computer with real time mode switching, is discussed. The same hardware is used for either reliability enhancement, speed enhancement, or for a combination of both.

  4. A high specific power solar array for low to mid-power spacecraft

    NASA Technical Reports Server (NTRS)

    Jones, P. Alan; White, Stephen F.; Harvey, T. Jeffery; Smith, Brian S.

    1993-01-01

    UltraFlex is the generic term for a solar array system which delivers on-orbit power in the 400 to 6,000 watt per wing sizes with end-of-life specific power performance ranging to 150 watts-per-kilogram. Such performance is accomplished with off-the-shelf solar cells and state-of-the-art materials and processes. Much of the recent work in photovoltaics is centered on advanced solar cell development. Successful as such work has been, no integrated solar array system has emerged which meets NASA's stated goals of 'increasing the end-of-life performance of space solar cells and arrays while minimizing their mass and cost.' This issue is addressed; namely, is there an array design that satisfies the usual requirements for space-rated hardware and that is inherently reliable, inexpensive, easily manufactured and simple, which can be used with both advanced cells currently in development and with inexpensive silicon cells? The answer is yes. The UltraFlex array described incorporates use of a blanket substrate which is thermally compatible with silicon and other materials typical of advanced multi-junction devices. The blanket materials are intrinsically insensitive to atomic oxygen degradation, are space rated, and are compatible with standard cell bonding processes. The deployment mechanism is simple and reliable and the structure is inherently stiff (high natural frequency). Mechanical vibration modes are also readily damped. The basic design is presented as well as supporting analysis and development tests.

  5. A high specific power solar array for low to mid-power spacecraft

    NASA Astrophysics Data System (ADS)

    Jones, P. Alan; White, Stephen F.; Harvey, T. Jeffery; Smith, Brian S.

    1993-05-01

    UltraFlex is the generic term for a solar array system which delivers on-orbit power in the 400 to 6,000 watt per wing sizes with end-of-life specific power performance ranging to 150 watts-per-kilogram. Such performance is accomplished with off-the-shelf solar cells and state-of-the-art materials and processes. Much of the recent work in photovoltaics is centered on advanced solar cell development. Successful as such work has been, no integrated solar array system has emerged which meets NASA's stated goals of 'increasing the end-of-life performance of space solar cells and arrays while minimizing their mass and cost.' This issue is addressed; namely, is there an array design that satisfies the usual requirements for space-rated hardware and that is inherently reliable, inexpensive, easily manufactured and simple, which can be used with both advanced cells currently in development and with inexpensive silicon cells? The answer is yes. The UltraFlex array described incorporates use of a blanket substrate which is thermally compatible with silicon and other materials typical of advanced multi-junction devices. The blanket materials are intrinsically insensitive to atomic oxygen degradation, are space rated, and are compatible with standard cell bonding processes. The deployment mechanism is simple and reliable and the structure is inherently stiff (high natural frequency). Mechanical vibration modes are also readily damped. The basic design is presented as well as supporting analysis and development tests.

  6. Impact evaluation of conducted UWB transients on loads in power-line networks

    NASA Astrophysics Data System (ADS)

    Li, Bing; Månsson, Daniel

    2017-09-01

    Faced with the ever-increasing dependence on diverse electronic devices and systems, the proliferation of potential electromagnetic interference (EMI) has become a critical threat to reliable operation. A typical issue is whether electronics in power-line networks can operate reliably when exposed to a harsh electromagnetic environment. In this paper, we consider a conducted ultra-wideband (UWB) disturbance, as an example of intentional electromagnetic interference (IEMI) source, and perform the impact evaluation at the loads in a network. With the aid of fast Fourier transform (FFT), the UWB transient is characterized in the frequency domain. Based on a modified Baum-Liu-Tesche (BLT) method, the EMI received at the loads, with complex impedance, is computed. Through inverse FFT (IFFT), we obtain time-domain responses of the loads. To evaluate the impact on loads, we employ five common but important quantifiers, i.e., time-domain peak, total signal energy, peak signal power, peak time rate of change and peak time integral of the pulse. Moreover, to perform a comprehensive analysis, we also investigate the effects of the attributes (capacitive, resistive, or inductive) of other loads connected to the network, the rise time and pulse width of the UWB transient, and the lengths of power lines. It is seen that, for the loads distributed in a network, the impact evaluation of IEMI should be based on the characteristics of the IEMI source and the network features, such as load impedances, layout, and characteristics of cables.
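
    The BLT network solution is not reproduced here; the sketch below only evaluates the five impact quantifiers named above for a synthetic double-exponential pulse standing in for the voltage received at one load, with an assumed 50-ohm load resistance and illustrative pulse parameters.

    ```python
    # Evaluating the five impact quantifiers for a synthetic UWB-like pulse.
    import numpy as np

    fs = 20e9                                  # sample rate, 20 GS/s
    t = np.arange(0, 20e-9, 1 / fs)            # 20 ns record
    alpha, beta, v0 = 2.0e8, 2.5e9, 1.0        # illustrative pulse parameters
    v = v0 * (np.exp(-alpha * t) - np.exp(-beta * t))   # volts at the load
    R = 50.0                                   # assumed load resistance, ohms

    peak_amplitude = np.max(np.abs(v))                      # time-domain peak, V
    total_energy   = np.trapz(v**2 / R, t)                  # total signal energy, J
    peak_power     = np.max(v**2 / R)                       # peak signal power, W
    peak_rate      = np.max(np.abs(np.gradient(v, t)))      # peak time rate of change, V/s
    peak_integral  = np.max(np.abs(np.cumsum(v) / fs))      # peak time integral, V*s

    for name, val in [("peak amplitude [V]", peak_amplitude),
                      ("total energy [J]", total_energy),
                      ("peak power [W]", peak_power),
                      ("peak dV/dt [V/s]", peak_rate),
                      ("peak integral [V*s]", peak_integral)]:
        print(f"{name:22s} {val:.3e}")
    ```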

  7. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.

  8. Neutron Detection With Ultra-Fast Digitizer and Pulse Identification Techniques on DIII-D

    NASA Astrophysics Data System (ADS)

    Zhu, Y. B.; Heidbrink, W. W.; Piglowski, D. A.

    2013-10-01

    A prototype system for neutron detection with an ultra-fast digitizer and pulse identification techniques has been implemented on the DIII-D tokamak. The system consists of a cylindrical neutron fission chamber, a charge sensitive amplifier, and a GaGe Octopus 12-bit CompuScope digitizer card installed in a Linux computer. Digital pulse identification techniques have been successfully performed at maximum data acquisition rate of 50 MSPS with on-board memory of 2 GS. Compared to the traditional approach with fast nuclear electronics for pulse counting, this straightforward digital solution has many advantages, including reduced expense, improved accuracy, higher counting rate, and easier maintenance. The system also provides the capability of neutron-gamma pulse shape discrimination and pulse height analysis. Plans for the upgrade of the old DIII-D neutron counting system with these techniques will be presented. Work supported by the US Department of Energy under SC-G903402, and DE-FC02-04ER54698.

  9. Modeling reality

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.

  10. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
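
    As one simple illustration of the final step described above (not the procedure used in the SIFT validation itself), the sketch below estimates the probability that the clock read error exceeds the bound assumed by the skew theorem, from a set of hypothetical measured error samples, using both the empirical exceedance fraction and an exponential-tail fit.

    ```python
    # Estimating the probability that the clock read error exceeds the assumed
    # bound, from hypothetical measured samples.
    import numpy as np

    rng = np.random.default_rng(2)
    samples_us = rng.exponential(scale=2.0, size=5_000)   # hypothetical read errors, microseconds
    bound_us = 25.0                                        # bound assumed by the skew theorem

    # Empirical exceedance (may be zero for a rare event)...
    empirical = np.mean(samples_us > bound_us)
    # ...and a parametric estimate from an exponential fit to the data.
    scale_hat = samples_us.mean()
    parametric = np.exp(-bound_us / scale_hat)

    print(f"empirical  P(error > bound) ~ {empirical:.1e}")
    print(f"exponential-fit estimate    ~ {parametric:.1e}")
    ```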

  11. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  12. Advances in the production of freeform optical surfaces

    NASA Astrophysics Data System (ADS)

    Tohme, Yazid E.; Luniya, Suneet S.

    2007-05-01

    Recent market demands for free-form optics have challenged the industry to find new methods and techniques to manufacture free-form optical surfaces with a high level of accuracy and reliability. Production techniques are becoming a mix of multi-axis single point diamond machining centers or deterministic ultra precision grinding centers coupled with capable measurement systems to accomplish the task. It has been determined that a complex software tool is required to seamlessly integrate all aspects of the manufacturing process chain. Advances in computational power and improved performance of computer controlled precision machinery have driven the use of such software programs to measure, visualize, analyze, produce and re-validate the 3D free-form design thus making the process of manufacturing such complex surfaces a viable task. Consolidation of the entire production cycle in a comprehensive software tool that can interact with all systems in design, production and measurement phase will enable manufacturers to solve these complex challenges providing improved product quality, simplified processes, and enhanced performance. The work being presented describes the latest advancements in developing such software package for the entire fabrication process chain for aspheric and free-form shapes. It applies a rational B-spline based kernel to transform an optical design in the form of parametrical definition (optical equation), standard CAD format, or a cloud of points to a central format that drives the simulation. This software tool creates a closed loop for the fabrication process chain. It integrates surface analysis and compensation, tool path generation, and measurement analysis in one package.

  13. Effects of computing time delay on real-time control systems

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
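
    A hedged illustration of the delay problem is sketched below: a discrete PI controller drives a simple first-order plant, with the control command applied either immediately or one sample late, as if the control computation consumed a full sampling interval. The plant, gains, and sampling interval are assumed for the example; this is not the STP/PID robot-trajectory study itself.

    ```python
    # Effect of a one-sample computing delay on a discrete PI loop driving a
    # first-order plant dx/dt = -a*x + b*u, integrated with forward Euler.

    def simulate(delay_steps: int, n: int = 200, dt: float = 0.01) -> float:
        a, b = 5.0, 5.0                 # plant parameters (illustrative)
        kp, ki = 4.0, 8.0               # PI gains (illustrative)
        x, integ, ref = 0.0, 0.0, 1.0
        u_hist = [0.0] * (delay_steps + 1)
        err_sq = 0.0
        for _ in range(n):
            e = ref - x
            integ += e * dt
            u_hist.append(kp * e + ki * integ)
            u = u_hist[-1 - delay_steps]        # apply a stale command if delayed
            x += dt * (-a * x + b * u)          # plant update
            err_sq += e * e * dt
        return err_sq

    print(f"integrated squared error, no delay       : {simulate(0):.4f}")
    print(f"integrated squared error, 1-sample delay : {simulate(1):.4f}")
    ```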

  14. Large-scale-system effectiveness analysis. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Foster, J.W.

    1979-11-01

    Objective of the research project has been the investigation and development of methods for calculating system reliability indices which have absolute, and measurable, significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation has been joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.

  15. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores providing one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.

  16. Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald

    2018-03-01

    Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes with potential for a better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from CDDIS + IGN + BKG archives for the realization of hourly five-constellation ultra-rapid orbit update. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy can be obviously improved for all the five satellite systems in comparison to the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites can be improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding accuracy improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions, respectively. Furthermore, the multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed by using the improved precise satellite orbits. Numerous results indicate that combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solutions can achieve the shortest time to first fix (TTFF) and highest positioning accuracy in all coordinate components. With the addition of the BDS, GLONASS and Galileo observations to the GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min with a 7° cutoff elevation, while the TTFF of GPS-only, GR, GE and GC PPP AR solution is 28, 15, 20 and 17 min, respectively. As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions decrease dramatically, but there is no evident decrease for the accuracy of GCRE fixed solutions which can still achieve an accuracy of a few centimeters in the east and north components.

  17. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  18. Computer Disaster Recovery Planning.

    ERIC Educational Resources Information Center

    Clark, Orvin R.

    Arguing that complete, reliable, up-to-date system documentation is critical for every data processing environment, this paper on computer disaster recovery planning begins by discussing the importance of such documentation both for recovering from a systems crash, and for system maintenance and enhancement. The various components of system…

  19. Gigascale Silicon Photonic Transmitters Integrating HBT-based Carrier-injection Electroabsorption Modulator Structures

    NASA Astrophysics Data System (ADS)

    Fu, Enjin

    Demand for more bandwidth is rapidly increasing, driven by data intensive applications such as high-definition (HD) video streaming, cloud storage, and terascale computing applications. Next-generation high-performance computing systems require power efficient chip-to-chip and intra-chip interconnect yielding densities on the order of 1 Tbps/cm2. The performance requirements of such systems are the driving force behind the development of silicon integrated optical interconnect, providing a cost-effective solution for fully integrated optical interconnect systems on a single substrate. Compared to conventional electrical interconnect, optical interconnects have several advantages, including frequency independent insertion loss resulting in ultra wide bandwidth and link latency reduction. For high-speed optical transmitter modules, the optical modulator is a key component of the optical I/O channel. This thesis presents a silicon integrated optical transmitter module design based on a novel silicon HBT-based carrier injection electroabsorption modulator (EAM), which has the merits of wide optical bandwidth, high speed, low power, low drive voltage, small footprint, and high modulation efficiency. The structure, mechanism, and fabrication of the modulator structure are discussed, followed by the electrical modeling of the post-processed modulator device. The design and realization of a 10 Gbps monolithic optical transmitter module integrating the driver circuit architecture and the HBT-based EAM device in a 130 nm BiCMOS process is discussed. For high power efficiency, a 6 Gbps ultra-low power driver IC implemented in a 130 nm BiCMOS process is presented. The driver IC incorporates an integrated 2^7-1 pseudo-random bit sequence (PRBS) generator for reliable high-speed testing, and a driver circuit featuring digitally-tuned pre-emphasis signal strength. With outstanding drive capability, the driver module can be applied to a wide range of carrier injection modulators and light-emitting diodes (LED) with drive voltage requirements below 1.5 V. Measurement results show that an optical link based on a 70 MHz red LED works well at 300 Mbps by using the pre-emphasis driver module. A traveling wave electrode (TWE) modulator structure is presented, including a novel design methodology to address process limitations imposed by a commercial silicon fabrication technology. Results from 3D full wave EM simulation demonstrate the application of the design methodology to achieve specifications, including phase velocity matching, insertion loss, and impedance matching. Results show the HBT-based TWE-EAM system has a bandwidth higher than 60 GHz.

  20. Accuracy, intra- and inter-unit reliability, and comparison between GPS and UWB-based position-tracking systems used for time-motion analyses in soccer.

    PubMed

    Bastida Castillo, Alejandro; Gómez Carmona, Carlos D; De la Cruz Sánchez, Ernesto; Pino Ortega, José

    2018-05-01

    There is interest in the accuracy and inter-unit reliability of position-tracking systems to monitor players. Research into this technology, although relatively recent, has grown exponentially in recent years, and it is difficult to find a professional team sport that does not use at least Global Positioning System (GPS) technology. The aim of this study is to determine the accuracy of both GPS-based and Ultra Wide Band (UWB)-based systems on a soccer field and their inter- and intra-unit reliability. A secondary aim is to compare them for practical applications in sport science. Following institutional ethical approval and familiarization, 10 healthy and well-trained former soccer players (20 ± 1.6 years, 1.76 ± 0.08 m, and 69.5 ± 9.8 kg) performed three course tests: (i) linear course, (ii) circular course, and (iii) a zig-zag course, all using UWB and GPS technologies. The average speed and distance covered were compared with timing gates and the real distance as references. The UWB technology showed better accuracy (bias: 0.57-5.85%), test-retest reliability (%TEM: 1.19), and inter-unit reliability (bias: 0.18) in determining distance covered than the GPS technology (bias: 0.69-6.05%; %TEM: 1.47; bias: 0.25) overall. Also, UWB showed better results (bias: 0.09; ICC: 0.979; bias: 0.01) for mean velocity measurement than GPS (bias: 0.18; ICC: 0.951; bias: 0.03).
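
    For readers unfamiliar with the agreement statistics quoted above, the sketch below computes a percentage bias against a criterion distance and a typical error of measurement (TEM, taken here as the standard deviation of test-retest differences divided by √2) from made-up numbers; the study's exact computation may differ.

    ```python
    # Percentage bias against a criterion distance and test-retest TEM,
    # with made-up device readings for illustration.
    import numpy as np

    criterion_m = np.array([400.0, 400.0, 400.0, 400.0])     # true course distance
    device_m    = np.array([402.3, 398.9, 403.1, 401.5])      # device-reported distance
    bias_pct = np.mean((device_m - criterion_m) / criterion_m) * 100.0

    test   = np.array([401.8, 399.2, 402.5, 400.9])            # trial 1
    retest = np.array([402.6, 398.3, 403.4, 401.7])            # trial 2
    tem = np.std(test - retest, ddof=1) / np.sqrt(2.0)
    tem_pct = tem / np.mean([test.mean(), retest.mean()]) * 100.0

    print(f"bias = {bias_pct:+.2f} %   TEM = {tem:.2f} m ({tem_pct:.2f} %)")
    ```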

  1. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.

  2. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  3. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  4. NASA Formal Methods Workshop, 1990

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W. (Compiler)

    1990-01-01

    The workshop brought together researchers involved in the NASA formal methods research effort for detailed technical interchange and provided a mechanism for interaction with representatives from the FAA and the aerospace industry. The workshop also included speakers from industry to debrief the formal methods researchers on the current state of practice in flight critical system design, verification, and certification. The goals were: define and characterize the verification problem for ultra-reliable life critical flight control systems and the current state of practice in industry today; determine the proper role of formal methods in addressing these problems, and assess the state of the art and recent progress toward applying formal methods to this area.

  5. Active optical control system design of the SONG-China Telescope

    NASA Astrophysics Data System (ADS)

    Ye, Yu; Kou, Songfeng; Niu, Dongsheng; Li, Cheng; Wang, Guomin

    2012-09-01

    The standard SONG node control-system structure is presented. The project's active optical control system is a distributed system comprising a host computer and a slave intelligent controller. The host control computer collects information from the wavefront sensor and sends commands to the slave computer to close the control loop. A programmable logic controller (PLC) is used as the intelligent controller. Combining an industrial personal computer (IPC) with a PLC yields a powerful and reliable control system.

  6. Using Penelope to assess the correctness of NASA Ada software: A demonstration of formal methods as a counterpart to testing

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey

    1993-01-01

    Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.

  7. Low background materials and fabrication techniques for cables and connectors in the Majorana Demonstrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, M.; Abgrall, N.; Alvis, S. I.

    Here, the Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a tonne scale 76Ge-based search (the LEGEND collaboration). In the Demonstrator, germanium detectors operate in an ultra-pure vacuum cryostat at 80 K. One special challenge of an ultra-pure environment is to develop reliable cables, connectors, and electronics that do not significantly contribute to the radioactive background of the experiment. This paper highlights the experimental requirements and how these requirements were met for the Majorana Demonstrator, including plans to upgrade the wiring for higher reliability in the summer of 2018. Also described are requirements for LEGEND and the R&D efforts underway to meet these additional requirements.

  8. Reliability testing of ultra-low noise InGaAs quad photoreceivers

    NASA Astrophysics Data System (ADS)

    Joshi, Abhay M.; Datta, Shubhashish; Prasad, Narasimha; Sivertz, Michael

    2018-02-01

    We have developed ultra-low noise quadrant InGaAs photoreceivers for multiple applications ranging from Laser Interferometric Gravitational Wave Detection to 3D Wind Profiling. Devices with diameters of 0.5 mm, 1 mm, and 2 mm were processed, with the nominal capacitance of a single quadrant of a 1 mm quad photodiode being 2.5 pF. The 1 mm diameter InGaAs quad photoreceivers, using low-noise, bipolar-input OpAmp circuitry, exhibit an equivalent input noise per quadrant of <1.7 pA/√Hz in the 2 to 20 MHz frequency range. The InGaAs quad photoreceivers have undergone the following reliability tests: 30 MeV proton radiation up to a Total Ionizing Dose (TID) of 50 krad, mechanical shock, and sinusoidal vibration.

  9. Development and ultra-structure of an ultra-thin silicone epidermis of bioengineered alternative tissue.

    PubMed

    Wessels, Quenton; Pretorius, Etheresia

    2015-08-01

    Burn wound care today has a primary objective of temporary or permanent wound closure. Commercially available engineered alternative tissues have become a valuable adjunct to the treatment of burn injuries. Their constituents can be biological, alloplastic or a combination of both. Here the authors describe the aspects of the development of a siloxane epidermis for a collagen-glycosaminoglycan and for nylon-based artificial skin replacement products. A method to fabricate an ultra-thin epidermal equivalent is described. Pores, to allow the escape of wound exudate, were punched and a tri-filament nylon mesh or collagen scaffold was imbedded and silicone polymerisation followed at 120°C for 5 minutes. The ultra-structure of these bilaminates was assessed through scanning electron microscopy. An ultra-thin biomedical grade siloxane film was reliably created through precision coating on a pre-treated polyethylene terephthalate carrier. © 2013 The Authors. International Wound Journal © 2013 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  10. A system-level approach for embedded memory robustness

    NASA Astrophysics Data System (ADS)

    Mariani, Riccardo; Boschi, Gabriele

    2005-11-01

    New ultra-deep submicron technologies are bringing not only new advantages such as extraordinary transistor densities and unforeseen performance, but also new uncertainties such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution and increased sensitivity to internal and external disturbances. Nowadays, embedded memories take advantage of these new technologies and are used more and more in systems: therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and that should be done in an efficient way. It means that reliability and costs, such as overhead and performance degradation, must be efficiently tuned based on the system and on the application. Moreover, the new emerging norms for safety-critical applications such as IEC 61508 require precise answers in terms of robustness also in the case of memory systems. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, where the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work has been developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.
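
    As a concrete illustration of the kind of detect-and-correct protection the paper builds on (not the paper's configurable, system-aware scheme), here is a minimal Hamming(7,4) single-error-correcting encoder/decoder:

```python
# Minimal sketch of single-error correction with a Hamming(7,4) code, as an
# illustration of the kind of detect-and-correct protection applied to
# embedded memories; the paper's configurable, system-aware scheme is not
# reproduced here.
def hamming74_encode(d):
    """d: list of 4 data bits -> list of 7 code bits (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """c: list of 7 code bits -> (corrected 4 data bits, error position or 0)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s4       # 0 means no single-bit error detected
    if pos:
        c = c[:]
        c[pos - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]], pos

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                          # inject a single-bit soft error
decoded, err_pos = hamming74_decode(code)
assert decoded == word
print(f"corrected single-bit error at position {err_pos}")
```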

  11. Microstructured graphene arrays for highly sensitive flexible tactile sensors.

    PubMed

    Zhu, Bowen; Niu, Zhiqiang; Wang, Hong; Leow, Wan Ru; Wang, Hua; Li, Yuangang; Zheng, Liyan; Wei, Jun; Huo, Fengwei; Chen, Xiaodong

    2014-09-24

    A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers. The combination of graphene and anisotropic microstructures endows this sensor with an ultra-high sensitivity of -5.53 kPa(-1), an ultra-fast response time of only 0.2 ms, as well as good reliability, rendering it promising for the application of tactile sensing in artificial skin and human-machine interface. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Preliminary Analysis of LORAN-C System Reliability for Civil Aviation.

    DTIC Science & Technology

    1981-09-01

    overview of the analysis technique. Section 3 describes the computerized LORAN-C coverage model which is used extensively in the reliability analysis...Xth Plenary Assembly, Geneva, 1963, published by the International Telecommunications Union. Braff, R., Computer program to calculate a Markov Chain Reliability Model, unpublished work, MITRE Corporation.

  13. Feasibility of an ultra-low power digital signal processor platform as a basis for a fully implantable brain-computer interface system.

    PubMed

    Wang, Po T; Gandasetiawan, Keulanna; McCrimmon, Colin M; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H

    2016-08-01

    A fully implantable brain-computer interface (BCI) can be a practical tool to restore independence to those affected by spinal cord injury. We envision that such a BCI system will invasively acquire brain signals (e.g. electrocorticogram) and translate them into control commands for external prostheses. The feasibility of such a system was tested by implementing its benchtop analogue, centered around a commercial, ultra-low power (ULP) digital signal processor (DSP, TMS320C5517, Texas Instruments). A suite of signal processing and BCI algorithms, including (de)multiplexing, Fast Fourier Transform, power spectral density, principal component analysis, linear discriminant analysis, Bayes rule, and finite state machine was implemented and tested in the DSP. The system's signal acquisition fidelity was tested and characterized by acquiring harmonic signals from a function generator. In addition, the BCI decoding performance was tested, first with signals from a function generator, and subsequently using human electroencephalogram (EEG) during an eyes-opening and eyes-closing task. On average, the system spent 322 ms to process and analyze 2 s of data. Crosstalk (< -65 dB) and harmonic distortion (~1%) were minimal. Timing jitter averaged 49 μs per 1000 ms. The online BCI decoding accuracies were 100% for both function generator and EEG data. These results show that a complex BCI algorithm can be executed on an ULP DSP without compromising performance. This suggests that the proposed hardware platform may be used as a basis for future, fully implantable BCI systems.
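
    A minimal offline sketch of one stage such a decoding chain might contain, assuming a Welch power-spectral-density estimate and a simple alpha-band threshold for the eyes-open/eyes-closed task; the sampling rate, synthetic signal and threshold are illustrative, not the paper's firmware:

```python
# Minimal offline sketch of one decoding stage suggested by the abstract:
# compute alpha-band (8-12 Hz) power from a 2 s EEG window and classify
# eyes-closed vs. eyes-open by a simple threshold. Synthetic data and the
# threshold value are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import welch

FS = 256                     # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / FS)
rng = np.random.default_rng(0)
# Synthetic "eyes closed" segment: strong 10 Hz alpha rhythm plus noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=FS, nperseg=FS)
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])

THRESHOLD = 1e-11            # illustrative decision boundary (V^2)
state = "eyes closed" if alpha_power > THRESHOLD else "eyes open"
print(f"alpha-band power = {alpha_power:.2e} V^2 -> {state}")
```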

  14. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

    Using the forward period analysis (FPA), we obtain the period of a Morse oscillator and a mathematical pendulum system to an accuracy of 100 significant digits. From these results, the long-term [0, 10^60] (time unit) solutions, ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.
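
    For context, the closed-form elliptic-integral period of the simple pendulum can be evaluated to roughly 100 significant digits with mpmath, giving the kind of reference value FPA targets; this is not the FPA algorithm, and the length, gravity and amplitude below are assumed:

```python
# Hedged sketch: compute the mathematical-pendulum period to ~100 significant
# digits from its closed-form elliptic-integral expression with mpmath. This
# is a cross-check for the kind of reference value FPA produces, not the
# forward period analysis algorithm itself. L, g and theta0 are illustrative.
from mpmath import mp, mpf, sin, sqrt, ellipk, nstr

mp.dps = 110                       # work with a few guard digits beyond 100
L = mpf(1)                         # pendulum length (m), assumed
g = mpf("9.80665")                 # standard gravity (m/s^2)
theta0 = mpf(1)                    # initial amplitude (rad), assumed

m = sin(theta0 / 2) ** 2           # elliptic parameter m = k^2
T = 4 * sqrt(L / g) * ellipk(m)    # exact period of the simple pendulum
print(nstr(T, 100))
```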

  15. Ground-to-satellite quantum teleportation.

    PubMed

    Ren, Ji-Gang; Xu, Ping; Yong, Hai-Lin; Zhang, Liang; Liao, Sheng-Kai; Yin, Juan; Liu, Wei-Yue; Cai, Wen-Qi; Yang, Meng; Li, Li; Yang, Kui-Xing; Han, Xuan; Yao, Yong-Qiang; Li, Ji; Wu, Hai-Yan; Wan, Song; Liu, Lei; Liu, Ding-Quan; Kuang, Yao-Wu; He, Zhi-Ping; Shang, Peng; Guo, Cheng; Zheng, Ru-Hua; Tian, Kai; Zhu, Zhen-Cai; Liu, Nai-Le; Lu, Chao-Yang; Shu, Rong; Chen, Yu-Ao; Peng, Cheng-Zhi; Wang, Jian-Yu; Pan, Jian-Wei

    2017-09-07

    An arbitrary unknown quantum state cannot be measured precisely or replicated perfectly. However, quantum teleportation enables unknown quantum states to be transferred reliably from one object to another over long distances, without physical travelling of the object itself. Long-distance teleportation is a fundamental element of protocols such as large-scale quantum networks and distributed quantum computation. But the distances over which transmission was achieved in previous teleportation experiments, which used optical fibres and terrestrial free-space channels, were limited to about 100 kilometres, owing to the photon loss of these channels. To realize a global-scale 'quantum internet' the range of quantum teleportation needs to be greatly extended. A promising way of doing so involves using satellite platforms and space-based links, which can connect two remote points on Earth with greatly reduced channel loss because most of the propagation path of the photons is in empty space. Here we report quantum teleportation of independent single-photon qubits from a ground observatory to a low-Earth-orbit satellite, through an uplink channel, over distances of up to 1,400 kilometres. To optimize the efficiency of the link and to counter the atmospheric turbulence in the uplink, we use a compact ultra-bright source of entangled photons, a narrow beam divergence and high-bandwidth and high-accuracy acquiring, pointing and tracking. We demonstrate successful quantum teleportation of six input states in mutually unbiased bases with an average fidelity of 0.80 ± 0.01, well above the optimal state-estimation fidelity on a single copy of a qubit (the classical limit). Our demonstration of a ground-to-satellite uplink for reliable and ultra-long-distance quantum teleportation is an essential step towards a global-scale quantum internet.

  16. Ground-to-satellite quantum teleportation

    NASA Astrophysics Data System (ADS)

    Ren, Ji-Gang; Xu, Ping; Yong, Hai-Lin; Zhang, Liang; Liao, Sheng-Kai; Yin, Juan; Liu, Wei-Yue; Cai, Wen-Qi; Yang, Meng; Li, Li; Yang, Kui-Xing; Han, Xuan; Yao, Yong-Qiang; Li, Ji; Wu, Hai-Yan; Wan, Song; Liu, Lei; Liu, Ding-Quan; Kuang, Yao-Wu; He, Zhi-Ping; Shang, Peng; Guo, Cheng; Zheng, Ru-Hua; Tian, Kai; Zhu, Zhen-Cai; Liu, Nai-Le; Lu, Chao-Yang; Shu, Rong; Chen, Yu-Ao; Peng, Cheng-Zhi; Wang, Jian-Yu; Pan, Jian-Wei

    2017-09-01

    An arbitrary unknown quantum state cannot be measured precisely or replicated perfectly. However, quantum teleportation enables unknown quantum states to be transferred reliably from one object to another over long distances, without physical travelling of the object itself. Long-distance teleportation is a fundamental element of protocols such as large-scale quantum networks and distributed quantum computation. But the distances over which transmission was achieved in previous teleportation experiments, which used optical fibres and terrestrial free-space channels, were limited to about 100 kilometres, owing to the photon loss of these channels. To realize a global-scale ‘quantum internet’ the range of quantum teleportation needs to be greatly extended. A promising way of doing so involves using satellite platforms and space-based links, which can connect two remote points on Earth with greatly reduced channel loss because most of the propagation path of the photons is in empty space. Here we report quantum teleportation of independent single-photon qubits from a ground observatory to a low-Earth-orbit satellite, through an uplink channel, over distances of up to 1,400 kilometres. To optimize the efficiency of the link and to counter the atmospheric turbulence in the uplink, we use a compact ultra-bright source of entangled photons, a narrow beam divergence and high-bandwidth and high-accuracy acquiring, pointing and tracking. We demonstrate successful quantum teleportation of six input states in mutually unbiased bases with an average fidelity of 0.80 ± 0.01, well above the optimal state-estimation fidelity on a single copy of a qubit (the classical limit). Our demonstration of a ground-to-satellite uplink for reliable and ultra-long-distance quantum teleportation is an essential step towards a global-scale quantum internet.

  17. Development of a quadrupole-based Secondary-Ion Mass Spectrometry (SIMS) system at Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Aron, Paul R.; Liff, Dale R.

    1990-01-01

    The design, construction, and initial use of an ion microprobe to carry out secondary ion mass spectrometry (SIMS) of solid samples is reported. The system is composed of a differentially pumped custom-made UHV (Ultra High Vacuum) chamber, a quadrupole mass spectrometer and a telefocus A-DIDA ion gun with the capability of producing beams of Cesium, as well as inert and reactive gases. The computer control and acquisition of the data were designed and implemented using a personal computer with plug-in boards, and external circuitry built as required to suit the system needs. The software is being developed by using a FORTH-like language. Initial tests aimed at characterizing the system, as well as preliminary surface and depth-profiling studies are presently underway.

  18. Digital tooth-based superimposition method for assessment of alveolar bone levels on cone-beam computed tomography images.

    PubMed

    Romero-Delmastro, Alejandro; Kadioglu, Onur; Currier, G Frans; Cook, Tanner

    2014-08-01

    Cone-beam computed tomography images have been previously used for evaluation of alveolar bone levels around teeth before, during, and after orthodontic treatment. Protocols described in the literature have been vague, have used unstable landmarks, or have required several software programs, file conversions, or hand tracings, among other factors that could compromise the precision of the measurements. The purposes of this article are to describe a totally digital tooth-based superimposition method for the quantitative assessment of alveolar bone levels and to evaluate its reliability. Ultra cone-beam computed tomography images (0.1-mm reconstruction) from 10 subjects were obtained from the data pool of the University of Oklahoma; 80 premolars were measured twice by the same examiner and a third time by a second examiner to determine alveolar bone heights and thicknesses before and more than 6 months after orthodontic treatment using OsiriX (version 3.5.1; Pixeo, Geneva, Switzerland). Intraexaminer and interexaminer reliabilities were evaluated, and Dahlberg's formula was used to calculate the error of the measurements. Cross-sectional and longitudinal evaluations of alveolar bone levels were possible using a digital tooth-based superimposition method. The mean differences for buccal alveolar crest heights and thicknesses were below 0.10 mm for the same examiner and below 0.17 mm for all examiners. The ranges of errors for any measurement were between 0.02 and 0.23 mm for intraexaminer errors, and between 0.06 and 0.29 mm for interexaminer errors. This protocol can be used for cross-sectional or longitudinal assessment of alveolar bone levels with low interexaminer and intraexaminer errors, and it eliminates the use of less reliable or less stable landmarks and the need for multiple software programs and image printouts. Standardization of the methods for bone assessment in orthodontics is necessary; this method could be the answer to this need. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
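
    A small sketch of Dahlberg's formula, the error measure the study cites, computed over a pair of hypothetical repeated measurement passes:

```python
# Hedged sketch of Dahlberg's formula, which the study uses to express the
# method error between repeated measurements: ME = sqrt(sum(d_i^2) / (2n)),
# where d_i is the difference between the first and second measurement of
# the same site. The numbers below are illustrative, not the study's data.
import math

first_pass  = [2.10, 1.85, 2.40, 1.95, 2.25]   # alveolar crest heights (mm), assumed
second_pass = [2.05, 1.90, 2.35, 2.00, 2.20]

diffs = [a - b for a, b in zip(first_pass, second_pass)]
dahlberg_error = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
print(f"Dahlberg method error = {dahlberg_error:.3f} mm")
```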

  19. Advanced telemetry systems for payloads. Technology needs, objectives and issues

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The current trends in advanced payload telemetry are the new developments in advanced modulation/coding, the applications of intelligent techniques, data distribution processing, and advanced signal processing methodologies. Concerted efforts will be required to design ultra-reliable man-rated software to cope with these applications. The intelligence embedded and distributed throughout various segments of the telemetry system will need to be overridden by an operator in case of life-threatening situations, making it a real-time integration issue. Suitable MIL standards on physical interfaces and protocols will be adopted to suit the payload telemetry system. New technologies and techniques will be developed for fast retrieval of mass data. Currently, these technology issues are being addressed to provide more efficient, reliable, and reconfigurable systems. There is a need, however, to change the operation culture. The current role of NASA as a leader in developing all the new innovative hardware should be altered to save both time and money. We should use all the available hardware/software developed by the industry and use the existing standards rather than inventing our own.

  20. Time Dependent Dielectric Breakdown in Copper Low-k Interconnects: Mechanisms and Reliability Models

    PubMed Central

    Wong, Terence K.S.

    2012-01-01

    The time dependent dielectric breakdown phenomenon in copper low-k damascene interconnects for ultra large-scale integration is reviewed. The loss of insulation between neighboring interconnects represents an emerging back end-of-the-line reliability issue that is not fully understood. After describing the main dielectric leakage mechanisms in low-k materials (Poole-Frenkel and Schottky emission), the major dielectric reliability models that had appeared in the literature are discussed, namely: the Lloyd model, 1/E model, thermochemical E model, E1/2 models, E2 model and the Haase model. These models can be broadly categorized into those that consider only intrinsic breakdown (Lloyd, 1/E, E and Haase) and those that take into account copper migration in low-k materials (E1/2, E2). For each model, the physical assumptions and the proposed breakdown mechanism will be discussed, together with the quantitative relationship predicting the time to breakdown and supporting experimental data. Experimental attempts on validation of dielectric reliability models using data obtained from low field stressing are briefly discussed. The phenomenon of soft breakdown, which often precedes hard breakdown in porous ultra low-k materials, is highlighted for future research.
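
    As an illustration of one of the reviewed models, the sketch below evaluates the thermochemical E-model scaling of time-to-breakdown; the prefactor, field-acceleration factor and activation energy are placeholders rather than fitted values from the literature:

```python
# Hedged sketch of the thermochemical E model discussed in the review:
# time-to-breakdown scales as TTF = A * exp(-gamma * E) * exp(Ea / (kB * T)).
# The prefactor A, field-acceleration factor gamma and activation energy Ea
# below are placeholders, not fitted values from the paper.
import math

K_B = 8.617e-5          # Boltzmann constant (eV/K)

def ttf_e_model(E_mv_cm, T_kelvin, A=1e-7, gamma=4.0, Ea=0.8):
    """Time to breakdown (s) under the E model; E in MV/cm, gamma in cm/MV."""
    return A * math.exp(-gamma * E_mv_cm) * math.exp(Ea / (K_B * T_kelvin))

# Extrapolate from accelerated stress fields toward a use-condition field.
for field in (4.0, 2.0, 0.5):    # MV/cm
    print(f"E = {field:4.1f} MV/cm -> TTF ~ {ttf_e_model(field, 398.0):.3e} s")
```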

  1. Ultra-Low Background Measurements Of Decayed Aerosol Filters

    NASA Astrophysics Data System (ADS)

    Miley, H.

    2009-04-01

    To experimentally evaluate the opportunity to apply ultra-low background measurement methods to samples collected, for instance, by the Comprehensive Test Ban Treaty International Monitoring System (IMS), aerosol samples collected on filter media were measured using HPGe spectrometers of varying low-background technology approaches. In this way, realistic estimates of the impact of low-background methodology can be assessed on the Minimum Detectable Activities obtained in systems such as the IMS. The current measurement requirement of stations in the IMS is 30 microBq per cubic meter of air for 140Ba, or about 10^6 fissions per daily sample. Importantly, this is for a fresh aerosol filter. Decay times varying from 3 days to one week reduce the intrinsic background from radon daughters in the sample. Computational estimates of the improvement factor for these decayed filters for underground-based HPGe in clean shielding materials are orders of magnitude less, even when the decay of the isotopes of interest is included.
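
    For orientation, a Currie-style minimum detectable activity estimate of the sort implied here can be sketched as follows; the efficiency, gamma yield, background, live time and air volume are assumed placeholders, and decay corrections are omitted:

```python
# Hedged sketch of a Currie-style Minimum Detectable Activity estimate, the
# figure of merit the abstract refers to: MDA = (2.71 + 4.65*sqrt(B)) /
# (eff * yield * live_time), normalised by sampled air volume. All input
# values are placeholders, not IMS station data; 140Ba decay is ignored.
import math

background_counts = 50.0       # counts in the peak region, assumed
efficiency = 0.05              # absolute detection efficiency, assumed
gamma_yield = 0.25             # emission probability of the analysed line, assumed
live_time = 86400.0            # counting time (s), one day
air_volume = 20000.0           # sampled air volume (m^3), assumed

detection_limit = 2.71 + 4.65 * math.sqrt(background_counts)   # counts
mda_bq = detection_limit / (efficiency * gamma_yield * live_time)
print(f"MDA ~ {1e6 * mda_bq / air_volume:.1f} microBq per m^3 of air")
```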

  2. Safety assessment of ultra-wideband antennas for microwave breast imaging.

    PubMed

    De Santis, Valerio; Sill, Jeff M; Bourqui, Jeremie; Fear, Elise C

    2012-04-01

    This article deals with the safety assessment of several ultra-wideband (UWB) antenna designs for use in prototype microwave breast imaging systems. First, the performances of the antennas are validated by comparison of measured and simulated data collected for a simple test case. An efficient approach to estimating the specific energy absorption (SA) is introduced and validated. Next, SA produced by the UWB antennas inside more realistic breast models is computed. In particular, the power levels and pulse repetition periods adopted for the SA evaluation follow the measurement protocol employed by a tissue sensing adaptive radar (TSAR) prototype system. Results indicate that the SA for the antennas examined is below limits prescribed in standards for exposure of the general population; however, the difficulties inherent in applying such standards to UWB exposures are discussed. The results also suggest that effective tools for the rapid evaluation of new sensors have been developed. © 2011 Wiley Periodicals, Inc.

  3. The art of fault-tolerant system reliability modeling

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1990-01-01

    A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead the use of several recently developed computer programs--SURE, ASSIST, STEM, PAWS--which automate the generation and solution of these models is described.

  4. A three-stage birandom program for unit commitment with wind power uncertainty.

    PubMed

    Zhang, Na; Li, Weidong; Liu, Rao; Lv, Quan; Sun, Liang

    2014-01-01

    The integration of large-scale wind power adds a significant uncertainty to power system planning and operating. The wind forecast error is decreased with the forecast horizon, particularly when it is from one day to several hours ahead. Integrating intraday unit commitment (UC) adjustment process based on updated ultra-short term wind forecast information is one way to improve the dispatching results. A novel three-stage UC decision method, in which the day-ahead UC decisions are determined in the first stage, the intraday UC adjustment decisions of subfast start units are determined in the second stage, and the UC decisions of fast-start units and dispatching decisions are determined in the third stage is presented. Accordingly, a three-stage birandom UC model is presented, in which the intraday hours-ahead forecasted wind power is formulated as a birandom variable, and the intraday UC adjustment event is formulated as a birandom event. The equilibrium chance constraint is employed to ensure the reliability requirement. A birandom simulation based hybrid genetic algorithm is designed to solve the proposed model. Some computational results indicate that the proposed model provides UC decisions with lower expected total costs.

  5. New modalities of ultrasound-based intima-media thickness, arterial stiffness and non-coronary vascular calcifications detection to assess cardiovascular risk.

    PubMed

    Flore, R; Ponziani, F R; Tinelli, G; Arena, V; Fonnesu, C; Nesci, A; Santoro, L; Tondi, P; Santoliquido, A

    2015-04-01

    Carotid intima-media thickness (c-IMT), arterial stiffness (AS) and vascular calcification (VC) are now considered important new markers of atherosclerosis and have been associated with increased prevalence of cardiovascular events. An accurate, reproducible and easy detection of these parameters could increase the prognostic value of the traditional cardiovascular risk factors in many subjects at low and intermediate risk. Today, c-IMT and AS can be measured by ultrasound, while cardiac computed tomography is the gold standard to quantify coronary VC, although concern about the reproducibility of the former and the safety of the latter have been raised. Nevertheless, a safe and reliable method to quantify non-coronary (i.e., peripheral) VC has not been detected yet. To review the most innovative and accurate ultrasound-based modalities of c-IMT and AS detection and to describe a novel UltraSound-Based Carotid, Aortic and Lower limbs Calcification Score (USB-CALCs, simply named CALC), allowing to quantify peripheral calcifications. Finally, to propose a system for cardiovascular risk reclassification derived from the global evaluation of "Quality Intima-Media Thickness", "Quality Arterial Stiffness", and "CALC score" in addition to the Framingham score.

  6. Recent advances in the UltraScan SOlution MOdeller (US-SOMO) hydrodynamic and small-angle scattering data analysis and simulation suite.

    PubMed

    Brookes, Emre; Rocco, Mattia

    2018-03-28

    The UltraScan SOlution MOdeller (US-SOMO) is a comprehensive, public domain, open-source suite of computer programs centred on hydrodynamic modelling and small-angle scattering (SAS) data analysis and simulation. We describe here the advances that have been implemented since its last official release (#3087, 2017), which are available from release #3141 for Windows, Linux and Mac operating systems. A major effort has been the transition from the legacy Qt3 cross platform software development and user interface library to the modern Qt5 release. Apart from improved graphical support, this has allowed the direct implementation of the newest, almost two-orders of magnitude faster version of the ZENO hydrodynamic computation algorithm for all operating systems. Coupled with the SoMo-generated bead models with overlaps, ZENO provides the most accurate translational friction computations from atomic-level structures available (Rocco and Byron Eur Biophys J 44:417-431, 2015a), with computational times comparable with or faster than those of other methods. In addition, it has allowed us to introduce the direct representation of each atom in a structure as a (hydrated) bead, opening interesting new modelling possibilities. In the small-angle scattering (SAS) part of the suite, an indirect Fourier transform Bayesian algorithm has been implemented for the computation of the pairwise distance distribution function from SAS data. Finally, the SAS HPLC module, recently upgraded with improved baseline correction and Gaussian decomposition of not baseline-resolved peaks and with advanced statistical evaluation tools (Brookes et al. J Appl Cryst 49:1827-1841, 2016), now allows automatic top-peak frame selection and averaging.

  7. Time Triggered Protocol (TTP) for Integrated Modular Avionics

    NASA Technical Reports Server (NTRS)

    Motzet, Guenter; Gwaltney, David A.; Bauer, Guenther; Jakovljevic, Mirko; Gagea, Leonard

    2006-01-01

    Traditional avionics computing systems are federated, with each system provided on a number of dedicated hardware units. Federated applications are physically separated from one another and analysis of the systems is undertaken individually. Integrated Modular Avionics (IMA) takes these federated functions and integrates them on a common computing platform in a tightly deterministic distributed real-time network of computing modules in which the different applications can run. IMA supports different levels of criticality in the same computing resource and provides a platform for implementation of fault tolerance through hardware and application redundancy. Modular implementation has distinct benefits in design, testing and system maintainability. This paper covers the requirements for fault tolerant bus systems used to provide reliable communication between IMA computing modules. An overview of the Time Triggered Protocol (TTP) specification and implementation as a reliable solution for IMA systems is presented. Application examples in aircraft avionics and a development system for future space application are covered. The commercially available TTP controller can be also be implemented in an FPGA and the results from implementation studies are covered. Finally future direction for the application of TTP and related development activities are presented.

  8. [Computer assisted application of mandarin speech test materials].

    PubMed

    Zhang, Hua; Wang, Shuo; Chen, Jing; Deng, Jun-Min; Yang, Xiao-Lin; Guo, Lian-Sheng; Zhao, Xiao-Yan; Shao, Guang-Yu; Han, De-Min

    2008-06-01

    To design an intelligent speech test system with reliability and convenience using computer software and to evaluate this system. First, the intelligent system was designed in the Delphi programming language. Second, the seven monosyllabic word lists recorded on CD were separated with Cool Edit Pro v2.1 software and put into the system as test materials. Finally, the intelligent system was used to evaluate the equivalence of difficulty between the seven lists. Fifty-five college students with normal hearing participated in the study. The seven monosyllabic word lists were of equivalent difficulty for the subjects (F = 1.582, P > 0.05), and the system proved reliable and convenient. The intelligent system is feasible for clinical practice.

  9. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread for high reliability, and allocates threads to the corresponding processor cores indicating a need for hardware checking. The method also configures the selective pairing facility to provide multiple independent cores and allocates threads to the corresponding processor cores indicating inherent resilience.

  10. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.
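
    A hedged back-of-the-envelope sketch of the life-testing argument underlying this conclusion: the zero-failure test time needed to demonstrate a failure rate λ at confidence C is t = -ln(1 - C)/λ, which for the ultra-reliability region is measured in billions of hours.

```python
# Hedged back-of-the-envelope sketch of the life-testing argument behind the
# paper's conclusion: the test time needed to demonstrate an ultra-low failure
# rate with zero observed failures is t = -ln(1 - C) / lambda. The numbers
# are illustrative, not taken from the paper.
import math

target_rate = 1e-9        # required failure rate (per hour)
confidence = 0.90         # demonstration confidence level

hours = -math.log(1.0 - confidence) / target_rate
years = hours / (24 * 365)
print(f"zero-failure test time: {hours:.2e} h (~{years:.1e} years on one unit)")
```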

  11. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
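
    A sketch of one standard way to compute a growth rate from failure data, the Crow-AMSAA (power-law NHPP) model; the abstract does not name the model actually used, and the failure times below are illustrative:

```python
# Hedged sketch of a Crow-AMSAA (power-law NHPP) reliability-growth fit, a
# standard way to compute a growth rate from failure times. The abstract does
# not say which model was used, and the failure times below are illustrative.
import math

failure_times = [50.0, 180.0, 400.0, 900.0, 2000.0]   # cumulative hours, assumed
T = 2500.0                                            # total observation time (h)

n = len(failure_times)
beta = n / sum(math.log(T / t) for t in failure_times)  # shape (growth) parameter
lam = n / T ** beta                                      # scale parameter
print(f"beta = {beta:.2f} (beta < 1 indicates reliability growth)")
print(f"expected failures by {T:.0f} h: {lam * T ** beta:.1f}")
```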

  12. Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.

    1996-01-01

    Accurate analysis of the reliability of a system requires that it account for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same. However, multiple phases are natural. We present a new computationally efficient technique for analysis of phased-mission systems where the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state space explosion. That limits the size of system that can be analyzed, and it is expensive in computation. We avoid the state space explosion. The phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on the system reliability/availability.

  13. Secure Intra-Body Wireless Communications (SIWiC) System Project

    NASA Technical Reports Server (NTRS)

    Ahmad, Aftab; Doggett, Terrence P.

    2011-01-01

    SIWiC System is a project to investigate, design and implement future wireless networks of implantable sensors in the body. This futuristic project is designed to make use of emerging and yet-to-emerge technologies, including ultra-wide band (UWB) wireless communications, smart implantable sensors, ultra low power networking protocols, security and privacy for bandwidth- and power-deficient devices, and quantum computing. Progress on each of these fronts is hindered by the need for breakthroughs. But, as we will see in this paper, these major challenges are being met or will be met in the near future. The SIWiC system is a network of in-situ wireless devices that are implanted to coordinate sensed data inside the body, such as symptom monitoring collected internally, or biometric data collected of an outside object from within the intra-body network. One node has the capability of communicating outside the body to send data or alarms to a relevant authority, e.g., a remote physician.

  14. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
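
    A minimal sketch of the simplest technique in this family, a conjugate Gamma prior on a constant failure rate updated with Poisson failure data; the prior parameters and observed data are illustrative:

```python
# Hedged sketch of the simplest Bayesian reliability calculation covered in
# such courses: a conjugate Gamma prior on a constant failure rate updated
# with Poisson (homogeneous) failure data. Prior parameters and the observed
# data are illustrative.
from scipy.stats import gamma

# Prior belief: Gamma(shape=a0, rate=b0) on the failure rate (per hour).
a0, b0 = 2.0, 1.0e5            # roughly 2 "prior failures" in 1e5 prior hours

observed_failures = 1
observed_hours = 5.0e4

a_post = a0 + observed_failures
b_post = b0 + observed_hours
posterior = gamma(a=a_post, scale=1.0 / b_post)   # SciPy uses scale = 1/rate

print(f"posterior mean failure rate = {posterior.mean():.2e} per hour")
print(f"95% upper credible bound    = {posterior.ppf(0.95):.2e} per hour")
```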

  15. Software Voting in Asynchronous NMR (N-Modular Redundancy) Computer Structures.

    DTIC Science & Technology

    1983-05-06

    added reliability is exchanged for increased system cost and decreased throughput. Some applications require extremely reliable systems, so the only...not the other way around. Although no systems provide abstract voting yet, as more applications are written for NMR systems, the programmers are going...throughput goes down, the overhead goes up. Mathematically: Overhead = Nonredundant Throughput - Actual Throughput (1). In this section, the actual throughput

  16. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  17. High-energy ultra-short pulse thin-disk lasers: new developments and applications

    NASA Astrophysics Data System (ADS)

    Michel, Knut; Klingebiel, Sandro; Schultze, Marcel; Tesseit, Catherine Y.; Bessing, Robert; Häfner, Matthias; Prinz, Stefan; Sutter, Dirk; Metzger, Thomas

    2016-03-01

    We report on the latest developments at TRUMPF Scientific Lasers in the field of ultra-short pulse lasers with the highest output energies and powers. All systems are based on the mature and industrialized thin-disk technology of TRUMPF. Thin Yb:YAG disks provide a reliable and efficient solution for power and energy scaling to Joule- and kW-class picosecond laser systems. Due to its efficient one-dimensional heat removal, the thin disk exhibits low distortions and thermal lensing even when pumped at extremely high pump power densities of 10 kW/cm². Currently TRUMPF Scientific Lasers develops regenerative amplifiers with highest average powers, optical parametric amplifiers and synchronization schemes. The first few-ps, kHz, multi-mJ thin-disk regenerative amplifier based on the TRUMPF thin-disk technology was developed at the LMU Munich in 2008. Since then, the average power and energy have continuously been increased, reaching more than 300 W (10 kHz repetition rate) and 200 mJ (1 kHz repetition rate) at pulse durations below 2 ps. First experiments have shown that the current thin-disk technology supports ultra-short pulse laser solutions with >1 kW of average power. Based on few-picosecond thin-disk regenerative amplifiers, few-cycle optical parametric chirped pulse amplifiers (OPCPA) can be realized. These systems have proven to be the only method for scaling few-cycle pulses to the multi-mJ energy level. OPA-based few-cycle systems will allow for many applications such as attosecond spectroscopy, THz spectroscopy and imaging, laser wakefield acceleration, table-top few-fs accelerators and laser-driven coherent X-ray undulator sources. Furthermore, high-energy picosecond sources can directly be used for a variety of applications such as X-ray generation or atmospheric research.

  18. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  19. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Both CARE III and ARIES 82 were not suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (the difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults greater than double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  20. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a model and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.

  1. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  2. A Compressed Sensing Based Ultra-Wideband Communication System

    DTIC Science & Technology

    2009-06-01

    principle, most of the processing at the receiver can be moved to the transmitter, where energy consumption and computation are sufficient for many advanced...extended to continuous-time signals. We use ∗ to denote the convolution process in a linear time-invariant (LTI) system. Assume that there is an analog... [Figure: block diagram of the compressed-sensing UWB receiver: UWB pulse generator, radio channel (5 GHz), filter, low-rate A/D (125 MHz), and sparse recovery θ̂ = arg min ||θ||_1 subject to y = ΦΨθ.]

  3. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Presented are the numerous factors that potentially have a degrading effect on system reliability and the ways in which these factors, which are peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  4. Low background materials and fabrication techniques for cables and connectors in the Majorana Demonstrator

    NASA Astrophysics Data System (ADS)

    Busch, M.; Abgrall, N.; Alvis, S. I.; Arnquist, I. J.; Avignone, F. T.; Barabash, A. S.; Barton, C. J.; Bertrand, F. E.; Bode, T.; Bradley, A. W.; Brudanin, V.; Buuck, M.; Caldwell, T. S.; Chan, Y.-D.; Christofferson, C. D.; Chu, P.-H.; Cuesta, C.; Detwiler, J. A.; Dunagan, C.; Efremenko, Yu.; Ejiri, H.; Elliott, S. R.; Gilliss, T.; Giovanetti, G. K.; Green, M. P.; Gruszko, J.; Guinn, I. S.; Guiseppe, V. E.; Haufe, C. R.; Hehn, L.; Henning, R.; Hoppe, E. W.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Konovalov, S. I.; Kouzes, R. T.; Lopez, A. M.; Martin, R. D.; Massarczyk, R.; Meijer, S. J.; Mertens, S.; Myslik, J.; O'Shaughnessy, C.; Othman, G.; Poon, A. W. P.; Radford, D. C.; Rager, J.; Reine, A. L.; Rielage, K.; Robertson, R. G. H.; Rouf, N. W.; Shanks, B.; Shirchenko, M.; Suriano, A. M.; Tedeschi, D.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.; Zhu, B. X.

    2018-01-01

    The Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a tonne scale 76Ge-based search (the LEGEND collaboration). In the Demonstrator, germanium detectors operate in an ultra-pure vacuum cryostat at 80 K. One special challenge of an ultra-pure environment is to develop reliable cables, connectors, and electronics that do not significantly contribute to the radioactive background of the experiment. This paper highlights the experimental requirements and how these requirements were met for the Majorana Demonstrator, including plans to upgrade the wiring for higher reliability in the summer of 2018. Also described are requirements for LEGEND R&D efforts underway to meet these additional requirements

  5. Time-dependent Reliability of Dynamic Systems using Subset Simulation with Splitting over a Series of Correlated Time Intervals

    DTIC Science & Technology

    2013-08-01

    cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort.

  6. Design Considerations for a Water Treatment System Utilizing Ultra-Violet Light Emitting Diodes

    DTIC Science & Technology

    2014-03-27

    Report AFIT-ENV-14-M-58: Design Considerations for a Water Treatment System Utilizing Ultra-Violet Light Emitting Diodes. Approved for public release; distribution unlimited.

  7. Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.

    PubMed

    Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste

    2009-12-21

    The application of periodic density functional theory-based methods to the calculation of (95)Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for (95)Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.

  8. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    NASA Astrophysics Data System (ADS)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.
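
    A sketch of the elementary series-availability building block that most such evaluation procedures start from, with placeholder MTTF/MTTR figures rather than the surveyed reliability data:

```python
# Hedged sketch of the elementary building block behind most of the reviewed
# evaluation procedures: each VSC-HVDC subsystem is modelled as a two-state
# (up/down) component with availability A = MTTF / (MTTF + MTTR), and the
# link availability is the product over subsystems in series. MTTF/MTTR
# figures below are placeholders, not the reliability data surveyed.
subsystems = {                      # (MTTF hours, MTTR hours), assumed
    "converter valves": (25000.0, 48.0),
    "converter transformer": (50000.0, 720.0),
    "control & protection": (17500.0, 12.0),
    "DC cable": (100000.0, 1440.0),
}

link_availability = 1.0
for name, (mttf, mttr) in subsystems.items():
    a = mttf / (mttf + mttr)
    link_availability *= a
    print(f"{name:24s} A = {a:.5f}")
print(f"series link availability   = {link_availability:.5f}")
```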

  9. Computer models for economic and silvicultural decisions

    Treesearch

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  10. 300 GPM Solids Removal System A True Replacement for Back Flushable Powdered Filter Systems - 13607

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ping, Mark R.; Lewis, Mark

    2013-07-01

    The EnergySolutions Solids Removal System (SRS) utilizes stainless steel cross-flow ultra-filtration (XUF) technology which allows it to reliably remove suspended solids greater than one (1) micron from liquid radwaste streams. The SRS is designed as a pre-treatment step for solids separation prior to processing through other technologies such as Ion Exchange Resin (IER) and/or Reverse Osmosis (RO), etc. Utilizing this pre-treatment approach ensures successful production of reactor grade water while 1) decreasing the amount of radioactive water being discharged to the environment; and 2) decreasing the amount of radioactive waste that must ultimately be disposed of due to the elimination of spent powdered filter media. (authors)

  11. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate the identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46 % females / 54 % males). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used the SURF feature to find the corresponding points between two PRs (unknown person and database entry) out of the whole database. From 40 randomly selected persons, 34 persons (85 %) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for other non-identical persons in the database. Hence 12 matching points are the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past. The system seems to be robust for large amounts of data. · Computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification. · The present method is able to find identical matching partners among huge datasets (big data) in a short computing time. · The identification method is suitable even if dental characteristics were removed or added. · Heinrich A, Güttler F, Wendt S et al. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
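
    A hedged sketch of the matching step described above, written with OpenCV; the study used SURF (available only in opencv-contrib), so ORB is used here as a freely available stand-in, the distance cut is illustrative, and the file names are placeholders:

```python
# Hedged sketch of the feature-matching step described in the abstract:
# detect keypoints in two panoramic radiographs, match descriptors, and count
# matches against the paper's threshold of 12. The study used SURF (shipped
# with opencv-contrib); ORB is used here as a freely available stand-in, and
# the image file names are placeholders.
import cv2

THRESHOLD = 12                                    # matching points for a reliable assignment

img_unknown = cv2.imread("postmortem_pr.png", cv2.IMREAD_GRAYSCALE)
img_database = cv2.imread("antemortem_pr.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img_unknown, None)
kp2, des2 = orb.detectAndCompute(img_database, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
good = [m for m in matches if m.distance < 40]    # illustrative distance cut

print(f"{len(good)} corresponding points")
print("identification" if len(good) >= THRESHOLD else "no reliable assignment")
```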

  12. Computer-assisted audiovisual health history self-interviewing. Results of the pilot study of the Hoxworth Quality Donor System.

    PubMed

    Zuck, T F; Cumming, P D; Wallace, E L

    2001-12-01

    The safety of blood for transfusion depends, in part, on the reliability of the health history given by volunteer blood donors. To improve reliability, a pilot study evaluated the use of an interactive computer-based audiovisual donor interviewing system at a typical midwestern blood center in the United States. An interactive video screening system was tested in a community donor center environment on 395 volunteer blood donors. Of the donors using the system, 277 completed surveys regarding their acceptance of and opinions about the system. The study showed that an interactive computer-based audiovisual donor screening system was an effective means of conducting the donor health history. The majority of donors found the system understandable and favored the system over a face-to-face interview. Further, most donors indicated that they would be more likely to return if they were to be screened by such a system. Interactive computer-based audiovisual blood donor screening is useful and well accepted by donors; it may prevent a majority of errors and accidents that are reportable to the FDA; and it may contribute to increased safety and availability of the blood supply.

  13. Provable Transient Recovery for Frame-Based, Fault-Tolerant Computing Systems

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Butler, Ricky W.

    1992-01-01

    We present a formal verification of the transient fault recovery aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system architecture for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization accommodates a wide variety of voting schemes for purging the effects of transients.
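
    The core mechanism named above, NMR-style redundancy with internal majority voting that purges transient upsets, can be pictured with a short sketch. The frame structure and names below are illustrative assumptions for exposition, not the formally verified RCP design.

      from collections import Counter

      def majority_vote(outputs):
          """Return the value agreed on by a strict majority of channels, else None."""
          value, count = Counter(outputs).most_common(1)[0]
          return value if count > len(outputs) // 2 else None

      def frame_step(channel_states, compute, sensor_input):
          """One frame of an NMR computation: every channel computes, the results
          are voted, and each channel then adopts the voted value so the effect
          of a transient upset is purged rather than allowed to propagate."""
          outputs = [compute(state, sensor_input) for state in channel_states]
          voted = majority_vote(outputs)
          if voted is None:
              raise RuntimeError("no majority -- treat as an unrecoverable fault")
          return [voted] * len(channel_states)

      # e.g. a quadruplex system in which one channel suffered a transient bit flip
      print(frame_step([5, 5, 5, 9], lambda s, u: s + u, sensor_input=1))  # [6, 6, 6, 6]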

  14. Soldier-worn augmented reality system for tactical icon visualization

    NASA Astrophysics Data System (ADS)

    Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared

    2012-06-01

    This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.

  15. High-brightness displays in integrated weapon sight systems

    NASA Astrophysics Data System (ADS)

    Edwards, Tim; Hogan, Tim

    2014-06-01

    In the past several years Kopin has demonstrated the ability to provide ultra-high brightness, low power display solutions in VGA, SVGA, SXGA and 2k x 2k display formats. This paper will review various approaches for integrating high brightness overlay displays with existing direct view rifle sights and augmenting their precision aiming and targeting capability. Examples of overlay display systems solutions will be presented and discussed. This paper will review significant capability enhancements that are possible when augmenting the real-world as seen through a rifle sight with other soldier system equipment including laser range finders, ballistic computers and sensor systems.

  16. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) Ultra-high-temperature ceramics for hypersonic aircraft: we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft; (b) Planetary entry heat shields for space vehicles: we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations; (c) Advanced batteries for electric aircraft: we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy capacity batteries to enable long-distance electric aircraft service; and (d) Shape-memory alloys for high-efficiency aircraft: we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  17. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    NASA Technical Reports Server (NTRS)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  18. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970's using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  19. Ultra wide field fluorescein angiography can detect macular pathology in central retinal vein occlusion.

    PubMed

    Tsui, Irena; Franco-Cardenas, Valentina; Hubschman, Jean-Pierre; Yu, Fei; Schwartz, Steven D

    2012-01-01

    The purpose of this study was to evaluate whether ultra wide field fluorescein angiography (UWFFA), a tool established for the detection of peripheral non-perfusion, can also detect macular pathology. A retrospective imaging review was performed on patients with central retinal vein occlusion. UWFFA was graded for angiographic leakage (petalloid and/or diffuse leakage) and presence of abnormalities in the foveal avascular zone and was then correlated to spectral-domain optical coherence tomography (SD-OCT). Sixty-six eyes met inclusion criteria. Intergrader agreement was highly reliable for grading macular leakage on UWFFA (kappa = 0.75) and moderately reliable for the evaluation of an abnormal foveal avascular zone (kappa = 0.43). Angiographic leakage on UWFFA correlated to macular edema on SD-OCT (P < .0001), and abnormalities in the foveal avascular zone on UWFFA correlated to ganglion cell layer atrophy on SD-OCT (P = .0002). Intergrader reliability in grading UWFFA was better when assessing macular leakage than when assessing macular ischemia. UWFFA findings correlated to macular edema and signs of macular ischemia on SD-OCT. Copyright 2012, SLACK Incorporated.
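
    The intergrader agreement figures above are Cohen's kappa values. A minimal sketch of how such a statistic is computed is shown below; the gradings are invented for illustration and are not the study's data.

      from sklearn.metrics import cohen_kappa_score

      # hypothetical per-eye gradings from two graders (1 = macular leakage present)
      grader_1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
      grader_2 = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]

      print(f"kappa = {cohen_kappa_score(grader_1, grader_2):.2f}")

      # the same statistic by hand: kappa = (p_observed - p_chance) / (1 - p_chance)
      n = len(grader_1)
      p_o = sum(a == b for a, b in zip(grader_1, grader_2)) / n
      p1, p2 = sum(grader_1) / n, sum(grader_2) / n
      p_e = p1 * p2 + (1 - p1) * (1 - p2)
      print(f"by hand: {(p_o - p_e) / (1 - p_e):.2f}")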

  20. The New Xpert MTB/RIF Ultra: Improving Detection of Mycobacterium tuberculosis and Resistance to Rifampin in an Assay Suitable for Point-of-Care Testing.

    PubMed

    Chakravorty, Soumitesh; Simmons, Ann Marie; Rowneki, Mazhgan; Parmar, Heta; Cao, Yuan; Ryan, Jamie; Banada, Padmapriya P; Deshpande, Srinidhi; Shenai, Shubhada; Gall, Alexander; Glass, Jennifer; Krieswirth, Barry; Schumacher, Samuel G; Nabeta, Pamela; Tukvadze, Nestani; Rodrigues, Camilla; Skrahina, Alena; Tagliani, Elisa; Cirillo, Daniela M; Davidow, Amy; Denkinger, Claudia M; Persing, David; Kwiatkowski, Robert; Jones, Martin; Alland, David

    2017-08-29

    The Xpert MTB/RIF assay (Xpert) is a rapid test for tuberculosis (TB) and rifampin resistance (RIF-R) suitable for point-of-care testing. However, it has decreased sensitivity in smear-negative sputum, and false identification of RIF-R occasionally occurs. We developed the Xpert MTB/RIF Ultra assay (Ultra) to improve performance. Ultra and Xpert limits of detection (LOD), dynamic ranges, and RIF-R rpoB mutation detection were tested on Mycobacterium tuberculosis DNA or sputum samples spiked with known numbers of M. tuberculosis H37Rv or Mycobacterium bovis BCG CFU. Frozen and prospectively collected clinical samples from patients suspected of having TB, with and without culture-confirmed TB, were also tested. For M. tuberculosis H37Rv, the LOD was 15.6 CFU/ml of sputum for Ultra versus 112.6 CFU/ml of sputum for Xpert, and for M. bovis BCG, it was 143.4 CFU/ml of sputum for Ultra versus 344 CFU/ml of sputum for Xpert. Ultra resulted in no false-positive RIF-R specimens, while Xpert resulted in two false-positive RIF-R specimens. All RIF-R-associated M. tuberculosis rpoB mutations tested were identified by Ultra. Testing on clinical sputum samples, Ultra versus Xpert, resulted in an overall sensitivity of 87.5% (95% confidence interval [CI], 82.1, 91.7) versus 81.0% (95% CI, 74.9, 86.2) and a sensitivity on sputum smear-negative samples of 78.9% (95% CI, 70.0, 86.1) versus 66.1% (95% CI, 56.4, 74.9). Both tests had a specificity of 98.7% (95% CI, 93.0, 100), and both had comparable accuracies for detection of RIF-R in these samples. Ultra should significantly improve TB detection, especially in patients with paucibacillary disease, and may provide more-reliable RIF-R detection. IMPORTANCE The Xpert MTB/RIF assay (Xpert), the first point-of-care assay for tuberculosis (TB), was endorsed by the World Health Organization in December 2010. Since then, 23 million Xpert tests have been procured in 130 countries. Although Xpert showed high overall sensitivity and specificity with pulmonary samples, its sensitivity has been lower with smear-negative pulmonary samples and extrapulmonary samples. In addition, the prediction of rifampin resistance (RIF-R) in paucibacillary samples and for a few rpoB mutations has resulted in both false-positive and false-negative results. The present study is the first demonstration of the design features and operational characteristics of an improved Xpert Ultra assay. This study also shows that the Ultra format overcomes many of the known shortcomings of Xpert. The new assay should significantly improve TB detection, especially in patients with paucibacillary disease, and provide more-reliable detection of RIF-R. Copyright © 2017 Chakravorty et al.
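
    The sensitivities above are quoted with 95% confidence intervals. The sketch below shows one standard way (the Wilson score interval) of producing such an interval; the counts are hypothetical, chosen only to reproduce the 87.5% point estimate, and are not the trial's actual data.

      import math

      def wilson_ci(successes, n, z=1.96):
          """Wilson score 95% confidence interval for a binomial proportion."""
          p = successes / n
          denom = 1 + z * z / n
          centre = (p + z * z / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return centre - half, centre + half

      tp, total = 175, 200   # assumed: 175 of 200 culture-confirmed cases detected
      lo, hi = wilson_ci(tp, total)
      print(f"sensitivity = {tp / total:.1%} (95% CI {lo:.1%} - {hi:.1%})")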

  1. Probabilistic resource allocation system with self-adaptive capability

    NASA Technical Reports Server (NTRS)

    Yufik, Yan M. (Inventor)

    1998-01-01

    A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and weighted links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Weights are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.
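
    The abstract describes link weights that grow with the frequency and relative success of past allocation episodes, with strongly associated resources then offered as candidates for fast allocation. The sketch below is only a loose illustration of that idea; the update rule, class name and threshold are assumptions, not the patented method.

      class AssociativeLTM:
          """Toy long-term memory of resource associations."""

          def __init__(self):
              self.weights = {}                    # (resource_a, resource_b) -> weight

          def record_episode(self, pair, success):
              """Strengthen a link after a successful allocation, weaken it otherwise."""
              w = self.weights.get(pair, 0.0)
              delta = 0.1 if success else -0.05
              self.weights[pair] = max(0.0, w + delta)

          def candidates(self, resource, min_weight=0.5):
              """Strongly associated partners offered to the STM for fast allocation."""
              pairs = [(b, w) for (a, b), w in self.weights.items()
                       if a == resource and w >= min_weight]
              return sorted(pairs, key=lambda t: t[1], reverse=True)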

  2. Ultra-stretchable and skin-mountable strain sensors using carbon nanotubes-Ecoflex nanocomposites.

    PubMed

    Amjadi, Morteza; Yoon, Yong Jin; Park, Inkyu

    2015-09-18

    Super-stretchable, skin-mountable, and ultra-soft strain sensors are presented by using carbon nanotube percolation network-silicone rubber nanocomposite thin films. The applicability of the strain sensors as epidermal electronic systems, in which mechanical compliance like human skin and high stretchability (ϵ > 100%) are required, has been explored. The sensitivity of the strain sensors can be tuned by the number density of the carbon nanotube percolation network. The strain sensors show excellent hysteresis performance at different strain levels and rates with high linearity and small drift. We found that the carbon nanotube-silicone rubber based strain sensors possess super-stretchability and high reliability for strains as large as 500%. The nanocomposite thin films exhibit high robustness and excellent resistance-strain dependency for over ~1380% mechanical strain. Finally, we performed skin motion detection by mounting the strain sensors on different parts of the body. The maximum induced strain by the bending of the finger, wrist, and elbow was measured to be ~ 42%, 45% and 63%, respectively.

  3. Ultra-fast Object Recognition from Few Spikes

    DTIC Science & Technology

    2005-07-06

    Computer Science and Artificial Intelligence Laboratory. Ultra-fast Object Recognition from Few Spikes. Chou Hung, Gabriel Kreiman, Tomaso Poggio ... neural code for different kinds of object-related information. *The authors, Chou Hung and Gabriel Kreiman, contributed equally to this work ... Supplementary Material is available at http://ramonycajal.mit.edu/kreiman/resources/ultrafast

  4. The probability estimation of the electronic lesson implementation taking into account software reliability

    NASA Astrophysics Data System (ADS)

    Gurov, V. V.

    2017-01-01

    Software tools for educational purposes, such as e-lessons and computer-based testing systems, have several distinctive reliability features. The main ones are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery during a class by replacing the tool with a similar running program. The article considers the peculiarities of evaluating program reliability in contrast to assessments of hardware reliability. The basic requirements for the reliability of software used to conduct practical and laboratory classes in the form of computer-based training programs are given. A mathematical tool based on Markov chains is presented that allows one to determine the degree of debugging of a training program for use in the educational process by applying a graph of the interactions between software modules.
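
    The article's tool is based on Markov chains over a graph of interacting software modules. As a loose illustration (not the article's exact formulation), the sketch below treats an e-lesson as an absorbing Markov chain over hypothetical modules, each with a small residual-fault probability, and estimates the chance that the lesson completes without a failure.

      import numpy as np

      # assumed module-interaction graph and per-module failure probabilities
      P = np.array([[0.0, 0.7, 0.3],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, 0.0]])        # the last module ends the lesson
      fail = np.array([0.01, 0.02, 0.005])   # chance a residual fault triggers

      n = len(fail)
      T = np.zeros((n + 2, n + 2))           # states: modules, then "done", "failed"
      for i in range(n):
          T[i, n + 1] = fail[i]
          survive = 1.0 - fail[i]
          if P[i].sum() == 0:                # final module: finish the lesson
              T[i, n] = survive
          else:
              T[i, :n] = survive * P[i]
      T[n, n] = T[n + 1, n + 1] = 1.0        # absorbing states

      dist = np.zeros(n + 2)
      dist[0] = 1.0                          # the session starts in module 0
      for _ in range(1000):                  # iterate until the mass is absorbed
          dist = dist @ T
      print(f"probability the e-lesson runs faultlessly: {dist[n]:.4f}")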

  5. Development of Passive Fuel Cell Thermal Management Technology

    NASA Technical Reports Server (NTRS)

    Burke, Kenneth A.; Jakupca, Ian; Colozza, Anthony

    2011-01-01

    The NASA Glenn Research Center is developing advanced passive thermal management technology to reduce the mass and improve the reliability of space fuel cell systems for the NASA exploration program. The passive thermal management system relies on heat conduction within the cooling plate to move the heat from the central portion of the cell stack out to the edges of the fuel cell stack rather than using a pumped loop cooling system to convectively remove the heat. Using the passive approach eliminates the need for a coolant pump and other cooling loop components which reduces fuel cell system mass and improves overall system reliability. Previous analysis had identified that low density, ultra-high thermal conductivity materials would be needed for the cooling plates in order to achieve the desired reductions in mass and the highly uniform thermal heat sink for each cell within a fuel cell stack. A pyrolytic graphite material was identified and fabricated into a thin plate using different methods. Also a development project with Thermacore, Inc. resulted in a planar heat pipe. Thermal conductivity tests were done using these materials. The results indicated that lightweight passive fuel cell cooling is feasible.
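
    The feasibility claim rests on conduction within a high-conductivity plate rather than on a pumped loop. A back-of-the-envelope one-dimensional conduction estimate is sketched below; every number (conductivity, heat load, geometry) is an assumption for illustration, not the Glenn design.

      # temperature drop from the centre of a cooling plate to its cooled edge,
      # estimated with simple 1-D conduction: delta_T = Q * L / (k * A)
      k = 1500.0        # W/(m*K), in-plane conductivity typical of pyrolytic graphite
      Q = 20.0          # W of waste heat carried by one plate (assumed)
      L = 0.10          # m, conduction path from cell centre to plate edge (assumed)
      A = 0.15 * 0.001  # m^2, plate cross-section: 150 mm wide, 1 mm thick (assumed)

      delta_T = Q * L / (k * A)
      print(f"centre-to-edge temperature drop: {delta_T:.1f} K")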

  6. NASA Tech Briefs, August 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics covered include: Mars Science Laboratory Drill; Ultra-Compact Motor Controller; A Reversible Thermally Driven Pump for Use in a Sub-Kelvin Magnetic Refrigerator; Shape Memory Composite Hybrid Hinge; Binding Causes of Printed Wiring Assemblies with Card-Loks; Coring Sample Acquisition Tool; Joining and Assembly of Bulk Metallic Glass Composites Through Capacitive Discharge; 670-GHz Schottky Diode-Based Subharmonic Mixer with CPW Circuits and 70-GHz IF; Self-Nulling Lock-in Detection Electronics for Capacitance Probe Electrometer; Discontinuous Mode Power Supply; Optimal Dynamic Sub-Threshold Technique for Extreme Low Power Consumption for VLSI; Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing; Blocking Filters with Enhanced Throughput for X-Ray Microcalorimetry; High-Thermal-Conductivity Fabrics; Imidazolium-Based Polymeric Materials as Alkaline Anion-Exchange Fuel Cell Membranes; Electrospun Nanofiber Coating of Fiber Materials: A Composite Toughening Approach; Experimental Modeling of Sterilization Effects for Atmospheric Entry Heating on Microorganisms; Saliva Preservative for Diagnostic Purposes; Hands-Free Transcranial Color Doppler Probe; Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer LogScope; TraceContract; AIRS Maps from Space Processing Software; POSTMAN: Point of Sail Tacking for Maritime Autonomous Navigation; Space Operations Learning Center; OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems; Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers; Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing; Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery; 3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer; Social Networking Adapted for Distributed Scientific Collaboration; General Methodology for Designing Spacecraft Trajectories; Hemispherical Field-of-View Above-Water Surface Imager for Submarines; and Quantum-Well Infrared Photodetector (QWIP) Focal Plane Assembly.

  7. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  8. Investigation of ultra low-dose scans in the context of quantum-counting clinical CT

    NASA Astrophysics Data System (ADS)

    Weidinger, T.; Buzug, T. M.; Flohr, T.; Fung, G. S. K.; Kappler, S.; Stierstorfer, K.; Tsui, B. M. W.

    2012-03-01

    In clinical computed tomography (CT), images from patient examinations taken with conventional scanners exhibit noise characteristics governed by electronics noise when scanning strongly attenuating obese patients or when using an ultra-low X-ray dose. Unlike CT systems based on energy integrating detectors, a system with a quantum counting detector does not suffer from this drawback. Instead, the noise from the electronics mainly affects the spectral resolution of these detectors. Therefore, it does not contribute to the image noise in spectrally non-resolved CT images. This promises improved image quality due to image noise reduction in scans obtained from clinical CT examinations with the lowest X-ray tube currents or with obese patients. To quantify the benefits of quantum counting detectors in clinical CT, we have carried out an extensive simulation study of the complete scanning and reconstruction process for both kinds of detectors. The simulation chain encompasses modeling of the X-ray source, beam attenuation in the patient, and calculation of the detector response. Moreover, in each case the subsequent image preprocessing and reconstruction is modeled as well. The simulation-based, theoretical evaluation is validated by experiments with a novel prototype quantum counting system and a Siemens Definition Flash scanner with a conventional energy integrating CT detector. We demonstrate and quantify the improvement from image noise reduction achievable with quantum counting techniques in CT examinations with ultra-low X-ray dose and strong attenuation.

  9. Serial Back-Plane Technologies in Advanced Avionics Architectures

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta

    2005-01-01

    Current back plane technologies such as VME, and current personal computer back planes such as PCI, are shared bus systems that can exhibit nondeterministic latencies. This means a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the back plane to acquire the bus, which significantly degrades the reliability of the system. Additionally, these parallel busses only have bandwidths in the hundreds of megahertz range, and EMI and noise effects get worse as the bandwidth increases. To provide scalable, fault-tolerant, advanced computing systems, more applicable to today's connected computing environment and better suited to future requirements for advanced space instruments and vehicles, serial back-plane technologies should be implemented in advanced avionics architectures. Serial backplane technologies eliminate the problem of one card acquiring the bus and never relinquishing it, or of one minor problem on the backplane bringing the whole system down. Being serial instead of parallel also reduces many of the signal integrity issues associated with parallel back planes and thus significantly improves reliability. The increased speeds associated with a serial backplane are an added bonus.

  10. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1977-01-01

    Models, measures and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability and worth. Specifically, a detailed model hierarchy was developed at the mission, functional task, and computational task levels. An appropriate class of stochastic models was investigated to serve as bottom-level models in the hierarchical scheme. A unified measure of effectiveness called 'performability' was defined and formulated.

  11. Three real-time architectures - A study using reward models

    NASA Technical Reports Server (NTRS)

    Sjogren, J. A.; Smith, R. M.

    1990-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, upstates have reward rate 1, and down states have reward rate zero associated with them. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault tolerant computing community.
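
    A minimal numerical sketch of a Markov reward model of the kind described above: a small continuous-time Markov chain availability model whose states carry a computational-capacity reward, from which the steady-state expected reward rate is obtained. The rates and rewards are assumptions for illustration.

      import numpy as np

      lam, mu = 1e-4, 1e-2                  # assumed failure and repair rates per hour
      Q = np.array([[-2 * lam, 2 * lam, 0.0],
                    [mu, -(mu + lam), lam],
                    [0.0, mu, -mu]])        # CTMC generator: 2, 1, 0 processors up
      reward = np.array([1.0, 0.5, 0.0])    # capacity delivered in each state

      # steady-state distribution: pi @ Q = 0 with sum(pi) = 1
      A = np.vstack([Q.T, np.ones(len(reward))])
      b = np.zeros(len(reward) + 1)
      b[-1] = 1.0
      pi, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("steady-state expected reward rate:", float(pi @ reward))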

  12. Reliable Acquisition of RAM Dumps from Intel-Based Apple Mac Computers over FireWire

    NASA Astrophysics Data System (ADS)

    Gladyshev, Pavel; Almansoori, Afrah

    RAM content acquisition is an important step in live forensic analysis of computer systems. FireWire offers an attractive way to acquire RAM content of Apple Mac computers equipped with a FireWire connection. However, the existing techniques for doing so require substantial knowledge of the target computer configuration and cannot be used reliably on a previously unknown computer in a crime scene. This paper proposes a novel method for acquiring RAM content of Apple Mac computers over FireWire, which automatically discovers necessary information about the target computer and can be used in the crime scene setting. As an application of the developed method, the techniques for recovery of AOL Instant Messenger (AIM) conversation fragments from RAM dumps are also discussed in this paper.

  13. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  14. Formal design and verification of a reliable computing platform for real-time control (phase 3 results)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael

    1994-01-01

    In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.

  15. Identification of Target Complaints by Computer Interview: Evaluation of the Computerized Assessment System for Psychotherapy Evaluation and Research.

    ERIC Educational Resources Information Center

    Farrell, Albert D.; And Others

    1987-01-01

    Evaluated computer interview to standardize collection of target complaints. Adult outpatients (N=103) completed computer interview, unstructured intake interview, Symptoms Checklist-90, and Minnesota Multiphasic Personality Inventory. Results provided support for the computer interview in regard to reliability and validity though there was low…

  16. The Author’s Guide To Writing 412th Test Wing Technical Reports

    DTIC Science & Technology

    2014-12-01

    ... control; CAD: computer aided design; cc: cubic centimeters; C.O.: carry-over; c/o: checkout; USAF: United States Air Force; Cl: rolling moment coefficient ... cooling air. Mission Impact: Results in maintenance inability to reliably duplicate and isolate valid aircraft failures, and degrades reliability of system.

  17. Thickness effect of ultra-thin Ta2O5 resistance switching layer in 28 nm-diameter memory cell

    NASA Astrophysics Data System (ADS)

    Park, Tae Hyung; Song, Seul Ji; Kim, Hae Jin; Kim, Soo Gil; Chung, Suock; Kim, Beom Yong; Lee, Kee Jeung; Kim, Kyung Min; Choi, Byung Joon; Hwang, Cheol Seong

    2015-11-01

    Resistance switching (RS) devices with an ultra-thin Ta2O5 switching layer (0.5-2.0 nm) and a cell diameter of 28 nm were fabricated. The performance of the devices was tested by voltage-driven current-voltage (I-V) sweeps and closed-loop pulse switching (CLPS) tests. A Ta layer was placed beneath the Ta2O5 switching layer to act as an oxygen vacancy reservoir. The device with the smallest Ta2O5 thickness (0.5 nm) showed normal switching properties, with gradual change in resistance in I-V sweep or CLPS, and high reliability. By contrast, the devices with higher Ta2O5 thickness (1.0-2.0 nm) showed abrupt switching with several abnormal behaviours, degraded resistance distribution (especially in the high resistance state), and much lower reliability. A single conical or hour-glass-shaped double-conical conducting filament geometry was conceived to explain these behavioural differences, which depended on the Ta2O5 switching layer thickness. Loss of oxygen via lateral diffusion to the encapsulating Si3N4/SiO2 layer was suggested as the main degradation mechanism for reliability, and a method to improve reliability was also proposed.

  18. All-fiber optical parametric oscillator for bio-medical imaging applications

    NASA Astrophysics Data System (ADS)

    Gottschall, Thomas; Meyer, Tobias; Jauregui, Cesar; Just, Florian; Eidam, Tino; Schmitt, Michael; Popp, Jürgen; Limpert, Jens; Tünnermann, Andreas

    2017-02-01

    Among other modern imaging techniques, stimulated Raman scattering (SRS) requires an extremely quiet, widely wavelength-tunable laser, which, up to now, has been unheard of in fiber laser systems. We present a compact all-fiber laser system featuring an optical parametric oscillator (OPO) based on degenerate four-wave mixing (FWM) in an endlessly single-mode photonic-crystal fiber. We employ an all-fiber frequency and repetition rate tunable laser to enable wideband conversion in the linear OPO cavity arrangement; the signal and idler radiation can be tuned between 764 and 960 nm and between 1164 and 1552 nm at a repetition rate of 9.5 MHz. Thus, all biochemically relevant Raman shifts between 922 and 3322 cm-1 may be addressed in combination with a secondary output, which is tunable between 1024 and 1052 nm. This ultra-low noise output emits synchronized pulses at twice the repetition rate to enable SRS imaging. We measured the relative intensity noise of this output beam at 9.5 MHz to be between -145 and -148 dBc, which is low enough to enable high-speed SRS imaging with a good signal-to-noise ratio. The laser system is computer controlled and can access a given energy difference within one second. Combining FWM-based conversion with all-fiber Yb-based fiber lasers enables the construction of the first automated, turn-key and widely tunable fiber laser. This laser concept could be the missing piece to establish CRS imaging as a reliable guiding tool for clinical diagnostics and surgical guidance.
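
    The quoted Raman-shift coverage follows from simple wavenumber arithmetic on pump/Stokes wavelength pairs. The sketch below shows that arithmetic for one illustrative pair; the exact pairing used by the instrument is set by its tuning and is not asserted here.

      # Raman shift (in wavenumbers) addressed by a pump/Stokes wavelength pair in SRS
      def raman_shift_cm1(pump_nm, stokes_nm):
          return 1e7 / pump_nm - 1e7 / stokes_nm

      # e.g. FWM signal output at 800 nm as pump, the ~1040 nm output as Stokes
      print(f"{raman_shift_cm1(800.0, 1040.0):.0f} cm^-1")   # ~2885, CH-stretch region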

  19. Development of a J-T Micro Compressor

    NASA Astrophysics Data System (ADS)

    Champagne, P.; Olson, J. R.; Nast, T.; Roth, E.; Collaco, A.; Kaldas, G.; Saito, E.; Loung, V.

    2015-12-01

    Lockheed Martin has developed and tested a space-quality compressor capable of delivering closed-loop gas flow with a high pressure ratio, suitable for driving a Joule-Thomson cold head. The compressor is based on a traditional “Oxford style” dual-opposed piston compressor with linear drive motors and flexure-bearing clearance-seal technology for high reliability and long life. This J-T compressor retains the approximate size, weight, and cost of the ultra-compact, 200 gram Lockheed Martin Pulse Tube Micro Compressor, despite the addition of a flow-rectifying system to convert the AC pressure wave into a steady flow.

  20. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  1. MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems

    PubMed Central

    Pannuto, Pat; Lee, Yoonmyung; Kuo, Ye-Sheng; Foo, ZhiYoong; Kempke, Benjamin; Kim, Gyouho; Dreslinski, Ronald G.; Blaauw, David; Dutta, Prabal

    2015-01-01

    As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized—yet reusable—components with an interconnect that permits tiny, ultra-low power systems. In contrast to today’s interconnects which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead. We present MBus, a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two “shoot-through” rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient’s power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts. To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm3 MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus’s feature set. PMID:26855555
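
    The headline figures (22.6 pJ/bit/chip and 8 nW standby for a three-chip system) allow a quick energy-budget estimate for a duty-cycled sensor node. The sketch below does that arithmetic; the message length and message rate are assumptions, not values from the paper.

      ENERGY_PER_BIT_PER_CHIP = 22.6e-12   # J (22.6 pJ), from the abstract
      STANDBY_POWER = 8e-9                 # W, three-chip system, from the abstract

      bits_per_message = 32 + 8            # assumed payload plus addressing overhead
      chips = 3
      messages_per_day = 24 * 60           # assumed: one message per minute

      energy_per_message = bits_per_message * ENERGY_PER_BIT_PER_CHIP * chips
      daily_energy = energy_per_message * messages_per_day + STANDBY_POWER * 86400
      print(f"energy per message: {energy_per_message * 1e9:.2f} nJ")
      print(f"energy per day:     {daily_energy * 1e6:.1f} uJ")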

  2. MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems.

    PubMed

    Pannuto, Pat; Lee, Yoonmyung; Kuo, Ye-Sheng; Foo, ZhiYoong; Kempke, Benjamin; Kim, Gyouho; Dreslinski, Ronald G; Blaauw, David; Dutta, Prabal

    2015-06-01

    As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized, yet reusable, components with an interconnect that permits tiny, ultra-low power systems. In contrast to today's interconnects, which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead. We present MBus, a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two "shoot-through" rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient's power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts. To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm3 MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus's feature set.

  3. Enhanced ultrasonic inspection of steel bridge pin components.

    DOT National Transportation Integrated Search

    1998-01-01

    This report describes the development of a technique for obtaining a reliable assessment of the condition of steel bridge pins already determined by ultrasound to contain imperfections. The details of a technique for performing high-definition ultras...

  4. Programmable Ultra Lightweight System Adaptable Radio (PULSAR) Low Cost Telemetry - Access from Space Advanced Technologies or Down the Middle

    NASA Technical Reports Server (NTRS)

    Sims, Herb; Varnavas, Kosta; Eberly, Eric

    2013-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, presently qualified satellite transponder applications were developed during the early 1960's space program. Programmable Ultra Lightweight System Adaptable Radio (PULSAR, the NASA-MSFC SDR) technology revolutionizes satellite transponder technology by increasing data through-put capability by at least an order of magnitude. PULSAR leverages existing Marshall Space Flight Center SDR designs and commercially enhanced capabilities to provide a path to a radiation tolerant SDR transponder. These innovations will (1) reduce the cost of NASA Low Earth Orbit (LEO) and Deep Space transponders, (2) decrease power requirements, and (3) provide a commensurate volume reduction. PULSAR also increases flexibility to implement multiple transponder types by utilizing the same hardware with altered logic - no analog hardware change is required - all of which can be accomplished in orbit. This provides high capability, low cost transponders to programs of all sizes. The final project outcome would be the introduction of a Technology Readiness Level (TRL) 7 low-cost CubeSat to SmallSat telemetry system into the NASA portfolio.

  5. Low background materials and fabrication techniques for cables and connectors in the Majorana Demonstrator

    DOE PAGES

    Busch, M.; Abgrall, N.; Alvis, S. I.; ...

    2018-01-03

    Here, the Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a tonne-scale 76Ge-based search (the LEGEND collaboration). In the Demonstrator, germanium detectors operate in an ultra-pure vacuum cryostat at 80 K. One special challenge of an ultra-pure environment is to develop reliable cables, connectors, and electronics that do not significantly contribute to the radioactive background of the experiment. This paper highlights the experimental requirements and how these requirements were met for the Majorana Demonstrator, including plans to upgrade the wiring for higher reliability in the summer of 2018. Also described are requirements for LEGEND and the R&D efforts underway to meet these additional requirements.

  6. Air Bearings Machined On Ultra Precision, Hydrostatic CNC-Lathe

    NASA Astrophysics Data System (ADS)

    Knol, Pierre H.; Szepesi, Denis; Deurwaarder, Jan M.

    1987-01-01

    Micromachining of precision elements requires an adequate machine concept to meet the high demands of surface finish and dimensional and shape accuracy. The Hembrug ultra precision lathes have been exclusively designed with hydrostatic principles for the main spindle and guideways. This concept is explained, along with some major advantages of hydrostatics compared with aerostatics in universal micromachining applications. Hembrug originally developed the conventional Mikroturn ultra precision facing lathes for diamond turning of computer memory discs. This first generation of machines was followed by advanced computer numerically controlled types for machining of complex precision workpieces. One of these parts, an aerostatic bearing component, has been successfully machined on the Super-Mikroturn CNC. A case study of air bearing machining confirms that a good micromachining result does not depend on machine performance alone, but also on the technology applied.

  7. Validation and Application of an Ultra High-Performance Liquid Chromatography Tandem Mass Spectrometry Method for Yuanhuacine Determination in Rat Plasma after Pulmonary Administration: Pharmacokinetic Evaluation of a New Drug Delivery System.

    PubMed

    Li, Man; Liu, Xiao; Cai, Hao; Shen, Zhichun; Xu, Liu; Li, Weidong; Wu, Li; Duan, Jinao; Chen, Zhipeng

    2016-12-16

    Yuanhuacine was found to have significant inhibitory activity against A-549 human lung cancer cells. However, serious adverse toxicity effects follow systemic administration of yuanhuacine, such as oral or intravenous administration. In order to achieve a better curative effect and to alleviate the adverse toxicity effects, we tried to deliver yuanhuacine directly into the lungs. Ultra high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) was used to detect the analyte and IS. After extraction (ether:dichloromethane = 8:1), the analyte and IS were separated on a Waters BEH-C18 column (100 mm × 2.1 mm, 1.7 μm) under a 5 min gradient elution using a mixture of acetonitrile and 0.1% formic acid aqueous solution as the mobile phase at a flow rate of 0.3 mL/min. ESI positive mode was chosen for detection. The method was fully validated for its selectivity, accuracy, precision, stability, matrix effect, and extraction recovery. This new method for determining yuanhuacine concentration in rat plasma was reliable and could be applied for preclinical and clinical monitoring purposes.

  8. Estimating the Reliability of the CITAR Computer Courseware Evaluation System.

    ERIC Educational Resources Information Center

    Micceri, Theodore

    In today's complex computer-based teaching (CBT)/computer-assisted instruction market, flashy presentations frequently prove the most important purchasing element, while instructional design and content are secondary to form. Courseware purchasers must base decisions upon either a vendor's presentation or some published evaluator rating.…

  9. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  10. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
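
    The reliability computation described above combines a structural response model with a resistance model. A minimal stress-strength Monte Carlo sketch of that idea is shown below; the distributions and their parameters are assumptions for illustration and are unrelated to the NESSUS resistance-model library.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 1_000_000

      # assumed response (load) and resistance (strength) distributions, in MPa
      response = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=N)
      resistance = rng.normal(loc=480.0, scale=40.0, size=N)

      p_fail = np.mean(response > resistance)   # failure when response exceeds resistance
      print(f"estimated probability of failure: {p_fail:.2e}")
      print(f"component reliability:            {1 - p_fail:.6f}")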

  11. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Weatherbee, J. E.; Taylor, D. S.

    1972-01-01

    A deterministic digital simulation model is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Use of the model as a tool in configuring a minimum computer system for a typical mission is demonstrated. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources, i.e., the configuration derived is a minimal one. Other considerations such as increased reliability through the use of standby spares would be taken into account in the definition of a practical system for a given mission.

  12. HiRel - Reliability/availability integrated workstation tool

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Dugan, Joanne B.

    1992-01-01

    The HiRel software tool is described and demonstrated by application to the mission avionics subsystem of the Advanced System Integration Demonstrations (ASID) system that utilizes the PAVE PILLAR approach. HiRel marks another accomplishment toward the goal of producing a totally integrated computer-aided design (CAD) workstation design capability. Since a reliability engineer generally represents a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The graphical postprocessor module HARPO makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes. The addition of several powerful HARP modeling engines provides the user with a reliability/availability modeling capability for a wide range of system applications all integrated under a common interactive graphical input-output capability.

  13. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
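
    A loose sketch of the surveillance idea named above (learn the correlation structure of the monitored parameters during normal operation, then raise statistically reliable alarms on improbable readings) is given below. The threshold, the consecutive-reading rule standing in for the probability ratio test, and all numbers are assumptions, not the patented algorithm.

      import numpy as np

      rng = np.random.default_rng(1)
      # assumed normal-operation training data for three correlated process parameters
      normal_data = rng.multivariate_normal([10.0, 50.0, 1.2],
                                            [[1.0, 0.8, 0.0],
                                             [0.8, 2.0, 0.1],
                                             [0.0, 0.1, 0.05]], size=5000)
      mean = normal_data.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(normal_data, rowvar=False))

      def mahalanobis_sq(x):
          """Squared Mahalanobis distance of a reading from normal operation."""
          d = x - mean
          return float(d @ cov_inv @ d)

      def alarm(readings, threshold=16.3, run_length=3):
          """Alarm only after several consecutive improbable readings, a crude
          stand-in for a sequential probability ratio test; the default threshold
          is roughly the 99.9th percentile of chi-square with 3 degrees of freedom."""
          run = 0
          for x in readings:
              run = run + 1 if mahalanobis_sq(x) > threshold else 0
              if run >= run_length:
                  return True
          return False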

  14. A data management system to enable urgent natural disaster computing

    NASA Astrophysics Data System (ADS)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision-making process. Getting the data to the required resources is a critical requirement for the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to carry out data activities effectively within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include deadline management, handling of huge volumes of data, fault tolerance, reliability, flexibility to change and ease of use. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use, and new file transfer protocols can easily be added and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental urgent computing requirement for natural disasters, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network-of-managers organisation model [1] together with the configuration manager and the fault tolerance manager. With this proposed design, an easy-to-use, resource-independent data management system that can support and fulfill the computation of a natural disaster prediction within stipulated deadlines can thus be realised. References: [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated Management of Networked Systems: Concepts, Architectures, and Their Operational Application, Morgan Kaufmann Publishers, San Francisco, CA, USA, 1999. [2] H. Kopetz, Real-Time Systems: Design Principles for Distributed Embedded Applications, second edition, Springer, New York, NY, USA, 2011. [3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), 2177-2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
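
    As a rough illustration of the two deadline classes and of the resource-selection role played by the scheduler manager, the following Python sketch is hypothetical: the class names, transfer-rate estimates and selection rule are assumptions made for illustration, not the platform's implementation.

    ```python
    # Hypothetical sketch of deadline-aware resource selection for an urgent data
    # activity; names, rates and the selection rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        transfer_rate_mb_s: float   # estimated sustained transfer rate
        reliability: float          # estimated probability of finishing without failure

    def estimated_seconds(data_mb: float, r: Resource) -> float:
        return data_mb / r.transfer_rate_mb_s

    def select_resource(data_mb: float, deadline_s: float, kind: str, resources: list[Resource]):
        """Pick the most reliable resource expected to finish before the deadline.

        kind == "hard": a late result is useless, so return None if no resource fits.
        kind == "soft-firm": a late result is still (less) useful, so fall back to
        the fastest resource when none can meet the deadline.
        """
        feasible = [r for r in resources if estimated_seconds(data_mb, r) <= deadline_s]
        if feasible:
            return max(feasible, key=lambda r: r.reliability)
        if kind == "hard":
            return None
        return max(resources, key=lambda r: r.transfer_rate_mb_s)

    if __name__ == "__main__":
        pool = [Resource("cluster-A", 120.0, 0.99), Resource("cluster-B", 400.0, 0.90)]
        print(select_resource(data_mb=50_000, deadline_s=300, kind="hard", resources=pool))
    ```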

  15. Second order nonlinear QED processes in ultra-strong laser fields

    NASA Astrophysics Data System (ADS)

    Mackenroth, Felix

    2017-10-01

    In the interaction of ultra-intense laser fields with matter the ever increasing peak laser intensities render nonlinear QED effects ever more important. For long, ultra-intense laser pulses scattering large systems, like a macroscopic plasma, the interaction time can be longer than the scattering time, leading to multiple scatterings. These are usually approximated as incoherent cascades of single-vertex processes. Under certain conditions, however, this common cascade approximation may be insufficient, as it disregards several effects such as coherent processes, quantum interferences or pulse shape effects. Quantifying deviations of the full amplitude of multiple scatterings from the commonly employed cascade approximations is a formidable, yet unaccomplished task. In this talk we are going to discuss how to compute second order nonlinear QED amplitudes and relate them to the conventional cascade approximation. We present examples for typical second order processes and benchmark the full result against common approximations. We demonstrate that the approximation of multiple nonlinear QED scatterings as a cascade of single interactions has certain limitations and discuss these limits in light of upcoming experimental tests.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tauris, T. M.; Langer, N.; Moriya, T. J.

    Recent discoveries of weak and fast optical transients raise the question of their origin. We investigate the minimum ejecta mass associated with core-collapse supernovae (SNe) of Type Ic. We show that mass transfer from a helium star to a compact companion can produce an ultra-stripped core which undergoes iron core collapse and leads to an extremely fast and faint SN Ic. In this Letter, a detailed example is presented in which the pre-SN stellar mass is barely above the Chandrasekhar limit, resulting in the ejection of only ∼0.05-0.20 M☉ of material and the formation of a low-mass neutron star (NS). We compute synthetic light curves of this case and demonstrate that SN 2005ek could be explained by our model. We estimate that the fraction of such ultra-stripped to all SNe could be as high as 10⁻³-10⁻². Finally, we argue that the second explosion in some double NS systems (for example, the double pulsar PSR J0737–3039B) was likely associated with an ultra-stripped SN Ic.

  17. A new SMART sensing system for aerospace structures

    NASA Astrophysics Data System (ADS)

    Zhang, David C.; Yu, Pin; Beard, Shawn; Qing, Peter; Kumar, Amrita; Chang, Fu-Kuo

    2007-04-01

    It is essential to ensure the safety and reliability of in-service structures such as unmanned vehicles by detecting structural cracking, corrosion, delamination, material degradation and other types of damage in time. Utilization of an integrated sensor network system can ultimately enable automatic inspection of such damage. Using a built-in network of actuators and sensors, Acellent is providing tools for advanced structural diagnostics. Acellent's integrated structural health monitoring system consists of an actuator/sensor network, supporting signal generation and data acquisition hardware, and data processing, visualization and analysis software. This paper describes the various features of Acellent's latest SMART sensing system. The new system is USB-based and ultra-portable, using state-of-the-art technology while delivering many functions such as system self-diagnosis, sensor diagnosis, through-transmission and pulse-echo modes of operation, and temperature measurement. Performance of the new system was evaluated for assessment of damage in composite structures.

  18. Reliability of ultra-thin insulation coatings for long-term electrophysiological recordings

    NASA Astrophysics Data System (ADS)

    Hooker, S. A.

    2006-03-01

    Improved measurement of neural signals is needed for research into Alzheimer's, Parkinson's, epilepsy, strokes, and spinal cord injuries. At the heart of such instruments are microelectrodes that measure electrical signals in the body. Such electrodes must be small, stable, biocompatible, and robust. However, it is also important that they be easily implanted without causing substantial damage to surrounding tissue. Tissue damage can lead to the generation of immune responses that can interfere with the electrical measurement, preventing long-term recording. Recent advances in microfabrication and nanotechnology afford the opportunity to dramatically reduce the physical dimensions of recording electrodes, thereby minimizing insertion damage. However, one potential cause for concern is the reliability of the insulating coatings, applied to these ultra-fine-diameter wires to precisely control impedance. Such coatings are often polymeric and are applied everywhere but the sharpened tips of the wires, resulting in nominal impedances between 0.5 MOhms and 2.0 MOhms. However, during operation, the polymer degrades, changing the exposed area and the impedance. In this work, ultra-thin ceramic coatings were deposited as an alternative to polymer coatings. Processing conditions were varied to determine the effect of microstructure on measurement stability during two-electrode measurements in a standard buffer solution. Coatings were applied to seven different metals to determine any differences in performance due to the surface characteristics of the underlying wire. Sintering temperature and wire type had significant effects on coating degradation. Dielectric breakdown was also observed at relatively low voltages, indicating that test conditions must be carefully controlled to maximize reliability.

  19. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  20. On the matter of the reliability of the chemical monitoring system based on the modern control and monitoring devices

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.

    2017-11-01

    The reliability of the main equipment of any power station depends on the correct water chemistry. In order to provide it, it is necessary to monitor the heat carrier quality, which, in its turn, is provided by the chemical monitoring system. Thus, the monitoring system reliability plays an important part in providing the reliability of the main equipment. The monitoring system reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI and so on [1,2]. Power plant workers dealing with the measuring equipment must be informed promptly about any breakdowns in the monitoring system so that they are able to remove the fault quickly. A computer consultant system for personnel maintaining the sensors and other chemical monitoring equipment can help to notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on the laboratory and experimental workbench representing a physical model of a part of the chemical monitoring system.

  1. Quasicrystals and Quantum Computing

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    1997-03-01

    In Quantum (Q) Computing qubits form Q-superpositions for macroscopic times. One scheme for ultra-fast (Q) computing can be based on quasicrystals. Ultrafast processing in Q-coherent structures (and the very existence of durable Q-superpositions) may be 'consequence' of presence of entire manifold of integer arithmetic (A0, aleph-naught of Georg Cantor) at any 4-point of space-time, furthermore, at any point of any multidimensional phase space of (any) N-particle Q-system. The latter, apart from quasicrystals, can include dispersed and/or diluted systems (Berezin, 1994). In such systems such alleged centrepieces of Q-Computing as ability for fast factorization of long integers can be processed by sheer virtue of the fact that entire infinite pattern of prime numbers is instantaneously available as 'free lunch' at any instant/point. Infinitely rich pattern of A0 (including pattern of primes and almost primes) acts as 'independent' physical effect which directly generates Q-dynamics (and physical world) 'out of nothing'. Thus Q-nonlocality can be ultimately based on instantaneous interconnectedness through ever- the-same structure of A0 ('Platonic field' of integers).

  2. Computational methods for structural load and resistance modeling

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
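
    To make the load/resistance formulation concrete, the sketch below estimates P(failure) = P(R < L) by direct Monte Carlo sampling and checks it against the closed-form result for independent normal variables; the distributions and parameter values are illustrative assumptions, not the AMV+ algorithm or the paper's example problems.

    ```python
    # Minimal Monte Carlo estimate of P(failure) = P(R < L) for independent normal
    # resistance R and load L; parameters are illustrative assumptions.
    import numpy as np
    from math import sqrt
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mu_R, sigma_R = 50.0, 5.0      # resistance mean / standard deviation
    mu_L, sigma_L = 30.0, 8.0      # load mean / standard deviation

    n = 1_000_000
    R = rng.normal(mu_R, sigma_R, n)
    L = rng.normal(mu_L, sigma_L, n)
    pf_mc = np.mean(R < L)

    # Closed form for the linear limit state g = R - L with independent normals
    beta = (mu_R - mu_L) / sqrt(sigma_R**2 + sigma_L**2)
    pf_exact = norm.cdf(-beta)

    print(f"Monte Carlo P_f = {pf_mc:.5f}, closed-form P_f = {pf_exact:.5f}")
    ```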

  3. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, thus ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high frequency illumination and the bucket detector needs nanosecond trigger precision, so the development of a synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art, and takes an important step towards ghost imaging of dynamic scenes. In addition, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.

  4. JPRS Report, Science & Technology, USSR: Computers, Control Systems and Machines

    DTIC Science & Technology

    1989-03-14

    optimizatsii slozhnykh sistem (Coding Theory and Complex System Optimization). Alma-Ata, Nauka Press, 1977, pp. 8-16. 11. Author's certificate number ... Interpreter Specifics [O. I. Amvrosova] ... Creation of Modern Computer Systems for Complex Ecological ... processor can be designed to decrease degradation upon failure and assure more reliable processor operation, without requiring more complex software or

  5. Towards scalable quantum communication and computation: Novel approaches and realizations

    NASA Astrophysics Data System (ADS)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as candidates for naturally error-free quantum computation. We propose a scheme to unambiguously detect the anyonic statistics in spin lattice realizations using ultra-cold atoms in an optical lattice. We show how to reliably read and write topologically protected quantum memory using an atomic or photonic qubit.

  6. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.

  7. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  8. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

    The trinary signed-digit (TSD) number system is of interest for ultra fast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
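
    For readers unfamiliar with the number system, the sketch below encodes integers into trinary signed digits drawn from {-1, 0, 1} (balanced ternary) and evaluates them back; it illustrates the representation only and does not reproduce the paper's 5-combination decimal encoding table or its one-step carry-free adder.

    ```python
    # Minimal trinary signed-digit (balanced ternary) encoder/decoder.
    # Digits are restricted to {-1, 0, 1}; least-significant digit first.

    def to_tsd(n: int) -> list[int]:
        """Convert an integer to a trinary signed-digit list."""
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:              # digit 2 becomes -1 plus a carry into the next position
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits or [0]

    def from_tsd(digits: list[int]) -> int:
        """Evaluate a TSD digit list back to an integer."""
        return sum(d * 3**i for i, d in enumerate(digits))

    if __name__ == "__main__":
        for value in (0, 7, -25, 1234):
            tsd = to_tsd(value)
            assert from_tsd(tsd) == value
            print(value, tsd)
    ```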

  9. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a Server-to-Server and Server-to-Client access network, giving the supercomputing center the following advantages: highest performance Transport Level connections (up to 40 MBytes/sec effective rates); matches the throughput of the emerging high performance disk technologies, such as RAID, parallel head transfer devices and software striping; supports standard network and file system applications using a sockets-based application program interface, such as FTP, rcp, rdump, etc.; supports access to the Network File System (NFS) and LARGE aggregate bandwidth for large NFS usage; provides access to a distributed, hierarchical data server capability using the DISCOS UniTree product; supports file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  10. GaN-on-diamond electronic device reliability: Mechanical and thermo-mechanical integrity

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Sun, Huarui; Pomeroy, James W.; Francis, Daniel; Faili, Firooz; Twitchen, Daniel J.; Kuball, Martin

    2015-12-01

    The mechanical and thermo-mechanical integrity of GaN-on-diamond wafers used for ultra-high power microwave electronic devices was studied using a micro-pillar based in situ mechanical testing approach combined with an optical investigation of the stress and heat transfer across interfaces. We find the GaN/diamond interface to be thermo-mechanically stable, illustrating the potential for this material for reliable GaN electronic devices.

  11. Test of the Center for Automated Processing of Hardwoods' Auto-Image Detection and Computer-Based Grading and Cutup System

    Treesearch

    Philip A. Araman; Janice K. Wiedenbeck

    1995-01-01

    Automated lumber grading and yield optimization using computer controlled saws will be plausible for hardwoods if and when lumber scanning systems can reliably identify all defects by type. Existing computer programs could then be used to grade the lumber, identify the best cut-up solution, and control the sawing machines. The potential value of a scanning grading...

  12. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    DOE PAGES

    Guthrie, Michael A.

    2013-01-01

    A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
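
    For orientation, the quantity computed above can be stated generically in capacity/demand form (C and D here are placeholders for the strain energy at failure and the peak modal strain energy, not the paper's notation):

    ```latex
    % Hasofer-Lind index: distance from the origin to the limit state surface
    % g(u) = 0 after transforming the random variables to independent standard normals u.
    \[
      \beta_{HL} \;=\; \min_{g(\mathbf{u}) = 0} \lVert \mathbf{u} \rVert .
    \]
    % For a linear limit state g = C - D with independent normal capacity C and
    % demand D, this reduces to
    \[
      \beta_{HL} \;=\; \frac{\mu_C - \mu_D}{\sqrt{\sigma_C^{2} + \sigma_D^{2}}},
      \qquad
      P_f \approx \Phi(-\beta_{HL}).
    \]
    ```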

  13. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    PubMed Central

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
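
    To make the dependency-graph idea concrete, the sketch below accumulates reliability over the execution paths of a small, hypothetical component graph in the spirit of path-based software reliability models; it is not the paper's CPDG construction or its stack-based algorithm, and all numbers are invented.

    ```python
    # Hypothetical path-based reliability over an acyclic component dependency graph.
    # Node reliabilities and transition probabilities are illustrative assumptions;
    # a real CPDG with loops needs the loop handling discussed in the paper.

    node_reliability = {"A": 0.999, "B": 0.995, "C": 0.990, "End": 1.0}
    # edges: node -> list of (next_node, transition_probability); probabilities out
    # of each non-terminal node sum to 1.
    edges = {
        "A": [("B", 0.7), ("C", 0.3)],
        "B": [("End", 1.0)],
        "C": [("B", 0.4), ("End", 0.6)],
        "End": [],
    }

    def reliability_from(node: str) -> float:
        """Probability of reaching 'End' from `node` without a component failure."""
        r = node_reliability[node]
        if node == "End":
            return r
        return r * sum(p * reliability_from(nxt) for nxt, p in edges[node])

    print(f"estimated system reliability: {reliability_from('A'):.6f}")
    ```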

  14. The scientific data acquisition system of the GAMMA-400 space project

    NASA Astrophysics Data System (ADS)

    Bobkov, S. G.; Serdin, O. V.; Gorbunov, M. S.; Arkhangelskiy, A. I.; Topchiev, N. P.

    2016-02-01

    The description of the scientific data acquisition system (SDAS) designed by SRISA for the GAMMA-400 space project is presented. We consider the problem of unifying electronics at different levels: the set of reliable fault-tolerant integrated circuits fabricated in a 0.25 µm Silicon-on-Insulator CMOS technology, and the high-speed interfaces and reliable modules used in the space instruments. The characteristics of the reliable fault-tolerant very large scale integration (VLSI) technology designed by SRISA for the development of computation systems for space applications are considered. The scalable network structure of SDAS, based on the Serial RapidIO interface and including the real-time operating system BAGET, is also described.

  15. Using Transverse Optical Patterns for Ultra-Low-Light All-Optical Switching

    DTIC Science & Technology

    2008-01-01

    handling devices from cellular telephones to supercomputers. The development of the internet (world-wide-web) was enabled by personal computers and ... increase in response time for decreasing power that is qualitatively similar to experimental observations. To facilitate comparison to Fig. 5.8(a ... wells and of the entire ring correspond to the preference of the system to emit light in a hexagonal pattern. To describe the pattern orientation using

  16. Assembly of Ultra-Dense Nanowire-Based Computing Systems

    DTIC Science & Technology

    2006-06-30

    characterized basic device element properties and statistics; demonstrated product of sums (POS) validating assembled 2-bit adder structures; demonstrated ... linear region (Vds = 10 mV) from the peak g = 3 µS at |Vg - VT| = 0.13 V using the charge control model, represents more than a factor of 10 improvement over ... disrupted by ionizing particles or thermal fluctuation. Further, when working with such small charges, it is statistically possible that logic

  17. Multi-step cure kinetic model of ultra-thin glass fiber epoxy prepreg exhibiting both autocatalytic and diffusion-controlled regimes under isothermal and dynamic-heating conditions

    NASA Astrophysics Data System (ADS)

    Kim, Ye Chan; Min, Hyunsung; Hong, Sungyong; Wang, Mei; Sun, Hanna; Park, In-Kyung; Choi, Hyouk Ryeol; Koo, Ja Choon; Moon, Hyungpil; Kim, Kwang J.; Suhr, Jonghwan; Nam, Jae-Do

    2017-08-01

    As packaging technologies that reduce the assembly area of the substrate are in demand, thin composite laminate substrates require extremely high performance in material properties such as the coefficient of thermal expansion (CTE) and stiffness. Accordingly, thermosetting resin systems, which consist of multiple fillers, monomers and/or catalysts in thermoset-based glass fiber prepregs, are extremely complicated and closely associated with rheological properties, which depend on the temperature cycles for cure. For the process control of these complex systems, it is usually necessary to obtain a reliable kinetic model that can be used for complex thermal cycles, which usually include both isothermal and dynamic-heating segments. In this study, an ultra-thin prepreg with highly loaded silica beads and glass fibers in an epoxy/amine resin system was investigated as a model system by isothermal/dynamic heating experiments. The maximum degree of cure was obtained as a function of temperature. The curing kinetics of the model prepreg system exhibited a multi-step reaction and a limited conversion as a function of isothermal curing temperature, which are often observed in epoxy cure systems because of the rate-determining diffusion of polymer chain growth. The modified kinetic equation accurately described the isothermal behavior and the beginning of the dynamic-heating behavior by integrating the obtained maximum degree of cure into the kinetic model development.

  18. RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)

    EPA Science Inventory

    Abstract

    It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...

  19. High Available COTS Based Computer for Space

    NASA Astrophysics Data System (ADS)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to a flight control system of an autonomous navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications are calling for new architectures to fulfill the availability and reliability requirements as well as the increase in required data processing power. In contrast to the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  20. Distributed computing for macromolecular crystallography

    PubMed Central

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Ballard, Charles

    2018-01-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240

  1. Distributed computing for macromolecular crystallography.

    PubMed

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  2. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.

    1996-12-17

    A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.
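
    A rough, hypothetical illustration of the multivariate part of such surveillance is sketched below: a Mahalanobis-distance check of new sensor vectors against a baseline of normal operation. The patented method additionally removes serial correlation and applies a probability ratio test, neither of which is reproduced here, and all numbers are invented.

    ```python
    # Hypothetical Mahalanobis-distance alarm check for correlated process parameters.
    # Baseline statistics and the chi-square threshold are illustrative assumptions.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)

    # Baseline ("normal operation") training data: three correlated sensor channels.
    baseline = rng.multivariate_normal(
        mean=[10.0, 5.0, 200.0],
        cov=[[1.0, 0.6, 2.0], [0.6, 1.0, 1.5], [2.0, 1.5, 25.0]],
        size=5000,
    )
    mu = baseline.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
    threshold = chi2.ppf(0.999, df=baseline.shape[1])   # ~0.1% false-alarm rate

    def mahalanobis_sq(x: np.ndarray) -> float:
        d = x - mu
        return float(d @ cov_inv @ d)

    def is_alarm(x: np.ndarray) -> bool:
        return mahalanobis_sq(x) > threshold

    print(is_alarm(np.array([10.2, 5.1, 201.0])))   # near the baseline -> False
    print(is_alarm(np.array([14.0, 1.0, 260.0])))   # far from the baseline -> True
    ```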

  3. A Unified Framework for Simulating Markovian Models of Highly Dependable Systems

    DTIC Science & Technology

    1989-07-01

    Dependability Evaluation of Complex Fault-Tolerant Computing Systems. Proceedings of the Eleventh Symposium on Fault-Tolerant Computing, Portland, Maine ... New York. [12] Geist, R.M. and Trivedi, K.S. (1983). Ultra-High Reliability Prediction for Fault-Tolerant Computer Systems. IEEE Transactions ... (1988). Survey of Software Tools for Evaluating Reliability, Availability, and Serviceability. ACM Computing Surveys 20, 4, 227-269. [32] Meyer

  4. Smaller Footprint Drilling System for Deep and Hard Rock Environments; Feasibility of Ultra-High-Speed Diamond Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis; Alan Black; Homer Robertson

    2006-03-01

    The two phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high rotational speeds (greater than 10,000 rpm). The work includes a feasibility-of-concept research effort aimed at development that will ultimately result in the ability to reliably drill ''faster and deeper'', possibly with smaller, more mobile rigs. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration (ROP) rock cutting with substantially lower inputs of energy and loads. The significance of the ultra-high rotary speed drilling system is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled ''Smaller Footprint Drilling System for Deep and Hard Rock Environments: Feasibility of Ultra-High-Speed Diamond Drilling'' for the period starting 1 October 2004 through 30 September 2005. Additionally, research activity from 1 October 2005 through 28 February 2006 is included in this report: (1) TerraTek reviewed applicable literature and documentation and convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek designed and planned Phase I bench scale experiments. Some difficulties continue in obtaining ultra-high speed motors. Improvements have been made to the loading mechanism and the rotational speed monitoring instrumentation. New drill bit designs have been provided to vendors for production. A more consistent product is required to minimize the differences in bit performance. A test matrix for the final core bit testing program has been completed. (3) TerraTek is progressing through Task 3 ''Small-scale cutting performance tests''. (4) Significant testing has been performed on nine different rocks. (5) Bit balling has been observed on some rock and seems to be more pronounced at higher rotational speeds. (6) Preliminary analysis of data has been completed and indicates that decreased specific energy is required as the rotational speed increases (Task 4). This data analysis has been used to direct the efforts of the final testing for Phase I (Task 5). (7) Technology transfer (Task 6) has begun with technical presentations to the industry (see Judzis).

  5. SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The two phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high (greater than 10,000 rpm) rotational speeds. The work includes a feasibility-of-concept research effort aimed at development and test results that will ultimately result in the ability to reliably drill ''faster and deeper'', possibly with rigs having a smaller footprint to be more mobile. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration rock cutting with substantially lower inputs of energy and loads. The project draws on TerraTek results submitted to NASA's ''Drilling on Mars'' program. The objective of that program was to demonstrate miniaturization of a robust and mobile drilling system that expends small amounts of energy. TerraTek successfully tested ultrahigh speed (~40,000 rpm) small kerf diamond coring. Adaptation to the oilfield will require innovative bit designs for full hole drilling or continuous coring and the eventual development of downhole ultra-high speed drives. For domestic operations involving hard rock and deep oil and gas plays, improvements in penetration rates are an opportunity to reduce well costs and make viable certain field developments. An estimate of North American hard rock drilling costs is in excess of $1,200 MM. Thus potential savings of $200 MM to $600 MM are possible if drilling rates are doubled [assuming bit life is reasonable]. The net result for operators is improved profit margin as well as an improved position on reserves. The significance of the ''ultra-high rotary speed drilling system'' is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled ''SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING'' for the period starting June 23, 2003 through September 30, 2004. TerraTek has reviewed applicable literature and documentation and has convened a project kick-off meeting with Industry Advisors in attendance. TerraTek has designed and planned Phase I bench scale experiments. Some difficulties in obtaining ultra-high speed motors for this feasibility work were encountered, though they were sourced mid 2004. TerraTek is progressing through Task 3 ''Small-scale cutting performance tests''. Some improvements over early NASA experiments have been identified.

  6. SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The two phase program addresses long-term developments in deep well and hard rock drilling. TerraTek believes that significant improvements in drilling deep hard rock will be obtained by applying ultra-high (greater than 10,000 rpm) rotational speeds. The work includes a feasibility-of-concept research effort aimed at development and test results that will ultimately result in the ability to reliably drill ''faster and deeper'', possibly with rigs having a smaller footprint to be more mobile. The principal focus is on demonstration testing of diamond bits rotating at speeds in excess of 10,000 rpm to achieve high rate of penetration rock cutting with substantially lower inputs of energy and loads. The project draws on TerraTek results submitted to NASA's ''Drilling on Mars'' program. The objective of that program was to demonstrate miniaturization of a robust and mobile drilling system that expends small amounts of energy. TerraTek successfully tested ultrahigh speed (~40,000 rpm) small kerf diamond coring. Adaptation to the oilfield will require innovative bit designs for full hole drilling or continuous coring and the eventual development of downhole ultra-high speed drives. For domestic operations involving hard rock and deep oil and gas plays, improvements in penetration rates are an opportunity to reduce well costs and make viable certain field developments. An estimate of North American hard rock drilling costs is in excess of $1,200 MM. Thus potential savings of $200 MM to $600 MM are possible if drilling rates are doubled [assuming bit life is reasonable]. The net result for operators is improved profit margin as well as an improved position on reserves. The significance of the ''ultra-high rotary speed drilling system'' is the ability to drill into rock at very low weights on bit and possibly lower energy levels. The drilling and coring industry today does not practice this technology. The highest rotary speed systems in oil field and mining drilling and coring today run less than 10,000 rpm--usually well below 5,000 rpm. This document details the progress to date on the program entitled ''SMALLER FOOTPRINT DRILLING SYSTEM FOR DEEP AND HARD ROCK ENVIRONMENTS; FEASIBILITY OF ULTRA-HIGH SPEED DIAMOND DRILLING'' for the period starting June 23, 2003 through September 30, 2004. (1) TerraTek has reviewed applicable literature and documentation and has convened a project kick-off meeting with Industry Advisors in attendance. (2) TerraTek has designed and planned Phase I bench scale experiments. Some difficulties in obtaining ultra-high speed motors for this feasibility work were encountered, though they were sourced mid 2004. (3) TerraTek is progressing through Task 3 ''Small-scale cutting performance tests''. Some improvements over early NASA experiments have been identified.

  7. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer

    Jones, Terry R.; Koenig, Gregory A.

    2014-04-30

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.

  8. A highly reliable, autonomous data communication subsystem for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Masotto, Thomas; Alger, Linda

    1990-01-01

    The need to meet the stringent performance and reliability requirements of advanced avionics systems has frequently led to implementations which are tailored to a specific application and are therefore difficult to modify or extend. Furthermore, many integrated flight critical systems are input/output intensive. By using a design methodology which customizes the input/output mechanism for each new application, the cost of implementing new systems becomes prohibitively expensive. One solution to this dilemma is to design computer systems and input/output subsystems which are general purpose, but which can be easily configured to support the needs of a specific application. The Advanced Information Processing System (AIPS), currently under development, has these characteristics. The design and implementation of the prototype I/O communication system for AIPS is described. AIPS addresses reliability issues related to data communications by the use of reconfigurable I/O networks. When a fault or damage event occurs, communication is restored to functioning parts of the network and the failed or damaged components are isolated. Performance issues are addressed by using a parallelized computer architecture which decouples input/output (I/O) redundancy management and I/O processing from the computational stream of an application. The autonomous nature of the system derives from the highly automated and independent manner in which I/O transactions are conducted for the application, as well as from the fact that the hardware redundancy management is entirely transparent to the application.

  9. A reliability analysis tool for SpaceWire network

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and serves as the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of a SpaceWire network. According to the function division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to a system reliability matrix, and the reliability of the network system can be deduced by integrating all the reliability indexes in the matrix. With this method, we develop a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyze several cases on typical architectures. The analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the phase of SpaceWire network system design.
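
    The task-based decomposition can be pictured with a small hypothetical example in which each task runs over one or more redundant paths, each path is a series of components, and the usual series/parallel rules are applied. This is an illustrative simplification (paths are treated as independent even where they share end nodes), not the tool's actual reliability-matrix formulation.

    ```python
    # Hypothetical series/parallel reliability roll-up for tasks over redundant paths.
    # Component reliabilities are illustrative assumptions.
    from math import prod

    component_r = {"nodeA": 0.999, "router1": 0.995, "router2": 0.995, "nodeB": 0.999}

    # Each task: a list of redundant paths, each path a series of components.
    tasks = {
        "telemetry": [["nodeA", "router1", "nodeB"], ["nodeA", "router2", "nodeB"]],
        "payload":   [["nodeA", "router1", "nodeB"]],
    }

    def path_reliability(path):        # series: every component on the path must work
        return prod(component_r[c] for c in path)

    def task_reliability(paths):       # parallel: at least one redundant path must work
        return 1.0 - prod(1.0 - path_reliability(p) for p in paths)

    def system_reliability(task_map):  # all tasks must be available
        return prod(task_reliability(p) for p in task_map.values())

    for name, paths in tasks.items():
        print(name, round(task_reliability(paths), 6))
    print("system", round(system_reliability(tasks), 6))
    ```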

  10. Magnetic bilayer-skyrmions without skyrmion Hall effect

    NASA Astrophysics Data System (ADS)

    Zhang, Xichao; Zhou, Yan; Ezawa, Motohiko

    2016-01-01

    Magnetic skyrmions might be used as information carriers in future advanced memories, logic gates and computing devices. However, there exists an obstacle known as the skyrmion Hall effect (SkHE), that is, the skyrmion trajectories bend away from the driving current direction due to the Magnus force. Consequently, the skyrmions in constricted geometries may be destroyed by touching the sample edges. Here we theoretically propose that the SkHE can be suppressed in the antiferromagnetically exchange-coupled bilayer system, since the Magnus forces in the top and bottom layers are exactly cancelled. We show that such a pair of SkHE-free magnetic skyrmions can be nucleated and be driven by the current-induced torque. Our proposal provides a promising means to move magnetic skyrmions in a perfectly straight trajectory in ultra-dense devices with ultra-fast processing speed.

  11. Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, Raymond J.

    2010-01-01

    The Autonomous Flight Safety System (AFSS) aims to replace the human element of range safety operations, as well as reduce reliance on expensive, downrange assets for launches of expendable launch vehicles (ELVs). The system consists of multiple navigation sensors and flight computers that provide a highly reliable platform. It is designed to ensure that single-event failures in a flight computer or sensor will not bring down the whole system. The flight computer uses a rules-based structure derived from range safety requirements to make decisions whether or not to destroy the rocket.

  12. Development of a Whole Slide Imaging System on Smartphones and Evaluation With Frozen Section Samples

    PubMed Central

    Jiang, Liren

    2017-01-01

    Background The aim was to develop scalable Whole Slide Imaging (sWSI), a WSI system based on mainstream smartphones coupled with regular optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, supporting both oil and dry objective lenses of different magnifications, and reasonably high throughput. These performance metrics should be evaluated by expert pathologists and match those of high-end scanners. Objective The aim was to develop scalable Whole Slide Imaging (sWSI), a whole slide imaging system based on smartphones coupled with optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, supporting both oil and dry object lens of different magnification. All performance metrics should be evaluated by expert pathologists and match those of high-end scanners. Methods In the sWSI design, the digitization process is split asynchronously between light-weight clients on smartphones and powerful cloud servers. The client apps automatically capture FoVs at up to 12-megapixel resolution and process them in real-time to track the operation of users, then give instant feedback of guidance. The servers first restitch each pair of FoVs, then automatically correct the unknown nonlinear distortion introduced by the lens of the smartphone on the fly, based on pair-wise stitching, before finally combining all FoVs into one gigapixel VS for each scan. These VSs can be viewed using Internet browsers anywhere. In the evaluation experiment, 100 frozen section slides from patients randomly selected among in-patients of the participating hospital were scanned by both a high-end Leica scanner and sWSI. All VSs were examined by senior pathologists whose diagnoses were compared against those made using optical microscopy as ground truth to evaluate the image quality. Results The sWSI system is developed for both Android and iPhone smartphones and is currently being offered to the public. The image quality is reliable and throughput is approximately 1 FoV per second, yielding a 15-by-15 mm slide under 20X object lens in approximately 30-35 minutes, with little training required for the operator. The expected cost for setup is approximately US $100 and scanning each slide costs between US $1 and $10, making sWSI highly cost-effective for infrequent or low-throughput usage. In the clinical evaluation of sample-wise diagnostic reliability, average accuracy scores achieved by sWSI-scan-based diagnoses were as follows: 0.78 for breast, 0.88 for uterine corpus, 0.68 for thyroid, and 0.50 for lung samples. The respective low-sensitivity rates were 0.05, 0.05, 0.13, and 0.25 while the respective low-specificity rates were 0.18, 0.08, 0.20, and 0.25. The participating pathologists agreed that the overall quality of sWSI was generally on par with that produced by high-end scanners, and did not affect diagnosis in most cases. Pathologists confirmed that sWSI is reliable enough for standard diagnoses of most tissue categories, while it can be used for quick screening of difficult cases. Conclusions As an ultra-low-cost alternative to whole slide scanners, diagnosis-ready VS quality and robustness for commercial usage is achieved in the sWSI solution. Operated on main-stream smartphones installed on normal optical microscopes, sWSI readily offers affordable and reliable WSI to resource-limited or infrequent clinical users. PMID:28916508

  13. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  14. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  15. Design and reliability analysis of DP-3 dynamic positioning control architecture

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru

    2011-12-01

    As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundancy takes the form of a triple-redundant hot-standby configuration comprising three identical operator stations and three real-time control computers connected to each other through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of task loose synchronization, majority voting, and fault detection is presented in detail. A hierarchical software architecture was planned during software development, consisting of an application layer, a real-time layer, and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability, and the effects of parameter variations on the reliability measures were investigated. A time-domain dynamic simulation was carried out for a deepwater drilling rig to prove the feasibility of the proposed control architecture.
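
    The following is a minimal software-level sketch of the majority-voting step described above, not the DP-3 implementation itself; the deviation threshold and the command values are hypothetical.

    ```python
    # Minimal sketch: majority voting across three redundant controller outputs
    # with a simple deviation-from-median fault check.
    from statistics import median

    def vote(outputs, tol=0.05):
        """Return the voted command and indices of channels flagged as faulty.

        outputs: three controller commands (floats).
        tol: maximum allowed deviation from the median before a channel is
             declared faulty (hypothetical threshold).
        """
        m = median(outputs)
        faulty = [i for i, v in enumerate(outputs) if abs(v - m) > tol]
        healthy = [v for i, v in enumerate(outputs) if i not in faulty]
        # With one faulty channel the two remaining channels are averaged;
        # with no faults the median of all three is effectively used.
        voted = sum(healthy) / len(healthy) if healthy else m
        return voted, faulty

    print(vote([1.00, 1.01, 3.70]))   # third channel flagged; vote ~1.005
    ```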

  16. Surgical correction of cryptotia combined with an ultra-delicate split-thickness skin graft in continuity with a full-thickness skin rotation flap.

    PubMed

    Yu, Xiaobo; Yang, Qinghua; Jiang, Haiyue; Pan, Bo; Zhao, Yanyong; Lin, Lin

    2017-11-01

    Cryptotia is a common congenital ear deformity in Asian populations. In cryptotia, a portion of the upper ear is hidden and fixed in a pocket of the skin of the mastoid. Here we describe our method for cryptotia correction by using an ultra-delicate split-thickness skin graft in continuity with a full-thickness skin rotation flap. We developed a new method for correcting cryptotia by using an ultra-delicate split-thickness skin graft in continuity with a full-thickness skin rotation flap. Following ear release, the full-thickness skin rotation flap is rotated into the defect, and the donor site is covered with an ultra-delicate split-thickness skin graft raised in continuity with the flap. All patients exhibited satisfactory release of cryptotia. No cases involved partial or total flap necrosis, and post-operative outcomes using this new technique for cryptotia correction have been more than satisfactory. Our method of using an ultra-delicate split-thickness skin graft in continuity with a full-thickness skin rotation flap to correct cryptotia is simple and reliable. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. An abstract specification language for Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1985-01-01

    Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in an informal manner and illustrated by example.

  18. An abstract language for specifying Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1986-01-01

    Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in an informal manner and illustrated by example.
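
    To make the underlying computation concrete, the sketch below builds the states and transitions of a very small Markov reliability model programmatically (a triplex system that fails only when all three units have failed) and evaluates its reliability from the matrix exponential of the generator. The failure rate and mission time are illustrative; this is not the abstract specification language described in these records.

    ```python
    # Minimal sketch: states count the number of working units; transitions are
    # single-unit failures; reliability is one minus the absorbing-state mass.
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4      # per-hour failure rate of one unit (assumed)
    t = 10.0        # mission time in hours (assumed)

    # States: 0 -> 3 units up, 1 -> 2 up, 2 -> 1 up, 3 -> system failed.
    Q = np.zeros((4, 4))
    for s, n_up in enumerate([3, 2, 1]):
        rate = n_up * lam
        Q[s, s] = -rate
        Q[s, s + 1] = rate      # one more unit fails
    # State 3 is absorbing (row of zeros).

    p0 = np.array([1.0, 0.0, 0.0, 0.0])
    pt = p0 @ expm(Q * t)
    print(f"P(system failure by t) = {pt[-1]:.3e}")
    print(f"reliability R(t)       = {1.0 - pt[-1]:.12f}")
    ```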

  19. Periodically Self Restoring Redundant Systems for VLSI Based Highly Reliable Design,

    DTIC Science & Technology

    1984-01-01

    fault tolerance technique for realizing highly reliable computer systems for critical control applications. However, VLSI technology has imposed a ... operating correctly; failed ... critical real time control applications. ... modules are discarded from the vote. ... the classical "static" voted redundancy ... redundant modules are failure free ... number of interconnections. This results in ... However, for applications requiring high ... modular complexity because ...

  20. Active Reliability Engineering - Technical Concept and Program Plan. A Solid-State Systems Approach to Increased Reliability and Availability in Military Systems.

    DTIC Science & Technology

    1983-10-05

    battle damage. Others are local electrical power and cooling disruptions. Again, a highly critical function is lost if its computer site is destroyed. A...formalized design of the test bed to meet the requirements of the functional description and goals of the program. AMTEC --Z3IT TASKS: 610, 710, 810

  1. Continuing challenges for computer-based neuropsychological tests.

    PubMed

    Letz, Richard

    2003-08-01

    A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable, and valid than existing test procedures. These additional developments, along with implementation of innovative reporting formats, are necessary for more widespread acceptance of the testing systems.

  2. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    NASA Astrophysics Data System (ADS)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but the design often involves the mutual coupling of multiple factors. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which reliability, cost, structural weight, and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of an aircraft critical system are computed. The results show that this method is convenient and workable and, with appropriate modifications, is applicable to the redundancy configuration and optimization of various designs, giving it good practical value.
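
    The sketch below illustrates the kind of redundancy-allocation problem described above with a brute-force enumeration standing in for the paper's Direct Search Method: it maximizes series-parallel system reliability subject to cost and weight budgets. All reliabilities, costs, weights, and budgets are made up for illustration.

    ```python
    # Minimal sketch: search over redundancy levels for three subsystems,
    # maximizing system reliability under cost and weight constraints.
    from itertools import product

    r = [0.90, 0.85, 0.95]     # single-unit reliability per subsystem (assumed)
    cost = [2.0, 3.0, 1.5]     # cost per added unit (assumed)
    weight = [1.0, 2.0, 0.5]   # weight per added unit (assumed)
    COST_MAX, WEIGHT_MAX = 20.0, 12.0

    best = None
    for n in product(range(1, 5), repeat=3):     # 1..4 parallel units each
        c = sum(ni * ci for ni, ci in zip(n, cost))
        w = sum(ni * wi for ni, wi in zip(n, weight))
        if c > COST_MAX or w > WEIGHT_MAX:
            continue
        # Series system of parallel groups: R = prod(1 - (1 - r_i)^n_i).
        R = 1.0
        for ni, ri in zip(n, r):
            R *= 1.0 - (1.0 - ri) ** ni
        if best is None or R > best[0]:
            best = (R, n, c, w)

    print("best reliability %.5f with allocation %s (cost %.1f, weight %.1f)" % best)
    ```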

  3. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
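
    A minimal sketch of the kind of computation such a program performs is given below: it evaluates the top-event probability of a small fault tree exactly by enumerating basic-event states, which correctly handles a basic event that appears in more than one fault path (the situation the abstract addresses with conditional probabilities). The tree structure and probabilities are invented for illustration.

    ```python
    # Minimal sketch: exact top-event probability of a small fault tree in which
    # basic event B feeds two different fault paths.
    from itertools import product

    p = {"A": 0.01, "B": 0.02, "C": 0.005}   # basic-event probabilities (assumed)

    def top_event(state):
        # TOP = (A AND B) OR (B AND C)
        return (state["A"] and state["B"]) or (state["B"] and state["C"])

    p_top = 0.0
    for bits in product([False, True], repeat=len(p)):
        state = dict(zip(p, bits))
        prob = 1.0
        for name, occurred in state.items():
            prob *= p[name] if occurred else 1.0 - p[name]
        if top_event(state):
            p_top += prob

    print(f"P(top event) = {p_top:.6e}")
    ```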

  4. Static Program Analysis for Reliable, Trusted Apps

    DTIC Science & Technology

    2017-02-01

    flexibility to system design. However, it is challenging for a static analysis to compute or verify properties about a system that uses implicit control ... sources might affect the variable's value. The type qualifier @Sink indicates where (information computed from) the value might be output. These ... upper bound on the set of sensitive sources that were actually used to compute the value. If the type of x is qualified by @Source({INTERNET, LOCATION

  5. Simultaneous determination of tilianin and its metabolites in mice using ultra-high-performance liquid chromatography with tandem mass spectrometry and its application to a pharmacokinetic study.

    PubMed

    Wang, Liping; Chen, Qingwei; Zhu, Lijun; Zeng, Xuejun; Li, Qiang; Hu, Ming; Wang, Xinchun; Liu, Zhongqiu

    2018-04-01

    Tilianin is an active flavonoid glycoside found in many medical plants. Data are lacking regarding its pharmacokinetics and disposition in vivo. The objective of this study was to develop a sensitive, reliable and validated ultra-high-performance liquid chromatography with tandem mass spectrometry (UHPLC-MS/MS) method to simultaneously quantify tilianin and its main metabolites and to determine its pharmacokinetics in wild-type and breast cancer resistance protein knockout (Bcrp1-/-) FVB mice. Chromatographic separation was accomplished on a C 18 column by utilizing acetonitrile and 0.5 mm ammonium acetate as the mobile phase. Mass spectrometric detection was performed using electrospray ionization in both positive and negative modes. The results showed that the precision, accuracy and recovery, as well as the stability of tilianin and its metabolites in mouse plasma, were all within acceptable limits. Acacetin-7-glucuronide and acacetin-7-sulfate were the major metabolites of tilianin in mouse plasma. Moreover, systemic exposure of acacetin-7-sulfate was significantly higher in Bcrp1 (-/-) FVB mice compared with wild-type FVB mice. In conclusion, the fully validated UHPLC-MS/MS method was sensitive, reliable, and was successfully applied to assess the pharmacokinetics of tilianin in wild-type and Bcrp1 (-/-) FVB mice. Breast cancer resistance protein had a significant impact on the elimination of the sulfated metabolite of tilianin in vivo. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Interpretive Reliability of Six Computer-Based Test Interpretation Programs for the Minnesota Multiphasic Personality Inventory-2.

    PubMed

    Deskovitz, Mark A; Weed, Nathan C; McLaughlan, Joseph K; Williams, John E

    2016-04-01

    The reliability of six Minnesota Multiphasic Personality Inventory-Second edition (MMPI-2) computer-based test interpretation (CBTI) programs was evaluated across a set of 20 commonly appearing MMPI-2 profile codetypes in clinical settings. Evaluation of CBTI reliability comprised examination of (a) interrater reliability, the degree to which raters arrive at similar inferences based on the same CBTI profile and (b) interprogram reliability, the level of agreement across different CBTI systems. Profile inferences drawn by four raters were operationalized using q-sort methodology. Results revealed no significant differences overall with regard to interrater and interprogram reliability. Some specific CBTI/profile combinations (e.g., the CBTI by Automated Assessment Associates on a within normal limits profile) and specific profiles (e.g., the 4/9 profile displayed greater interprogram reliability than the 2/4 profile) were interpreted with variable consensus (α range = .21-.95). In practice, users should consider that certain MMPI-2 profiles are interpreted more or less consensually and that some CBTIs show variable reliability depending on the profile. © The Author(s) 2015.
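
    As a minimal illustration of the consensus statistic reported above, the sketch below computes Cronbach's alpha over the ratings that several raters (or CBTI programs) assign to the same set of profile statements. The ratings matrix is invented and is not the study's q-sort data.

    ```python
    # Minimal sketch: Cronbach's alpha from an items-by-raters ratings matrix.
    import numpy as np

    def cronbach_alpha(ratings):
        """ratings: array of shape (n_items, n_raters)."""
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]
        var_sum = ratings.var(axis=0, ddof=1).sum()   # sum of per-rater variances
        total_var = ratings.sum(axis=1).var(ddof=1)   # variance of item totals
        return (k / (k - 1.0)) * (1.0 - var_sum / total_var)

    ratings = np.array([[7, 6, 7], [2, 3, 2], [5, 5, 4], [8, 7, 8], [1, 2, 1]])
    print(f"alpha = {cronbach_alpha(ratings):.2f}")
    ```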

  7. Are Handheld Computers Dependable? A New Data Collection System for Classroom-Based Observations

    ERIC Educational Resources Information Center

    Adiguzel, Tufan; Vannest, Kimberly J.; Parker, Richard I.

    2009-01-01

    Very little research exists on the dependability of handheld computers used in public school classrooms. This study addresses four dependability criteria--reliability, maintainability, availability, and safety--to evaluate a data collection tool on a handheld computer. Data were collected from five sources: (1) time-use estimations by 19 special…

  8. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.

  9. Care 3, Phase 1, volume 1

    NASA Technical Reports Server (NTRS)

    Stiffler, J. J.; Bryant, L. A.; Guccione, L.

    1979-01-01

    A computer program to aid in assessing the reliability of fault tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics, and subsequent isolation and recovery delay statistics.
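
    In the spirit of the "simple mathematical expression" mentioned above (though not CARE III itself), the sketch below evaluates the reliability of one active unit backed by a cold standby spare with imperfect fault coverage c, and cross-checks the closed form against a Monte Carlo simulation. The failure rate, coverage, and mission time are illustrative.

    ```python
    # Minimal sketch: closed-form reliability of a covered cold-standby pair,
    # R(t) = exp(-lam*t) * (1 + c*lam*t), verified by simulation.
    import math, random

    lam, c, t = 1e-3, 0.98, 100.0        # failure rate, coverage, mission time

    analytic = math.exp(-lam * t) * (1.0 + c * lam * t)

    random.seed(1)
    n, survive = 200_000, 0
    for _ in range(n):
        t1 = random.expovariate(lam)      # primary failure time
        if t1 >= t:
            survive += 1
        elif random.random() < c:         # failure detected, spare switched in
            if t1 + random.expovariate(lam) >= t:
                survive += 1

    print(f"analytic {analytic:.5f}  simulated {survive / n:.5f}")
    ```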

  10. Ultra-fast relaxation, decoherence, and localization of photoexcited states in π-conjugated polymers

    NASA Astrophysics Data System (ADS)

    Mannouch, Jonathan R.; Barford, William; Al-Assam, Sarah

    2018-01-01

    The exciton relaxation dynamics of photoexcited electronic states in poly(p-phenylenevinylene) are theoretically investigated within a coarse-grained model, in which both the exciton and nuclear degrees of freedom are treated quantum mechanically. The Frenkel-Holstein Hamiltonian is used to describe the strong exciton-phonon coupling present in the system, while external damping of the internal nuclear degrees of freedom is accounted for by a Lindblad master equation. Numerically, the dynamics are computed using the time evolving block decimation and quantum jump trajectory techniques. The values of the model parameters physically relevant to polymer systems naturally lead to a separation of time scales, with the ultra-fast dynamics corresponding to energy transfer from the exciton to the internal phonon modes (i.e., the C-C bond oscillations), while the longer time dynamics correspond to damping of these phonon modes by the external dissipation. Associated with these time scales, we investigate the following processes that are indicative of the system relaxing onto the emissive chromophores of the polymer: (1) Exciton-polaron formation occurs on an ultra-fast time scale, with the associated exciton-phonon correlations present within half a vibrational time period of the C-C bond oscillations. (2) Exciton decoherence is driven by the decay in the vibrational overlaps associated with exciton-polaron formation, occurring on the same time scale. (3) Exciton density localization is driven by the external dissipation, arising from "wavefunction collapse" occurring as a result of the system-environment interactions. Finally, we show how fluorescence anisotropy measurements can be used to investigate the exciton decoherence process during the relaxation dynamics.

  11. Specification and testing for power by wire aircraft

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Kenney, Barbara H.

    1993-01-01

    A power by wire aircraft is one in which all active functions other than propulsion are implemented electrically. Other nomenclature includes 'all electric airplane' and 'more electric airplane.' What is involved is the task of developing and certifying electrical equipment to replace existing hydraulics and pneumatics. When such functions, however, are primary flight controls which are implemented electrically, new requirements are imposed that were not anticipated by existing power system designs. Standards of particular impact are the requirements of ultra-high reliability, high peak transient bi-directional power flow, and immunity to electromagnetic interference and lightning. Not only must the electromagnetic immunity of the total system be verifiable, but box level tests and meaningful system models must be established to allow system evaluation. This paper discusses some of the problems, the system modifications involved, and early results in establishing wiring harness and interface susceptibility requirements.

  12. Chiral magnetic conductivity and surface states of Weyl semimetals in topological insulator ultra-thin film multilayer.

    PubMed

    Owerre, S A

    2016-06-15

    We investigate an ultra-thin film of topological insulator (TI) multilayer as a model for a three-dimensional (3D) Weyl semimetal. We introduce tunneling parameters t S, [Formula: see text], and t D, where the former two parameters couple layers of the same thin film at small and large momenta, and the latter parameter couples neighbouring thin film layers along the z-direction. The Chern number is computed in each topological phase of the system and we find that for [Formula: see text], the tunneling parameter [Formula: see text] changes from positive to negative as the system transits from Weyl semi-metallic phase to insulating phases. We further study the chiral magnetic effect (CME) of the system in the presence of a time dependent magnetic field. We compute the low-temperature dependence of the chiral magnetic conductivity and show that it captures three distinct phases of the system separated by plateaus. Furthermore, we propose and study a 3D lattice model of Porphyrin thin film, an organic material known to support topological Frenkel exciton edge states. We show that this model exhibits a 3D Weyl semi-metallic phase and also supports a 2D Weyl semi-metallic phase. We further show that this model recovers that of 3D Weyl semimetal in topological insulator thin film multilayer. Thus, paving the way for simulating a 3D Weyl semimetal in topological insulator thin film multilayer. We obtain the surface states (Fermi arcs) in the 3D model and the chiral edge states in the 2D model and analyze their topological properties.

  13. Multi-objective optimization of GENIE Earth system models.

    PubMed

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  14. Noise-constrained switching times for heteroclinic computing

    NASA Astrophysics Data System (ADS)

    Neves, Fabio Schittler; Voit, Maximilian; Timme, Marc

    2017-03-01

    Heteroclinic computing offers a novel paradigm for universal computation by collective system dynamics. In such a paradigm, input signals are encoded as complex periodic orbits approaching specific sequences of saddle states. Without inputs, the relevant states together with the heteroclinic connections between them form a network of states—the heteroclinic network. Systems of pulse-coupled oscillators or spiking neurons naturally exhibit such heteroclinic networks of saddles, thereby providing a substrate for general analog computations. Several challenges need to be resolved before it becomes possible to effectively realize heteroclinic computing in hardware. The time scales on which computations are performed crucially depend on the switching times between saddles, which in turn are jointly controlled by the system's intrinsic dynamics and the level of external and measurement noise. The nonlinear dynamics of pulse-coupled systems often strongly deviate from that of time-continuously coupled (e.g., phase-coupled) systems. The factors impacting switching times in pulse-coupled systems are still not well understood. Here we systematically investigate switching times in dependence of the levels of noise and intrinsic dissipation in the system. We specifically reveal how local responses to pulses coact with external noise. Our findings confirm that, like in time-continuous phase-coupled systems, piecewise-continuous pulse-coupled systems exhibit switching times that transiently increase exponentially with the number of switches up to some order of magnitude set by the noise level. Complementarily, we show that switching times may constitute a good predictor for the computation reliability, indicating how often an input signal must be reiterated. By characterizing switching times between two saddles in conjunction with the reliability of a computation, our results provide a first step beyond the coding of input signal identities toward a complementary coding for the intensity of those signals. The results offer insights on how future heteroclinic computing systems may operate under natural, and thus noisy, conditions.

  15. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.

  16. Hardware based redundant multi-threading inside a GPU for improved reliability

    DOEpatents

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
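
    The patent targets GPU hardware, but the redundant-execution idea can be illustrated with a small host-side software analogy: run the same computation in multiple worker threads on identical inputs and accept the result only if all outputs agree. The function name and workload below are hypothetical.

    ```python
    # Minimal sketch: duplicate execution of a computation and output comparison
    # before the result is accepted.
    from concurrent.futures import ThreadPoolExecutor

    def kernel(data):
        # Stand-in for the computation whose output must be verified.
        return sum(x * x for x in data)

    def verified_run(data, copies=2):
        with ThreadPoolExecutor(max_workers=copies) as pool:
            results = list(pool.map(kernel, [data] * copies))
        if len(set(results)) != 1:
            raise RuntimeError(f"redundant outputs disagree: {results}")
        return results[0]

    print(verified_run(list(range(1000))))
    ```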

  17. Quantifying Digital Ulcers in Systemic Sclerosis: Reliability of Computer-Assisted Planimetry in Measuring Lesion Size.

    PubMed

    Simpson, V; Hughes, M; Wilkinson, J; Herrick, A L; Dinsdale, G

    2018-03-01

    Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm² and 17.6 ± 19.3 mm², respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. © 2017, American College of Rheumatology.
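
    The two area measurements described above reduce to simple geometry once the photograph is calibrated: an ellipse ROI from its axis lengths and a freehand ROI from its traced outline via the shoelace formula. The sketch below shows both; the coordinates are in millimetres and are illustrative, not patient data.

    ```python
    # Minimal sketch: ellipse-ROI and freehand-(polygon)-ROI area computation.
    import math

    def ellipse_area(major_mm, minor_mm):
        return math.pi * (major_mm / 2.0) * (minor_mm / 2.0)

    def freehand_area(points_mm):
        # Shoelace formula over the closed polygon of traced points.
        area = 0.0
        for (x1, y1), (x2, y2) in zip(points_mm, points_mm[1:] + points_mm[:1]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    print(ellipse_area(6.0, 4.0))                                   # ~18.85 mm^2
    print(freehand_area([(0, 0), (5, 0), (6, 3), (3, 5), (0, 4)]))  # 24.0 mm^2
    ```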

  18. Tse computers. [ultrahigh speed optical processing for two dimensional binary image

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III

    1977-01-01

    An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.

  19. Formal Techniques for Synchronized Fault-Tolerant Systems

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Butler, Ricky W.

    1992-01-01

    We present the formal verification of synchronizing aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization is based on an extended state machine model incorporating snapshots of local processors' clocks.

  20. Artificial Experts: The Computer as Diagnostician Has Definite Limits.

    ERIC Educational Resources Information Center

    Pournelle, Jerry

    1984-01-01

    Argues that, although expert systems--which are supposed to give users all the advantages of consulting with human experts--can be useful for medical diagnosis, where tests tend to be reliable, they can be hazardous in such areas as psychological testing, where test reliability is difficult to measure. (MBR)

  1. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  2. Synthesizing cognition in neuromorphic electronic systems

    PubMed Central

    Neftci, Emre; Binas, Jonathan; Rutishauser, Ueli; Chicca, Elisabetta; Indiveri, Giacomo; Douglas, Rodney J.

    2013-01-01

    The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina. PMID:23878215

  3. On the reliability of computed chaotic solutions of non-linear differential equations

    NASA Astrophysics Data System (ADS)

    Liao, Shijun

    2009-08-01

    A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtable when t > Tc. This provides us with a strategy to detect reliable solutions from a given computed result. In this way, the computational phenomena, such as computational chaos (CC), computational periodicity (CP) and computational prediction uncertainty, which are mainly based on long-term properties of computed time-series, can be completely avoided. Using this concept, the famous conclusion `accurate long-term prediction of chaos is impossible' should be replaced by a more precise conclusion that `accurate prediction of chaos beyond the critical predictable time Tc is impossible'. So, this concept also provides us with a timescale to determine whether or not a particular time is long enough for a given non-linear dynamic system. Besides, the influence of data inaccuracy and various numerical schemes on the critical predictable time is investigated in detail by using symbolic computation software as a tool. A reliable chaotic solution of Lorenz equation in a rather large interval 0 <= t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and the computed data at each time step, which is mathematically necessary to get such a reliable chaotic solution in such a long time, is so high that it is physically impossible due to the Heisenberg uncertainty principle in quantum physics. This, however, provides us with a so-called `precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even the macroscopical phenomena might be essentially stochastic and thus could be described by probability more economically.
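
    The idea of a critical predictable time can be illustrated (in a much cruder way than the paper's high-precision scheme) by integrating the Lorenz system twice from initial conditions differing by a tiny perturbation and recording the time at which the two trajectories separate beyond a tolerance. The perturbation size, tolerance, and integration settings below are illustrative assumptions.

    ```python
    # Minimal sketch: estimate a predictability horizon for the Lorenz system
    # by comparing two nearby trajectories.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, y, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, yy, z = y
        return [sigma * (yy - x), x * (rho - z) - yy, x * yy - beta * z]

    t_end = 50.0
    t_eval = np.linspace(0.0, t_end, 5001)
    y0 = np.array([1.0, 1.0, 1.0])
    sol_a = solve_ivp(lorenz, (0, t_end), y0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
    sol_b = solve_ivp(lorenz, (0, t_end), y0 + 1e-10, t_eval=t_eval, rtol=1e-10, atol=1e-12)

    sep = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
    tc_index = np.argmax(sep > 1.0)     # first time the separation exceeds 1.0
    print("estimated critical predictable time Tc ~", t_eval[tc_index])
    ```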

  4. Rollover risk prediction of heavy vehicles by reliability index and empirical modelling

    NASA Astrophysics Data System (ADS)

    Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles

    2018-03-01

    This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering into a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models. This is done by using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
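
    A minimal sketch of the probabilistic measure described above is given below: it estimates the probability that the LTR exceeds its critical value for uncertain entry speed and curve radius, using a crude quasi-static LTR surrogate in place of the paper's vehicle-dynamics and SVM models. All parameter values and distributions are illustrative assumptions.

    ```python
    # Minimal sketch: Monte Carlo estimate of P(LTR > 1) and the associated
    # reliability index for a quasi-static load-transfer-ratio surrogate.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    n = 100_000

    v = rng.normal(16.0, 1.5, n)        # entry speed, m/s (assumed)
    radius = rng.normal(60.0, 5.0, n)   # curve radius, m (assumed)
    h, track, g = 1.8, 2.0, 9.81        # CoG height, track width, gravity (assumed)

    # Quasi-static surrogate: LTR ~ 2*h*a_lat / (track*g), with a_lat = v^2 / R.
    ltr = 2.0 * h * (v ** 2 / radius) / (track * g)

    p_rollover = np.mean(ltr > 1.0)      # probability the LTR limit is exceeded
    beta = norm.ppf(1.0 - p_rollover)    # corresponding reliability index
    print(f"P(rollover) = {p_rollover:.4f}, reliability index beta = {beta:.2f}")
    ```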

  5. Night Sky Weather Monitoring System Using Fish-Eye CCD

    NASA Astrophysics Data System (ADS)

    Tomida, Takayuki; Saito, Yasunori; Nakamura, Ryo; Yamazaki, Katsuya

    Telescope Array (TA) is an international joint experiment observing ultra-high energy cosmic rays. TA employs the fluorescence detection technique to observe cosmic rays. In this technique, the presence of clouds significantly affects data quality. Therefore, cloud monitoring provides important information. We are developing two new methods for evaluating night sky weather with pictures taken by a charge-coupled device (CCD) camera. One evaluates the amount of cloud from pixel brightness. The other counts the number of stars using a contour detection technique. The results of the two methods show a clear correlation, and we conclude that both analyses are reasonable methods for weather monitoring. We discuss the reliability of the star counting method.

  6. Battlefield awareness computers: the engine of battlefield digitization

    NASA Astrophysics Data System (ADS)

    Ho, Jackson; Chamseddine, Ahmad

    1997-06-01

    To modernize the army for the 21st century, the U.S. Army Digitization Office (ADO) initiated in 1995 the Force XXI Battle Command Brigade-and-Below (FBCB2) Applique program which became a centerpiece in the U.S. Army's master plan to win future information wars. The Applique team led by TRW fielded a 'tactical Internet' for Brigade and below command to demonstrate the advantages of 'shared situation awareness' and battlefield digitization in advanced war-fighting experiments (AWE) to be conducted in March 1997 at the Army's National Training Center in California. Computing Devices is designated the primary hardware developer for the militarized version of the battlefield awareness computers. The first generation of militarized battlefield awareness computer, designated as the V3 computer, was an integration of off-the-shelf components developed to meet the aggressive delivery requirements of the Task Force XXI AWE. The design efficiency and cost effectiveness of the computer hardware were secondary in importance to delivery deadlines imposed by the March 1997 AWE. However, declining defense budgets will impose cost constraints on the Force XXI production hardware that can only be met by rigorous value engineering to further improve design optimization for battlefield awareness without compromising the level of reliability the military has come to expect in modern military hardened vetronics. To answer the Army's needs for a more cost effective computing solution, Computing Devices developed a second generation 'combat ready' battlefield awareness computer, designated the V3+, which is designed specifically to meet the upcoming demands of Force XXI (FBCB2) and beyond. The primary design objective is to achieve a technologically superior design, value engineered to strike an optimal balance between reliability, life cycle cost, and procurement cost. Recognizing that the diverse digitization demands of Force XXI cannot be adequately met by any one computer hardware solution, Computing Devices is planning to develop a notebook sized military computer designed for space limited vehicle-mounted applications, as well as a high-performance portable workstation equipped with a 19-inch, full color, ultra-high-resolution, high-brightness active matrix liquid crystal display (AMLCD) targeting command post and tactical operations center (TOC) applications. Together with the wearable computers Computing Devices developed at the Minneapolis facility for dismounted soldiers, Computing Devices will have a complete suite of interoperable battlefield awareness computers spanning the entire spectrum of battle digitization operating environments. Although this paper's primary focus is on a second generation 'combat ready' battlefield awareness computer or the V3+, this paper also briefly discusses the extension of the V3+ architecture to address the needs of the embedded and command post applications.

  7. Nodal failure index approach to groundwater remediation design

    USGS Publications Warehouse

    Lee, J.; Reeves, H.W.; Dowding, C.H.

    2008-01-01

    Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. © 2008 ASCE.
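
    As described above, the nodal failure index is simply a count of model nodes whose probability of success falls below the design requirement. The sketch below computes it for a made-up grid of per-node success probabilities; the grid size, probabilities, and target are illustrative.

    ```python
    # Minimal sketch: nodal failure index (NFI) over a grid of per-node
    # probabilities of meeting the remediation target.
    import numpy as np

    rng = np.random.default_rng(7)
    p_success = rng.uniform(0.80, 1.00, size=(20, 30))   # one value per grid node
    requirement = 0.95                                     # design reliability target

    nfi = int(np.sum(p_success < requirement))
    print(f"NFI = {nfi} of {p_success.size} nodes below the {requirement:.0%} target")
    ```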

  8. Mask fabrication and its applications to extreme ultra-violet diffractive optics

    NASA Astrophysics Data System (ADS)

    Cheng, Yang-Chun

    Short-wavelength radiation with a wavelength around 13 nm (Extreme Ultra-Violet, EUV) is being considered for patterning microcircuits and other electronic chips with dimensions in the nanometer range. Interferometric Lithography (IL) uses two beams of radiation to form high-resolution interference fringes, as small as half the wavelength of the radiation used. As a preliminary step toward manufacturing technology, IL can be used to study the imaging properties of materials in a wide spectral range and at nanoscale dimensions. A simple implementation of IL uses two transmission diffraction gratings to form the interference pattern. More complex interference patterns can be created by using different types of transmission gratings. In this thesis, I describe the development of an EUV lithography system that uses diffractive optical elements (DOEs), from simple gratings to holographic structures. The exposure system is set up on an EUV undulator beamline at the Synchrotron Radiation Center, in the Center for NanoTechnology clean room. The setup of the EUV exposure system is relatively simple, while the design and fabrication of the DOE "mask" is complex, and relies on advanced nanofabrication techniques. The EUV interferometric lithography provides reliable EUV exposures of line/space patterns and is ideal for the development of EUV resist technology. In this thesis I explore the fabrication of these DOEs for the EUV range, and discuss the processes I have developed for the fabrication of ultra-thin membranes. In addition, I discuss EUV holographic lithography and generalized Talbot imaging techniques to extend the capability of our EUV-IL system to pattern arbitrary shapes, using more coherent sources than the undulator. In a series of experiments, we have demonstrated the use of a soft X-ray (EUV) laser as an effective source for EUV lithography. EUV-IL, as implemented at CNTech, is being used by several companies and research organizations to characterize photoresist materials.

  9. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  10. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked, computer cluster and ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as the Internet2, and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that is advancing scientific research and education in the U.S. and globally, and helping train the next-generation workforce.

  11. Look-ahead Dynamic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation using parallel computing, and outputs the transient response of the power system in real time.

  12. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
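
    The Bayesian-updating step behind a tool of this kind can be illustrated with a conjugate Gamma prior on a constant failure rate, updated with observed failures over accumulated operating time and then used to re-estimate component reliability. This is a generic sketch, not DATMAN's code; the prior, data, and mission time are assumed values.

    ```python
    # Minimal sketch: Gamma-prior Bayesian update of a failure rate and the
    # resulting posterior-predictive reliability for a mission time.
    from scipy import stats

    a0, b0 = 2.0, 20_000.0            # Gamma prior: shape and rate (exposure hours)
    failures, hours = 3, 45_000.0     # new data: 3 failures in 45,000 operating hours

    a1, b1 = a0 + failures, b0 + hours    # conjugate Gamma posterior
    lam_mean = a1 / b1                     # posterior mean failure rate (per hour)

    t_mission = 1_000.0
    # Posterior-predictive reliability: E[exp(-lam*t)] for lam ~ Gamma(a1, rate=b1).
    reliability = (b1 / (b1 + t_mission)) ** a1

    print(f"posterior mean failure rate = {lam_mean:.2e} /h")
    print(f"R({t_mission:.0f} h) = {reliability:.4f}")
    print("95% credible interval on lam:",
          stats.gamma.interval(0.95, a1, scale=1.0 / b1))
    ```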

  13. Nontarget analysis of polar contaminants in freshwater sediments influenced by pharmaceutical industry using ultra-high-pressure liquid chromatography-quadrupole time-of-flight mass spectrometry.

    PubMed

    Terzic, Senka; Ahel, Marijan

    2011-02-01

    A comprehensive analytical procedure for a reliable identification of nontarget polar contaminants in aquatic sediments was developed, based on the application of ultra-high-pressure liquid chromatography (UHPLC) coupled to hybrid quadrupole time-of-flight mass spectrometry (QTOFMS). The procedure was applied for the analysis of freshwater sediment that was highly impacted by wastewater discharges from the pharmaceutical industry. A number of different contaminants were successfully identified owing to the high mass accuracy of the QTOFMS system, used in combination with high chromatographic resolution of UHPLC. The major compounds, identified in investigated sediment, included a series of polypropylene glycols (n=3-16), alkylbenzene sulfonate and benzalkonium surfactants as well as a number of various pharmaceuticals (chlorthalidone, warfarin, terbinafine, torsemide, zolpidem and macrolide antibiotics). The particular advantage of the applied technique is its capability to detect less known pharmaceutical intermediates and/or transformation products, which have not been previously reported in freshwater sediments. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Flexible Mixed-Potential-Type (MPT) NO₂ Sensor Based on An Ultra-Thin Ceramic Film.

    PubMed

    You, Rui; Jing, Gaoshan; Yu, Hongyan; Cui, Tianhong

    2017-07-29

    A novel flexible mixed-potential-type (MPT) sensor was designed and fabricated for NO₂ detection from 0 to 500 ppm at 200 °C. An ultra-thin Y₂O₃-doped ZrO₂ (YSZ) ceramic film 20 µm thick was sandwiched between a heating electrode and reference/sensing electrodes. The heating electrode was fabricated by a conventional lift-off process, while the porous reference and sensing electrodes were fabricated by a two-step patterning method using shadow masks. The sensor achieves a sensitivity of 58.4 mV/decade at the working temperature of 200 °C, a detection limit of 26.7 ppm, and a response time of less than 10 s at 200 ppm. Additionally, the flexible MPT sensor demonstrates superior mechanical stability after bending over 50 times due to the mechanical stability of the YSZ ceramic film. This simply structured but highly reliable flexible MPT NO₂ sensor may lead to wide application in the automobile industry for vehicle emission systems to reduce NO₂ emissions and improve fuel efficiency.

  15. Flexible Mixed-Potential-Type (MPT) NO2 Sensor Based on An Ultra-Thin Ceramic Film

    PubMed Central

    You, Rui; Jing, Gaoshan; Yu, Hongyan; Cui, Tianhong

    2017-01-01

    A novel flexible mixed-potential-type (MPT) sensor was designed and fabricated for NO2 detection from 0 to 500 ppm at 200 °C. An ultra-thin Y2O3-doped ZrO2 (YSZ) ceramic film 20 µm thick was sandwiched between a heating electrode and reference/sensing electrodes. The heating electrode was fabricated by a conventional lift-off process, while the porous reference and sensing electrodes were fabricated by a two-step patterning method using shadow masks. The sensor achieves a sensitivity of 58.4 mV/decade at the working temperature of 200 °C, a detection limit of 26.7 ppm, and a response time of less than 10 s at 200 ppm. Additionally, the flexible MPT sensor demonstrates superior mechanical stability after bending over 50 times due to the mechanical stability of the YSZ ceramic film. This simply structured but highly reliable flexible MPT NO2 sensor may lead to wide application in the automobile industry for vehicle emission systems to reduce NO2 emissions and improve fuel efficiency. PMID:28758933

  16. Suitability of ultra-high performance liquid chromatography for the determination of fat-soluble nutritional status (vitamins A, E, D, and individual carotenoids).

    PubMed

    Granado-Lorencio, F; Herrero-Barbudo, C; Blanco-Navarro, I; Pérez-Sacristán, B

    2010-06-01

    Our aim was to assess the suitability of ultra-high performance liquid chromatography (UHPLC) for the simultaneous determination of biomarkers of vitamins A (retinol, retinyl esters), E (alpha- and gamma-tocopherol), D (25-OH-vitamin D), and the major carotenoids in human serum to be used in clinical practice. UHPLC analysis was performed on an HSS T3 column (2.1 × 100 mm; 1.8 µm) using gradient elution and UV-VIS detection. The system allows the simultaneous determination of retinol, retinyl palmitate, 25-OH-vitamin D, alpha- and gamma-tocopherol, lutein plus zeaxanthin, alpha-carotene, beta-carotene, alpha- and beta-cryptoxanthin and lycopene. The method showed good linearity over the physiological range with adequate accuracy in samples from quality control programs. Suitability of the method in clinical practice was tested by analyzing samples (n = 286) from patients. In conclusion, UHPLC constitutes a reliable approach for nutrient/biomarker profiling allowing the rapid, simultaneous and low-cost determination of vitamins A, E, and D (including vitamers and ester forms) and the major carotenoids in clinical practice.

  17. Analytical interference of 4-hydroxy-3-methoxymethamphetamine with the measurement of plasma free normetanephrine by ultra-high pressure liquid chromatography-tandem mass spectrometry.

    PubMed

    Dunand, Marielle; Donzelli, Massimiliano; Rickli, Anna; Hysek, Cédric M; Liechti, Matthias E; Grouzmann, Eric

    2014-08-01

    The diagnosis of pheochromocytoma relies on the plasma free metanephrines assay, whose reliability has been considerably improved by ultra-high pressure liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Here we report an analytical interference occurring between 4-hydroxy-3-methoxymethamphetamine (HMMA), a metabolite of 3,4-methylenedioxymethamphetamine (MDMA, "Ecstasy"), and normetanephrine (NMN), since they share a common pharmacophore resulting in the same product ion after fragmentation. Synthetic HMMA was spiked into plasma samples containing various concentrations of NMN, and the intensity of the interference was determined by UPLC-MS/MS before and after improvement of the analytical method. Using a careful adjustment of chromatographic conditions, including a change of the UPLC analytical column, we were able to distinguish both compounds. HMMA interference in NMN determination should be seriously considered since MDMA activates the sympathetic nervous system and HMMA, if confounded with NMN, may lead to false-positive tests when performing a differential diagnosis of pheochromocytoma. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  18. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE PAGES

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...

    2017-08-17

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control-action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
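
    The core of a linear state estimator, as used above, is a weighted least-squares solve over linear measurements. The sketch below shows that step on a tiny made-up system (not the IEEE 118-bus model); the state vector, measurement matrix, and noise levels are hypothetical.

    ```python
    # Minimal sketch: weighted least-squares linear state estimation,
    # x_hat = (H^T W H)^{-1} H^T W z for measurements z = H x + e.
    import numpy as np

    rng = np.random.default_rng(3)
    x_true = np.array([1.02, 0.98, 1.01, 0.10])      # hypothetical state vector

    H = rng.normal(size=(8, 4))                      # linear measurement model
    sigma = np.full(8, 0.01)                          # measurement std deviations
    z = H @ x_true + rng.normal(0.0, sigma)          # noisy measurements

    W = np.diag(1.0 / sigma ** 2)                     # weights = inverse variances
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

    print("estimate:", np.round(x_hat, 4))
    print("estimation error norm:", np.linalg.norm(x_hat - x_true))
    ```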

  19. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control-action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed was used to validate the cyber-resiliency of the developed RAS against computational node failure.

  20. The New Xpert MTB/RIF Ultra: Improving Detection of Mycobacterium tuberculosis and Resistance to Rifampin in an Assay Suitable for Point-of-Care Testing

    PubMed Central

    Simmons, Ann Marie; Rowneki, Mazhgan; Parmar, Heta; Cao, Yuan; Ryan, Jamie; Banada, Padmapriya P.; Deshpande, Srinidhi; Shenai, Shubhada; Gall, Alexander; Glass, Jennifer; Krieswirth, Barry; Schumacher, Samuel G.; Nabeta, Pamela; Tukvadze, Nestani; Rodrigues, Camilla; Skrahina, Alena; Tagliani, Elisa; Cirillo, Daniela M.; Davidow, Amy; Denkinger, Claudia M.; Persing, David; Kwiatkowski, Robert; Jones, Martin

    2017-01-01

    The Xpert MTB/RIF assay (Xpert) is a rapid test for tuberculosis (TB) and rifampin resistance (RIF-R) suitable for point-of-care testing. However, it has decreased sensitivity in smear-negative sputum, and false identification of RIF-R occasionally occurs. We developed the Xpert MTB/RIF Ultra assay (Ultra) to improve performance. Ultra and Xpert limits of detection (LOD), dynamic ranges, and RIF-R rpoB mutation detection were tested on Mycobacterium tuberculosis DNA or sputum samples spiked with known numbers of M. tuberculosis H37Rv or Mycobacterium bovis BCG CFU. Frozen and prospectively collected clinical samples from patients suspected of having TB, with and without culture-confirmed TB, were also tested. For M. tuberculosis H37Rv, the LOD was 15.6 CFU/ml of sputum for Ultra versus 112.6 CFU/ml of sputum for Xpert, and for M. bovis BCG, it was 143.4 CFU/ml of sputum for Ultra versus 344 CFU/ml of sputum for Xpert. Ultra resulted in no false-positive RIF-R specimens, while Xpert resulted in two false-positive RIF-R specimens. All RIF-R-associated M. tuberculosis rpoB mutations tested were identified by Ultra. Testing on clinical sputum samples, Ultra versus Xpert, resulted in an overall sensitivity of 87.5% (95% confidence interval [CI], 82.1, 91.7) versus 81.0% (95% CI, 74.9, 86.2) and a sensitivity on sputum smear-negative samples of 78.9% (95% CI, 70.0, 86.1) versus 66.1% (95% CI, 56.4, 74.9). Both tests had a specificity of 98.7% (95% CI, 93.0, 100), and both had comparable accuracies for detection of RIF-R in these samples. Ultra should significantly improve TB detection, especially in patients with paucibacillary disease, and may provide more-reliable RIF-R detection. PMID:28851844
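
    As a side note on the statistics quoted above, a sensitivity and its 95% confidence interval can be recovered from raw counts with a Wilson interval; the counts in the sketch below are made up purely to illustrate the calculation and are not taken from the study.

    ```python
    from math import sqrt

    def sensitivity_ci(true_pos, false_neg, z=1.96):
        """Sensitivity with a Wilson 95% confidence interval."""
        n = true_pos + false_neg
        p = true_pos / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
        return p, centre - half, centre + half

    # Made-up counts: 175 culture-positive samples detected out of 200.
    print(sensitivity_ci(true_pos=175, false_neg=25))
    ```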

  1. Space robotics--DLR's telerobotic concepts, lightweight arms and articulated hands.

    PubMed

    Hirzinger, G; Brunner, B; Landzettel, K; Sporer, N; Butterfass, J; Schedl, M

    2003-01-01

    The paper briefly outlines DLR's experience with real space robot missions (ROTEX and ETS VII). It then discusses forthcoming projects, e.g., free-flying systems in low or geostationary orbit and robot systems around the International Space Station (ISS), where the telerobotic system MARCO might represent a common baseline. Finally, it describes our efforts in developing a new generation of "mechatronic" ultra-lightweight arms with multifingered hands. The third arm generation is operable now (approaching present-day technical limits). In a similar way, DLR's four-fingered Hand II was a big step toward higher reliability and better performance. Artificial robonauts for space are now a central goal for Europe as well as for NASA, and the first verification tests of DLR's joint components are expected to fly on the space station as early as the end of '93.

  2. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run a variety of operating systems as needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  3. Analysis of gas membrane ultra-high purification of small quantities of mono-isotopic silane

    DOE PAGES

    de Almeida, Valmor F.; Hart, Kevin J.

    2017-01-03

    A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation, for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method and to guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. In addition, analytic solutions are invaluable for verifying numerical solutions obtained from computer-aided methods. Hence, in this paper we provide new analytic solutions for the proposed purification loops. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are ethylene, diborane, and ethane (in that order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level at reasonable time and expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. In this study, we suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of limited permeability data found in the open literature. We also provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using the suggested membrane materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Finally, a future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural-silicon semiconductor processes.
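
    For orientation only, the solution-diffusion flux estimate below shows the kind of quantity such an analysis starts from; the permeabilities, selectivity, thickness, area, and partial pressures are assumed placeholders, since (as the authors note) reliable silane permeability data are scarce.

    ```python
    BARRER = 3.35e-16        # mol.m/(m^2.s.Pa), unit conversion for permeability

    # All values below are assumptions for illustration, not data from the paper.
    P_ch4  = 1.0 * BARRER    # assumed methane permeability of the membrane
    P_sih4 = 0.1 * BARRER    # assumed silane permeability (10:1 selectivity)
    thickness = 1.0e-6       # m, assumed selective-layer thickness
    area      = 0.5          # m^2, assumed membrane area
    dp_ch4    = 1.0e3        # Pa, partial-pressure driving force for dilute CH4
    dp_sih4   = 1.0e5        # Pa, partial-pressure driving force for SiH4

    flux_ch4  = P_ch4  * area * dp_ch4  / thickness   # mol/s of impurity removed
    flux_sih4 = P_sih4 * area * dp_sih4 / thickness   # mol/s of silane lost
    print(f"CH4 removal: {flux_ch4:.2e} mol/s, SiH4 loss: {flux_sih4:.2e} mol/s")
    ```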

  4. Analysis of gas membrane ultra-high purification of small quantities of mono-isotopic silane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F.; Hart, Kevin J.

    A small quantity of high-value, crude, mono-isotopic silane is a prospective gas for a small-scale, high-recovery, ultra-high membrane purification process. This is an unusual application of gas membrane separation, for which we provide a comprehensive analysis of a simple purification model. The goal is to develop direct analytic expressions for estimating the feasibility and efficiency of the method and to guide process design; this is only possible for binary mixtures of silane in the dilute limit, which is a somewhat realistic case. In addition, analytic solutions are invaluable for verifying numerical solutions obtained from computer-aided methods. Hence, in this paper we provide new analytic solutions for the proposed purification loops. Among the common impurities in crude silane, methane poses a special membrane separation challenge since it is chemically similar to silane. Other potentially problematic compounds are ethylene, diborane, and ethane (in that order). Nevertheless, we demonstrate, theoretically, that a carefully designed membrane system may be able to purify mono-isotopic, crude silane to electronics-grade level at reasonable time and expense. We advocate a combination of membrane materials that preferentially reject heavy impurities based on mobility selectivity and light impurities based on solubility selectivity. We provide estimates for the purification of significant contaminants of interest. In this study, we suggest cellulose acetate and polydimethylsiloxane as examples of membrane materials on the basis of limited permeability data found in the open literature. We also provide estimates of the membrane area needed and the priming volume of the cell enclosure for fabrication purposes when using the suggested membrane materials. These estimates are largely theoretical in view of the absence of reliable experimental data for the permeability of silane. Finally, a future extension of this work to the non-dilute limit may apply to the recovery of silane from rejected streams of natural-silicon semiconductor processes.

  5. Chemical profiling of Qixue Shuangbu Tincture by ultra-performance liquid chromatography with electrospray ionization quadrupole-time-of-flight high-definition mass spectrometry (UPLC-QTOF/MS).

    PubMed

    Chen, Lin-Wei; Wang, Qin; Qin, Kun-Ming; Wang, Xiao-Li; Wang, Bin; Chen, Dan-Ni; Cai, Bao-Chang; Cai, Ting

    2016-02-01

    The present study was designed to develop and validate a sensitive and reliable ultra-high performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-QTOF/MS) method to separate and identify the chemical constituents of Qixue Shuangbu Tincture (QXSBT), a classic traditional Chinese medicine (TCM) prescription. Under the optimized UPLC and QTOF/MS conditions, 56 components in QXSBT, including chalcones, triterpenoids, protopanaxatriol, flavones, and flavanones, were identified and tentatively characterized within a running time of 42 min. The components were identified by comparing retention times, accurate masses, and characteristic mass spectrometric fragment ions, and by matching empirical molecular formulae with those of published compounds. In conclusion, the established UPLC-QTOF/MS method is reliable for rapid identification of complicated components in TCM prescriptions. Copyright © 2016 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.

  6. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
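
    As a minimal sketch of what triplex-to-duplex-to-simplex reconfiguration involves (not the ARCS design; the agreement threshold and channel values below are invented), a voter can be written as:

    ```python
    TOLERANCE = 0.05   # assumed channel agreement threshold (illustrative)

    def vote(outputs):
        """Vote a list of healthy channel outputs.
        Returns (selected_output, index_of_disagreeing_channel_or_None)."""
        if len(outputs) >= 3:                              # triplex operation
            a, b, c = outputs[:3]
            if abs(a - b) <= TOLERANCE and abs(a - c) > TOLERANCE and abs(b - c) > TOLERANCE:
                return (a + b) / 2.0, 2
            if abs(a - c) <= TOLERANCE and abs(a - b) > TOLERANCE and abs(b - c) > TOLERANCE:
                return (a + c) / 2.0, 1
            if abs(b - c) <= TOLERANCE and abs(a - b) > TOLERANCE and abs(a - c) > TOLERANCE:
                return (b + c) / 2.0, 0
            return sorted(outputs[:3])[1], None            # all agree: mid-value select
        if len(outputs) == 2:                              # duplex: detect, cannot isolate
            a, b = outputs
            return (a + b) / 2.0, (None if abs(a - b) <= TOLERANCE else -1)
        return outputs[0], None                            # simplex: pass through

    channels = [0.50, 0.51, 0.95]                          # channel 2 has drifted
    out, bad = vote(channels)
    if bad is not None and bad >= 0:
        del channels[bad]                                  # triplex -> duplex reconfiguration
    print(out, channels)
    ```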

  7. Use of Soft Computing Technologies for a Qualitative and Reliable Engine Control System for Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)

    2001-01-01

    The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principal goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks), some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC), which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for designing, developing, implementing, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall approach to engine controller development are further improved and vehicle safety is further ensured. The final product that this paper proposes is an approach to developing an alternative low-cost engine controller capable of performing in unique vision spacecraft vehicles requiring low-cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.

  8. Conductive bridging random access memory—materials, devices and applications

    NASA Astrophysics Data System (ADS)

    Kozicki, Michael N.; Barnaby, Hugh J.

    2016-11-01

    We present a review and primer on the subject of conductive bridging random access memory (CBRAM), a metal ion-based resistive switching technology, in the context of current research and the near-term requirements of the electronics industry in ultra-low energy devices and new computing paradigms. We include extensive discussions of the materials involved, the underlying physics and electrochemistry, the critical roles of ion transport and electrode reactions in conducting filament formation and device switching, and the electrical characteristics of the devices. Two general cation material systems are described, a fast-ion chalcogenide electrolyte and a lower-ion-mobility oxide conductor, and numerical examples are offered to enhance understanding of the operation of devices based on these. The effect of device conditioning on the activation energy for ion transport and the consequent switching speed is discussed, as well as the mechanisms involved in the removal of the conducting bridge. The morphology of the filament and how this could be influenced by the solid electrolyte structure is described, and the electrical characteristics of filaments with atomic-scale constrictions are discussed. Consideration is also given to the thermal and mechanical environments within the devices. Finite element and compact modelling illustrations are given and aspects of CBRAM storage elements in memory circuits and arrays are included. Considerable emphasis is placed on the effects of ionizing radiation on CBRAM since this is important in various high reliability applications, and the potential uses of the devices in reconfigurable logic and neuromorphic systems are also discussed.

  9. Developing Antimatter Containment Technology: Modeling Charged Particle Oscillations in a Penning-Malmberg Trap

    NASA Technical Reports Server (NTRS)

    Chakrabarti, S.; Martin, J. J.; Pearson, J. B.; Lewis, R. A.

    2003-01-01

    The NASA MSFC Propulsion Research Center (PRC) is conducting a research activity examining the storage of low-energy antiprotons. The High Performance Antiproton Trap (HiPAT) is an electromagnetic system (Penning-Malmberg design) consisting of a 4-tesla superconducting magnet, a high-voltage confinement electrode system, and an ultra-high vacuum test section, designed with the ultimate goal of maintaining charged particles with a half-life of 18 days. Currently, this system is being experimentally evaluated using normal-matter ions, which are cheap to produce, relatively easy to handle, and provide a good indication of overall trap behavior, with the exception of assessing annihilation losses. Computational particle-in-cell plasma modeling using the XOOPIC code is supplementing the experiments. Differing electrode voltage configurations are employed to contain charged particles, typically using flat, modified flat, and harmonic potential wells. Ion cloud oscillation frequencies are obtained experimentally by amplification of signals induced on the electrodes by the particle motions. XOOPIC simulations show that for given electrode voltage configurations, the calculated charged particle oscillation frequencies are close to experimental measurements. As a two-dimensional axisymmetric code, XOOPIC cannot model azimuthal plasma variations, such as those induced by radio-frequency (RF) modulation of the central quadrupole electrode in experiments designed to enhance ion cloud containment. However, XOOPIC can model analytically varying electric potential boundary conditions and particle velocity initial conditions. Application of these conditions produces ion cloud axial and radial oscillation frequency modes of interest in achieving the goal of optimizing HiPAT for reliable containment of antiprotons.
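
    The ideal Penning-trap eigenfrequency formulas give a feel for the oscillation modes measured here; the 4 T field is taken from the abstract, but the well depth and electrode dimensions below are assumptions, so the numbers are illustrative rather than HiPAT values.

    ```python
    import numpy as np

    # Ideal Penning-trap eigenfrequencies for a single stored (anti)proton.
    q  = 1.602176634e-19        # C, elementary charge
    m  = 1.67262192e-27         # kg, (anti)proton mass
    B  = 4.0                    # T, solenoid field (from the abstract)
    V0 = 100.0                  # V, assumed harmonic well depth
    z0 = 0.05                   # m, assumed axial half-length of the well
    r0 = 0.02                   # m, assumed electrode radius
    d2 = 0.5 * (z0**2 + 0.5 * r0**2)                   # trap dimension squared

    w_z = np.sqrt(q * V0 / (m * d2))                   # axial angular frequency
    w_c = q * B / m                                    # free-space cyclotron frequency
    w_p = 0.5 * (w_c + np.sqrt(w_c**2 - 2 * w_z**2))   # modified cyclotron
    w_m = 0.5 * (w_c - np.sqrt(w_c**2 - 2 * w_z**2))   # magnetron

    for name, w in [("axial", w_z), ("modified cyclotron", w_p), ("magnetron", w_m)]:
        print(f"{name:>18s}: {w / (2 * np.pi) / 1e6:10.4f} MHz")
    ```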

  10. Animal models for treatment of unresectable liver tumours: a histopathologic and ultra-structural study of cellular toxic changes after electrochemical treatment in rat and dog liver.

    PubMed

    von Euler, Henrik; Olsson, Jerker M; Hultenby, Kjell; Thörne, Anders; Lagerstedt, Anne-Sofie

    2003-04-01

    Electrochemical treatment (EChT) has been taken under serious consideration as one of several techniques for local treatment of malignancies. The advantages of EChT are the minimally invasive approach and the absence of serious side effects. Macroscopic, histopathological, and ultra-structural findings in liver following a four-electrode configuration (dog) and a two-electrode EChT design (dog and rat) were studied. Thirty female Sprague-Dawley rats and four female beagle dogs were treated with EChT using platinum:iridium electrodes, and the delivered dose was 5, 10, or 90 C (A·s). After EChT, the animals were euthanized. The distribution of the lesions was predictable, irrespective of dose and electrode configuration. Destruction volumes were found to fit a logarithmic dose-response curve. Histopathological examination confirmed a spherical (rat) and cylindrical/ellipsoidal (dog) lesion. The type of necrosis differed with electrode polarity. Ultra-structural analysis showed distinct features of cell damage depending on the distance from the electrode. Histopathological and ultra-structural examination demonstrated that the liver tissue close to the border of the lesion displayed a normal morphology. The in vivo dose-planning model is reliable, even in species with larger tissue mass such as dogs. A multi-electrode EChT design can produce predictable lesions. The cellular toxicity following EChT is clearly identified and varies with the distance from the electrode and its polarity. The distinct border between the lesion and normal tissue suggests that EChT in a clinical setting for the treatment of liver tumours can give a reliable destruction margin.

  11. Characterization of Ultra-fine Grained and Nanocrystalline Materials Using Transmission Kikuchi Diffraction

    PubMed Central

    Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine

    2017-01-01

    One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow, which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental setup and describing the different steps necessary to perform a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in terms of the materials that can be characterized. PMID:28447998

  12. Characterization of Ultra-fine Grained and Nanocrystalline Materials Using Transmission Kikuchi Diffraction.

    PubMed

    Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine

    2017-04-01

    One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow, which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental setup and describing the different steps necessary to perform a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in terms of the materials that can be characterized.

  13. Ultra-high-throughput microarray generation and liquid dispensing using multiple disposable piezoelectric ejectors.

    PubMed

    Hsieh, Huangpin Ben; Fitch, John; White, Dave; Torres, Frank; Roy, Joy; Matusiak, Robert; Krivacic, Bob; Kowalski, Bob; Bruce, Richard; Elrod, Scott

    2004-03-01

    The authors have constructed an array of 12 piezoelectric ejectors for printing biological materials. A single-ejector footprint is 8 mm in diameter, standing 4 mm high, with 2 reservoirs totaling 76 μL. These ejectors have been tested by dispensing various fluids under several environmental conditions. Reliable drop ejection can be expected in both humidity-controlled and ambient environments over extended periods of time and at both hot and cold room temperatures. In a prototype system, 12 ejectors are arranged in a rack, together with an X - Y stage, to allow printing any pattern desired. Printed arrays of features are created with a biological solution containing bovine serum albumin conjugated oligonucleotides, dye, and salty buffer. This ejector system is designed for the ultra-high-throughput generation of arrays on a variety of surfaces. These single or racked ejectors could be used as long-term storage vessels for materials such as small molecules, nucleic acids, proteins, or cell libraries, which would allow for efficient preprogrammed selection of individual clones and greatly reduce the chance of cross-contamination and loss due to transfer. A new generation of design ideas includes plastic injection-molded ejectors that are inexpensive and disposable and handheld personal pipettes for liquid transfer in the nanoliter regime.

  14. Promising Results from Three NASA SBIR Solar Array Technology Development Programs

    NASA Technical Reports Server (NTRS)

    Eskenazi, Mike; White, Steve; Spence, Brian; Douglas, Mark; Glick, Mike; Pavlick, Ariel; Murphy, David; O'Neill, Mark; McDanal, A. J.; Piszczor, Michael

    2005-01-01

    Results from three NASA SBIR solar array technology programs are presented. The programs discussed are: 1) Thin Film Photovoltaic UltraFlex Solar Array; 2) Low Cost/Mass Electrostatically Clean Solar Array (ESCA); and 3) Stretched Lens Array SquareRigger (SLASR). The purpose of the Thin Film UltraFlex (TFUF) Program is to mature and validate the use of advanced flexible thin film photovoltaic blankets as the electrical subsystem element within an UltraFlex solar array structural system. In this program operational prototype flexible array segments, using United Solar amorphous silicon cells, are being manufactured and tested for the flight-qualified UltraFlex structure. In addition, large (e.g. 10 kW GEO) TFUF wing systems are being designed and analyzed. Thermal cycle and electrical test and analysis results from the TFUF program are presented. The purpose of the second program, Low Cost/Mass Electrostatically Clean Solar Array (ESCA) System, is to develop an Electrostatically Clean Solar Array meeting NASA's design requirements and to ready this technology for commercialization and use on the NASA MMS and GED missions. The ESCA designs developed use flight-proven materials and processes to create an ESCA system that yields low cost, low mass, high reliability, and high power density, and is adaptable to any cell type and coverglass thickness. All program objectives, which included developing specifications, creating ESCA concepts, concept analysis and trade studies, producing detailed designs of the most promising ESCA treatments, manufacturing ESCA demonstration panels, and LEO (2,000 cycles) and GEO (1,350 cycles) thermal cycle testing of the down-selected designs, were successfully achieved. The purpose of the third program, "High Power Platform for the Stretched Lens Array," is to develop an extremely lightweight, high-efficiency, high-power, high-voltage, and low-stowed-volume solar array suitable for very high power (multi-kW to MW) applications. These objectives are achieved by combining two cutting-edge technologies, the SquareRigger solar array structure and the Stretched Lens Array (SLA). The SLA SquareRigger solar array is termed SLASR. All program objectives, which included developing specifications, creating preliminary designs for a near-term SLASR, detailed structural, mass, power, and sizing analyses, and fabrication and power testing of a functional flight-like SLASR solar blanket, were successfully achieved.

  15. HOME - An application of fault-tolerant techniques and system self-testing. [independent computer for helicopter flight control command monitoring

    NASA Technical Reports Server (NTRS)

    Holden, D. G.

    1975-01-01

    Hard Over Monitoring Equipment (HOME) has been designed to complement and enhance the flight safety of a flight research helicopter. HOME is an independent, highly reliable, and fail-safe special-purpose computer that monitors the flight control commands issued by the helicopter's flight control computer. In particular, HOME detects the issuance of a hazardous hard-over command on any of the four flight control axes and transfers control of the helicopter to the flight safety pilot. The design of HOME incorporates reliability and fail-safe enhancement features such as triple modular redundancy, majority logic voting, fail-safe dual circuits, independent status monitors, in-flight self-test, and a built-in preflight exerciser. The HOME design and operation are described, with special emphasis on the reliability and fail-safe aspects of the design.
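
    A hard-over monitor of this general kind can be sketched as a persistence check on command magnitude and rate; the limits, sample period, and command trace below are invented and are not the HOME design values.

    ```python
    LIMIT      = 0.8    # assumed fraction of full actuator authority
    RATE_LIMIT = 2.0    # assumed max command rate (full-scale units per second)
    PERSIST    = 3      # consecutive samples before declaring a hard-over
    DT         = 0.02   # s, assumed monitor sample period

    def monitor(commands):
        """Return the sample index at which a hard-over is declared, or None."""
        count, prev = 0, commands[0]
        for k, c in enumerate(commands):
            rate = abs(c - prev) / DT
            count = count + 1 if (abs(c) > LIMIT or rate > RATE_LIMIT) else 0
            prev = c
            if count >= PERSIST:
                return k          # transfer control to the safety pilot here
        return None

    cmds = [0.1, 0.12, 0.95, 0.97, 0.99, 1.0]    # runaway pitch command (illustrative)
    print("hard-over declared at sample:", monitor(cmds))
    ```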

  16. Development and clinical application of a computer-aided real-time feedback system for detecting in-bed physical activities.

    PubMed

    Lu, Liang-Hsuan; Chiang, Shang-Lin; Wei, Shun-Hwa; Lin, Chueh-Ho; Sung, Wen-Hsu

    2017-08-01

    Being bedridden long-term can cause deterioration in patients' physiological function and performance, limiting daily activities and increasing the incidence of falls and other accidental injuries. Little research has been carried out in designing effective detection systems to monitor the posture and status of bedridden patients and to provide accurate real-time feedback on posture. The purposes of this research were to develop a computer-aided system for real-time detection of physical activities in bed and to validate the system's validity and test-retest reliability in determining eight postures: motion leftward/rightward, turning over leftward/rightward, getting up leftward/rightward, and getting off the bed leftward/rightward. The in-bed physical activity detecting system consists mainly of a clinical sickbed, a signal amplifier, a data acquisition (DAQ) system, and operating software for computing and determining postural changes associated with four load-cell sensing components. Thirty healthy subjects (15 males and 15 females, mean age = 27.8 ± 5.3 years) participated in the study. All subjects were asked to execute eight in-bed activities in a random order and to participate in an evaluation of the test-retest reliability of the results 14 days later. Spearman's rank correlation coefficient was used to compare the system's determinations of postural states with researchers' recordings of postural changes. The test-retest reliability of the system's ability to determine postures was analyzed using the intraclass correlation coefficient ICC(3,1). The system was found to exhibit high validity and accuracy (r = 0.928, p < 0.001; accuracy rate: 87.9%) in determining in-bed displacement, turning over, sitting up, and getting off the bed. The system was particularly accurate in detecting motion rightward (90%), turning over leftward (83%), sitting up leftward or rightward (87-93%), and getting off the bed (100%). The test-retest reliability ICC(3,1) value was 0.968 (p < 0.001). The system developed in this study exhibits satisfactory validity and reliability in detecting changes in in-bed body postures and can assist caregivers and clinical nursing staff in detecting the in-bed physical activities of bedridden patients and in developing fall prevention warning systems. Copyright © 2017 Elsevier B.V. All rights reserved.
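
    The paper's classification logic is not given in the abstract; the sketch below only illustrates the general idea of inferring posture from four corner load cells, with an invented sensor layout and thresholds.

    ```python
    # Minimal sketch (not the paper's algorithm): total load and left-right load
    # distribution from four corner load cells, followed by simple heuristics.

    def classify(fl, fr, rl, rr, weight_kgf):
        """fl/fr/rl/rr: front-left/front-right/rear-left/rear-right loads (kgf)."""
        total = fl + fr + rl + rr
        if total < 0.1 * weight_kgf:
            return "off the bed"
        lateral = ((fr + rr) - (fl + rl)) / total   # -1 = fully left, +1 = fully right
        head_end = (fl + fr) / total                # fraction of load at the head end
        side = "rightward" if lateral > 0 else "leftward"
        if total < 0.6 * weight_kgf:
            return f"getting off the bed {side}"
        if head_end > 0.6:
            return f"sitting up {side}"
        if abs(lateral) > 0.35:
            return f"turning over {side}"
        return f"motion {side}"

    print(classify(fl=10, fr=12, rl=14, rr=30, weight_kgf=70))
    ```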

  17. 1D-VAR Retrieval Using Superchannels

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen

    2008-01-01

    Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-Var retrieval system. We describe a physical inversion algorithm that includes all available channels for the atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into super channels. These super channels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both super-channel properties and Jacobians in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in an inversion process. Results will be shown applying the algorithm to real IASI and NAST data.
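
    The projection onto pre-calculated eigenvectors amounts to an EOF (principal-component) compression; the sketch below uses random stand-in spectra and an assumed rank of 100, which reproduces the roughly 85:1 compression quoted for IASI's 8461 channels.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_train, n_eof = 8461, 500, 100     # IASI channel count; rank assumed

    training = rng.normal(size=(n_channels, n_train))      # stand-in training radiances
    mean = training.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(training - mean, full_matrices=False)
    E = U[:, :n_eof]                                 # pre-calculated eigenvectors

    radiance = rng.normal(size=(n_channels,))        # one measured spectrum
    super_channels = E.T @ (radiance - mean[:, 0])   # compression: ~85x fewer values
    reconstructed = E @ super_channels + mean[:, 0]  # what the forward model works with
    print(super_channels.shape, np.linalg.norm(radiance - reconstructed))
    ```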

  18. Ultra-low frequency vibration data acquisition concerns in operating flight simulators. [Motion sickness inducing vibrations in flight simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Hoy, B.W.

    1988-01-01

    The measurement of ultra-low frequency vibration (0.01 to 1.0 Hz) in motion-based flight simulators was undertaken to quantify the energy and frequencies of motion present during operation. Methods of measurement, the selection of transducers, recorders, and analyzers, the development of a test plan, and the types of analysis are discussed. Analysis of the data using a high-speed minicomputer and a comparison of the computer analysis with standard FFT analysis are also discussed. Measurement of simulator motion with the pilot included as part of the control dynamics had not been done up to this time. The data are being used to evaluate the effect of low frequency energy on the vestibular system of the air crew and the incidence of simulator-induced sickness. 11 figs.

  19. Multigigabit optical transceivers for high-data rate military applications

    NASA Astrophysics Data System (ADS)

    Catanzaro, Brian E.; Kuznia, Charlie

    2012-01-01

    Avionics has experienced an ever-increasing demand for processing power and communication bandwidth. Currently deployed avionics systems require gigabit communication using opto-electronic transceivers connected with parallel optical fiber. Ultra Communications has developed a series of transceiver solutions combining ASIC technology with flip-chip bonding and advanced opto-mechanical molded optics. Ultra Communications' custom high-speed ASIC chips are developed using an SoS (silicon-on-sapphire) process. These circuits are flip-chip bonded with sources (VCSEL arrays) and detectors (PIN diodes) to create an Opto-Electronic Integrated Circuit (OEIC). These have been combined with micro-optics assemblies to create transceivers with interfaces to standard fiber array (MT) cabling technology. We present an overview of the demands for transceivers in military applications and how new-generation transceivers leverage both previous-generation military optical transceivers and commercial high-performance computing optical transceivers.

  20. Kron-Branin modelling of ultra-short pulsed signal microelectrode

    NASA Astrophysics Data System (ADS)

    Xu, Zhifei; Ravelo, Blaise; Liu, Yang; Zhao, Lu; Delaroche, Fabien; Vurpillot, Francois

    2018-06-01

    An uncommon circuit model of a microelectrode for ultra-short signal propagation is developed. The proposed model is based on the Tensorial Analysis of Networks (TAN) using the Kron-Branin (KB) formalism. The systemic graph topology equivalent to the structure under consideration is established by taking the branch currents as the unknown variables. The TAN mathematical solution is determined after identification of the KB characteristic matrix. The TAN can integrate various physical parameters of the structure. As a proof of concept, via-hole-ended microelectrodes implemented on a Kapton substrate were designed, fabricated, and tested. The 0.1-MHz-to-6-GHz S-parameter KB model, simulation, and measurement are in good agreement. In addition, time-domain analyses with nanosecond-duration pulse signals were carried out to predict the microelectrode signal integrity. The modelled microstrip electrode is typically integrated in atom probe tomography instruments. The proposed KB method is particularly beneficial with respect to computation speed and adaptability to various structures.

  1. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  2. Digital Plasma Control System for Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Ferrara, M.; Wolfe, S.; Stillerman, J.; Fredian, T.; Hutchinson, I.

    2004-11-01

    A digital plasma control system (DPCS) has been designed to replace the present C-Mod system, which is based on a hybrid analog-digital computer. The initial implementation of DPCS comprises two 64-channel, 16-bit, low-latency cPCI digitizers, each with 16 analog outputs, controlled by a rack-mounted single-processor Linux server, which also serves as the compute engine. A prototype system employing three older 32-channel digitizers was tested during the 2003-04 campaign. The hybrid's linear PID feedback system was emulated by IDL code executing a synchronous loop, using the same target waveforms and control parameters. Reliable real-time operation was accomplished under a standard Linux OS (RH9) by locking memory and disabling interrupts during the plasma pulse. The DPCS-computed outputs agreed to within a few percent with those produced by the hybrid system, except for discrepancies due to offsets and non-ideal behavior of the hybrid circuitry. The system operated reliably, with no sample loss, at more than twice the 10 kHz design specification, providing extra time for implementing more advanced control algorithms. The code is fault-tolerant and produces consistent output waveforms even with 10% sample loss.
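
    The emulated control law is a conventional discrete PID loop; the sketch below shows the structure with placeholder gains and a toy first-order plant (only the 10 kHz loop rate is taken from the abstract).

    ```python
    KP, KI, KD = 2.0, 10.0, 0.01     # assumed gains, not C-Mod values
    DT = 1.0 / 10000.0               # 10 kHz design loop rate from the abstract

    def pid_step(error, state):
        integral, prev_error = state
        integral += error * DT
        derivative = (error - prev_error) / DT
        return KP * error + KI * integral + KD * derivative, (integral, error)

    y, state = 0.0, (0.0, 0.0)       # toy first-order plant standing in for coils/plasma
    for _ in range(50000):           # 5 s of simulated pulse time
        u, state = pid_step(1.0 - y, state)   # constant target waveform of 1.0
        y += DT * (u - y)            # placeholder plant dynamics
    print(f"plant output after 5 s: {y:.3f}")
    ```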

  3. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
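
    An example of the kind of Markov reliability model ARM is meant to formulate automatically: a dual-redundant processor with imperfect switchover coverage, written out by hand below with invented rates and solved numerically (this is an illustration, not a model from the paper).

    ```python
    import numpy as np
    from scipy.linalg import expm

    lam = 1.0e-4    # per-hour failure rate of one processor (assumed)
    c   = 0.95      # probability a first failure is detected and covered (assumed)

    # States: 0 = both good, 1 = one good (degraded), 2 = system failed (absorbing)
    Q = np.array([
        [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
        [0.0,      -lam,          lam              ],
        [0.0,       0.0,          0.0              ],
    ])

    p0 = np.array([1.0, 0.0, 0.0])
    for t in (10.0, 100.0, 1000.0):                 # mission times in hours
        p = p0 @ expm(Q * t)                        # state probabilities at time t
        print(f"t = {t:6.0f} h   reliability = {1.0 - p[2]:.6f}")
    ```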

  4. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment that enables highly parallel complex systems to be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  5. Soviet Cybernetics Review, Vol. 3, No. 9, September 1969.

    ERIC Educational Resources Information Center

    Holland, Wade B., Ed.

    The issue features articles and photographs of computers displayed at the Automation-69 Exhibition in Moscow, especially the Mir-1 and Ruta-110. Also discussed are the Doza analog computer for radiological dosage; 'on-the-fly' output printers; other ways to increase computer speed and productivity; and the planned ultra-high-energy 1000-Bev…

  6. Requirements and approach for a space tourism launch system

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.; Lindley, Charles A.

    2003-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240/pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50/pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed.

  7. Ultra-Scale Computing for Emergency Evacuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng

    2010-01-01

    Emergency evacuations are carried out in anticipation of a disaster such as hurricane landfall or flooding, and in response to a disaster that strikes without a warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulations demand computational capacity beyond the desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.

  8. "Reliability Of Fiber Optic Lans"

    NASA Astrophysics Data System (ADS)

    Coden, Michael; Scholl, Frederick; Hatfield, W. Bryan

    1987-02-01

    Fiber optic Local Area Network Systems are being used to interconnect increasing numbers of nodes. These nodes may include office computer peripherals and terminals, PBX switches, process control equipment and sensors, automated machine tools and robots, and military telemetry and communications equipment. The extensive shared base of capital resources in each system requires that the fiber optic LAN meet stringent reliability and maintainability requirements. These requirements are met by proper system design and by suitable manufacturing and quality procedures at all levels of a vertically integrated manufacturing operation. We will describe the reliability and maintainability of Codenoll's passive star based systems. These include LAN systems compatible with Ethernet (IEEE 802.3) and MAP (IEEE 802.4), and software compatible with IBM Token Ring (IEEE 802.5). No single point of failure exists in this system architecture.

  9. Modeling Creep-Fatigue-Environment Interactions in Steam Turbine Rotor Materials for Advanced Ultra-supercritical Coal Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Chen

    2014-04-01

    The goal of this project is to model creep-fatigue-environment interactions in steam turbine rotor materials, such as Alloy 282, for advanced ultra-supercritical (A-USC) coal power plants; to develop and demonstrate computational algorithms for alloy property predictions; and to determine and model key mechanisms that contribute to the damage caused by creep-fatigue-environment interactions.

  10. Robust adaptive control for a hybrid solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Snyder, Steven

    2011-12-01

    Solid oxide fuel cells (SOFCs) are electrochemical energy conversion devices. They offer a number of advantages over most other fuel cells due to their high operating temperature (800-1000°C), such as internal reforming, useful byproduct heat, and faster reaction kinetics without precious metal catalysts. Mitigating fuel starvation and improving the load-following capability of SOFC systems are conflicting control objectives. However, this conflict can be resolved by hybridizing the system with an energy storage device, such as an ultra-capacitor. In this thesis, a steady-state property of the SOFC is combined with an input-shaping method in order to address the issue of fuel starvation. Simultaneously, an overall adaptive system control strategy is employed to manage the energy sharing between the elements as well as to maintain the state of charge of the energy storage device. The adaptive control method is robust to errors in the fuel cell's fuel supply system and guarantees that the fuel cell current and ultra-capacitor state of charge approach their target values and remain uniformly ultimately bounded about these target values. Parameter saturation is employed to guarantee boundedness of the parameters. The controller is validated through hardware-in-the-loop experiments as well as computer simulations.

  11. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.

  12. Product Definition Data (PDD) Current Environment Report

    DOT National Transportation Integrated Search

    1989-05-01

    The objective of the Air Force Computer-aided Acquisition and Logistics Support (CALS) Program is to improve weapon system reliability, supportability and maintainability, and to reduce the cost of weapon system acquisition and logistics support. As ...

  13. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
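
    One common way to quantify reliability growth from software-failure data is to fit a non-homogeneous Poisson process model such as Goel-Okumoto; the sketch below does this on made-up failure counts and is not tied to any RICIS data set.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit m(t) = a * (1 - exp(-b t)) to cumulative failure counts (made-up data).
    t = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100.0])   # test hours
    m = np.array([ 8, 14, 19, 23, 26, 28, 30, 31, 32,  33.0])   # cumulative failures

    def goel_okumoto(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    (a, b), _ = curve_fit(goel_okumoto, t, m, p0=(40.0, 0.02))
    remaining = a - m[-1]                       # expected failures still latent
    intensity = a * b * np.exp(-b * t[-1])      # current failure intensity (per hour)
    print(f"a = {a:.1f}, b = {b:.4f}, ~{remaining:.1f} failures remaining, "
          f"intensity {intensity:.3f}/h")
    ```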

  14. Implantable electronics: emerging design issues and an ultra light-weight security solution.

    PubMed

    Narasimhan, Seetharam; Wang, Xinmu; Bhunia, Swarup

    2010-01-01

    Implantable systems that monitor biological signals require increasingly complex digital signal processing (DSP) electronics for real-time in-situ analysis and compression of the recorded signals. While it is well known that such signal processing hardware needs to be implemented under tight area and power constraints, new design requirements emerge as its complexity increases. The use of nanoscale technology shows tremendous benefits in implementing these advanced circuits, due to dramatic improvements in integration density and power dissipation per operation. However, it also brings new challenges such as reliability and large idle power (due to higher leakage current). Besides, programmability of the device as well as security of the recorded information are rapidly becoming major design considerations for such systems. In this paper, we analyze the emerging issues associated with the design of the DSP unit in an implantable system. Next, we propose a novel ultra light-weight solution to address the information security issue. Unlike conventional information security approaches such as data encryption, which come at large area and power overhead and hence are not amenable to resource-constrained implantable systems, we propose a multilevel key-based scrambling algorithm, which exploits the nature of the biological signal to effectively obfuscate it. Analysis of the proposed algorithm in the context of neural signal processing and its hardware implementation shows that we can achieve a high level of security with ∼13X lower power and ∼5X lower area overhead than conventional cryptographic solutions.
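
    The paper's multilevel key-based scrambling algorithm is not reproduced in the abstract; the sketch below only illustrates the general idea of keyed, permutation-based obfuscation of a sampled biosignal, which is far cheaper than full encryption.

    ```python
    import numpy as np

    def scramble(samples, key, block=64):
        rng = np.random.default_rng(key)            # key seeds the permutation stream
        out = samples.copy()
        for start in range(0, len(out) - block + 1, block):
            perm = rng.permutation(block)
            out[start:start + block] = out[start:start + block][perm]
        return out

    def descramble(scrambled, key, block=64):
        rng = np.random.default_rng(key)            # same key regenerates the permutations
        out = scrambled.copy()
        for start in range(0, len(out) - block + 1, block):
            inv = np.argsort(rng.permutation(block))
            out[start:start + block] = out[start:start + block][inv]
        return out

    signal = np.sin(np.linspace(0, 20 * np.pi, 256)).astype(np.float32)
    key = 0xC0FFEE
    assert np.allclose(descramble(scramble(signal, key), key), signal)
    ```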

  15. Nanofiber Anisotropic Conductive Films (ACF) for Ultra-Fine-Pitch Chip-on-Glass (COG) Interconnections

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Hoon; Kim, Tae-Wan; Suk, Kyung-Lim; Paik, Kyung-Wook

    2015-11-01

    Nanofiber anisotropic conductive films (ACF) were invented, by adapting nanofiber technology to ACF materials, to overcome the limitations of ultra-fine-pitch interconnection packaging, i.e. shorts and open circuits as a result of the narrow space between bumps and electrodes. For nanofiber ACF, poly(vinylidene fluoride) (PVDF) and poly(butylene succinate) (PBS) polymers were used as nanofiber polymer materials. For PVDF and PBS nanofiber ACF, conductive particles of diameter 3.5 μm were incorporated into nanofibers by electrospinning. In ultra-fine-pitch chip-on-glass assembly, insulation was significantly improved by using nanofiber ACF, because nanofibers inside the ACF suppressed the mobility of conductive particles, preventing them from flowing out during the bonding process. Capture of conductive particles was increased from 31% (conventional ACF) to 65%, and stable electrical properties and reliability were achieved by use of nanofiber ACF.

  16. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  17. Melt-infiltrated SiC Composites for Gas Turbine Engine Applications

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.; Pujar, Vijay V.

    2004-01-01

    SiC/SiC ceramic matrix composites (CMCs) manufactured by the slurry-cast melt-infiltration (MI) process are leading candidates for many hot-section turbine engine components. A collaborative program between Goodrich Corporation and the NASA Glenn Research Center is aimed at determining and optimizing woven SiC/SiC CMC performance and reliability. A variety of composites with different fiber types, interphases, and matrix compositions have been fabricated and evaluated. A particular focus of this program is the development of interphase systems that will result in improved intermediate-temperature stressed-oxidation properties of this composite system. The effect of the different composite variations on composite properties is discussed and, where appropriate, comparisons are made to properties generated under NASA's Ultra Efficient Engine Technology (UEET) Program.

  18. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.
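
    As a generic illustration of the reconstruction step described above (and not of the THz camera system itself), the sketch below simulates projections of a simple phantom and recovers the cross-section by filtered back-projection using scikit-image; the phantom and angle set are arbitrary.

      # Generic computed-tomography reconstruction sketch (not the THz system):
      # simulate projections of a toy phantom, then reconstruct the cross-section
      # by filtered back-projection with scikit-image.
      import numpy as np
      from skimage.transform import radon, iradon

      # Toy phantom: a disc with a denser inclusion, standing in for a pill cross-section.
      size = 128
      y, x = np.mgrid[:size, :size]
      phantom = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)
      phantom += ((x - 50) ** 2 + (y - 70) ** 2 < 10 ** 2).astype(float)

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles in degrees
      sinogram = radon(phantom, theta=theta)                  # forward projections
      reconstruction = iradon(sinogram, theta=theta)          # filtered back-projection

      print("reconstruction error (RMS):",
            np.sqrt(np.mean((reconstruction - phantom) ** 2)))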

  19. Interfacing Neural Network Components and Nucleic Acids

    PubMed Central

    Lissek, Thomas

    2017-01-01

    Translating neural activity into nucleic acid modifications in a controlled manner harbors unique advantages for basic neurobiology and bioengineering. It would allow for a new generation of biological computers that store output in ultra-compact and long-lived DNA and enable the investigation of animal nervous systems at unprecedented scales. Furthermore, by exploiting the ability of DNA to precisely influence neuronal activity and structure, it could be possible to more effectively create cellular therapy approaches for psychiatric diseases that are currently difficult to treat. PMID:29255707

  20. Optimizing the G/T ratio of the DSS-13 34-meter beam-waveguide antenna

    NASA Technical Reports Server (NTRS)

    Esquivel, M. S.

    1992-01-01

    Calculations using Physical Optics computer software were done to optimize the gain-to-noise temperature (G/T) ratio of DSS-13, the DSN's 34-m beam-waveguide antenna, at X-band for operation with the ultra-low-noise amplifier maser system. A better G/T value was obtained by using a 24.2-dB far-field-gain smooth-wall dual-mode horn than by using the standard X-band 22.5-dB-gain corrugated horn.
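
    For readers unfamiliar with the figure of merit being optimized, the snippet below shows how a G/T value in dB/K is formed from antenna gain and system noise temperature. Only the 24.2-dB and 22.5-dB horn gains come from the record; the overall antenna gains and noise temperature used here are hypothetical placeholders, not DSS-13 measurements.

      # Back-of-the-envelope G/T figure-of-merit comparison. The overall antenna
      # gains and system noise temperature below are illustrative placeholders.
      import math

      def g_over_t_db(gain_dbi, t_sys_kelvin):
          """G/T in dB/K: antenna gain (dBi) minus 10*log10 of system noise temperature (K)."""
          return gain_dbi - 10.0 * math.log10(t_sys_kelvin)

      t_sys = 20.0                                  # assumed ultra-low-noise system temperature, K
      print("dual-mode horn  G/T:", round(g_over_t_db(68.3, t_sys), 2), "dB/K")  # hypothetical gain
      print("corrugated horn G/T:", round(g_over_t_db(67.9, t_sys), 2), "dB/K")  # hypothetical gain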

  1. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users" demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promises to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator"s subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  2. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method for detecting trace gases in hyperspectral and ultra-spectral data. The new method is based on the wavelet packet transform and aims to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular provide significantly increased spectral resolution while performing daily global collections of the Earth. Applying the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high-performance computing. This research pursues two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
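
    As a minimal illustration of the representation involved (not the dissertation's detection algorithm), the sketch below decomposes a single synthetic pixel spectrum into wavelet packet subspaces with PyWavelets; the wavelet, depth, and spectrum are arbitrary choices.

      # Minimal wavelet packet decomposition of one synthetic pixel spectrum.
      # Wavelet ('db4'), depth, and the toy spectrum are assumptions for illustration.
      import numpy as np
      import pywt

      # Synthetic "pixel spectrum": smooth background plus a narrow absorption feature.
      bands = np.arange(256)
      spectrum = np.exp(-((bands - 128) / 120.0) ** 2)
      spectrum -= 0.2 * np.exp(-((bands - 90) / 3.0) ** 2)   # narrow trace-gas-like feature

      wp = pywt.WaveletPacket(data=spectrum, wavelet='db4', mode='symmetric', maxlevel=4)
      leaves = wp.get_level(4, order='freq')                  # terminal subspaces, frequency-ordered

      # Energy per wavelet packet subspace; a detector could operate on these
      # features instead of the raw spectral bands.
      energies = [float(np.sum(node.data ** 2)) for node in leaves]
      print([round(e, 3) for e in energies])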

  3. Evaluation of power system security and development of transmission pricing method

    NASA Astrophysics Data System (ADS)

    Kim, Hyungchul

    The electric power utility industry is presently undergoing a change towards a deregulated environment. This has resulted in the unbundling of generation, transmission, and distribution services. The introduction of competition into unbundled electricity services may push system operation closer to its security boundaries, resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. Under this deregulated environment, security assessment and pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed: (1) probabilistic methods for power system security assessment; (2) the computation time of simulation methods; and (3) on-line security assessment for operation. A probabilistic method using Monte Carlo simulation is proposed for power system security assessment. This method takes into account the dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte Carlo simulation can reduce computation time in comparison with straight Monte Carlo simulation. A technique for security assessment employing a Bayes classifier is also proposed; this method can be useful for system operators making security decisions during on-line power system operation. This dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission-line capacity use and allocation by reliability benefits is computed using the probability of system failure.
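
    The toy sketch below conveys only the Monte Carlo sampling idea behind such probabilistic security assessment: sample random line outages and count how often a crude overload check fails. The outage rates, line limits, and flow-redistribution rule are invented; the dissertation's dynamic/static contingency analysis and Kohonen-network speed-up are not reproduced.

      # Toy Monte-Carlo security-assessment sketch with invented parameters.
      import numpy as np

      rng = np.random.default_rng(7)
      n_lines = 10
      base_flow = rng.uniform(0.4, 0.8, n_lines)    # per-unit loading of each line
      limit = np.ones(n_lines)                      # thermal limit (1.0 p.u.)
      outage_prob = 0.02                            # per-line outage probability per sample

      def insecure(flow, out):
          """Crude check: flow of outaged lines is spread evenly over the survivors."""
          survivors = ~out
          if not survivors.any():
              return True
          shifted = flow[survivors] + flow[out].sum() / survivors.sum()
          return bool((shifted > limit[survivors]).any())

      samples = 20_000
      violations = sum(
          insecure(base_flow, rng.random(n_lines) < outage_prob) for _ in range(samples)
      )
      print("estimated probability of insecure state:", violations / samples)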

  4. Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors

    PubMed Central

    Boot, Walter R.; Charness, Neil; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-01-01

    Purpose of the Study: Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. Design and Methods: To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. Results: The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. Implications: The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. PMID:24107443
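
    For reference, the snippet below shows how a Cronbach's alpha like the reported .98 is computed: the ratio of summed item variances to total-score variance, rescaled by the number of items. The response matrix here is random toy data, not the CPQ sample.

      # Cronbach's alpha on toy data (the CPQ responses are not reproduced here).
      import numpy as np

      def cronbach_alpha(scores):
          """scores: (respondents, items) array of item scores."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()    # sum of per-item variances
          total_var = scores.sum(axis=1).var(ddof=1)      # variance of the total score
          return (k / (k - 1)) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(0)
      ability = rng.normal(size=(300, 1))                  # latent proficiency of 300 respondents
      items = ability + 0.5 * rng.normal(size=(300, 12))   # 12 correlated toy items
      print(round(cronbach_alpha(items), 3))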

  5. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality, and the network provides the communication through which the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both autonomy and cooperation between nodes, are developed. The requirements for time-critical performance, reliability, and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system, e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum-performance design. Reliability of the network system is considered, and a parallel multipath bus structure is proposed to control delivery time for time-critical messages. The architecture also supports immediate recovery of the time-critical message system after a communication failure.

  6. Breast ultrasound image segmentation: an optimization approach based on super-pixels and high-level descriptors

    NASA Astrophysics Data System (ADS)

    Massich, Joan; Lemaître, Guillaume; Martí, Joan; Mériaudeau, Fabrice

    2015-04-01

    Breast cancer is the second most common cancer and the leading cause of cancer death among women. Medical imaging has become an indispensable tool for its diagnosis and follow-up. During the last decade, the medical community has promoted the incorporation of Ultra-Sound (US) screening into the standard routine. The main reason for using US imaging is its capability to differentiate benign from malignant masses, compared with other imaging techniques. The increasing use of US imaging encourages the development of Computer Aided Diagnosis (CAD) systems applied to Breast Ultra-Sound (BUS) images. However, accurate delineation of the lesions and structures of the breast is essential for CAD systems in order to extract the information needed to perform diagnosis. This article proposes a highly modular and flexible framework for segmenting lesions and tissues present in BUS images. The proposal takes advantage of optimization strategies using super-pixels and high-level descriptors, which are analogous to the visual cues used by radiologists. Qualitative and quantitative results are provided, showing performance within the range of the state of the art.
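
    The highly simplified sketch below only illustrates what a super-pixel over-segmentation is: k-means on (intensity, x, y) features in the spirit of SLIC. The article's optimization framework and high-level descriptors are not reproduced, and the image and all parameters are synthetic.

      # Simplified SLIC-like super-pixel sketch on a synthetic "ultrasound" image.
      import numpy as np

      def superpixels(img, n_segments=64, spatial_weight=0.5, iters=10, seed=0):
          h, w = img.shape
          yy, xx = np.mgrid[:h, :w]
          feats = np.stack([img.ravel(),
                            spatial_weight * yy.ravel() / h,
                            spatial_weight * xx.ravel() / w], axis=1)
          rng = np.random.default_rng(seed)
          centers = feats[rng.choice(len(feats), n_segments, replace=False)]
          for _ in range(iters):                              # plain k-means iterations
              d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              labels = d.argmin(1)
              for k in range(n_segments):
                  if (labels == k).any():
                      centers[k] = feats[labels == k].mean(0)
          return labels.reshape(h, w)

      # Toy image: dark lesion-like disc on a speckled background.
      rng = np.random.default_rng(1)
      img = 0.6 + 0.1 * rng.standard_normal((64, 64))
      yy, xx = np.mgrid[:64, :64]
      img[(yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2] -= 0.4
      print(np.unique(superpixels(img)).size, "super-pixels")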

  7. Segmental dynamics of polymers in nanoscopic confinements, as probed by simulations of polymer/layered-silicate nanocomposites.

    PubMed

    Kuppa, V; Foley, T M D; Manias, E

    2003-09-01

    In this paper we review molecular modeling investigations of polymer/layered-silicate intercalates, as model systems for exploring polymers in nanoscopically confined spaces. The atomic-scale picture, as revealed by computer simulations, is presented in the context of salient results from a wide range of experimental techniques. This approach provides insights into how polymeric segmental dynamics are affected by severe geometric constraints. Focusing on intercalated systems, i.e. polystyrene (PS) in 2 nm wide slit pores and polyethylene-oxide (PEO) in 1 nm wide slit pores, a very rich picture of the segmental dynamics is unveiled, despite the topological constraints imposed by the confining solid surfaces. On a local scale, intercalated polymers exhibit a very wide distribution of segmental relaxation times (ranging from ultra-fast to ultra-slow, over a wide range of temperatures). In both cases (PS and PEO), the segmental relaxations originate from confinement-induced local density variations. Additionally, where special interactions exist between the polymer and the confining surfaces (e.g., PEO), more molecular mechanisms are identified.

  8. L-3 Com AVISYS civil aviation self-protection system

    NASA Astrophysics Data System (ADS)

    Carey, Jim

    2006-05-01

    In early 2004, L-3 Com AVISYS Corporation (hereinafter referred to as L-3 AVISYS or AVISYS) completed a contract for the integration and deployment of an advanced Infrared Countermeasures self-protection suite for a Head of State Airbus A340 aircraft. This initial L-3 AVISYS IRCM Suite was named WIPPS (Widebody Integrated Platform Protection System). The A340 WIPPS installation provisions were FAA certified with the initial deployment of the modified aircraft. WIPPS is unique in that it utilizes a dual integrated missile warning subsystem to produce a robust, multi-spectral, ultra-low false alarm rate threat warning capability. WIPPS utilizes the Thales MWS-20 Pulsed Doppler Radar Active MWS and the EADS AN/AAR-60 Ultraviolet Passive MWS. These MWS subsystems are integrated through an L-3 AVISYS Electronic Warfare Control Set (EWCS). The EWCS also integrates the WIPPS MWS threat warning information with the A340 flight computer data to optimize ALE-47 Countermeasure Dispensing System IR decoy dispensing commands, program selection and timing. WIPPS utilizes standard and advanced IR Decoys produced by ARMTEC Defense and Alloy Surfaces. WIPPS demonstrated that when IR decoy dispensing is controlled by threat range and time-to-go information provided by an Active MWS, unsurpassed self protection levels are achievable. Recognizing the need for high volume civil aviation protection, L-3 AVISYS configured a variant of WIPPS optimized for commercial airline reliability requirements, safety requirements, supportability and most importantly, affordability. L-3 AVISYS refers to this IRCM suite as CAPS (Commercial Airliner Protection System). CAPS has been configured for applications to all civil aircraft ranging from the small Regional Jets to the largest Wide-bodies. This presentation and paper will provide an overview of the initial WIPPS IRCM Suite and the important factors that were considered in defining the CAPS configuration.

  9. The Unification of Space Qualified Integrated Circuits by Example of International Space Project GAMMA-400

    NASA Astrophysics Data System (ADS)

    Bobkov, S. G.; Serdin, O. V.; Arkhangelskiy, A. I.; Arkhangelskaja, I. V.; Suchkov, S. I.; Topchiev, N. P.

    The problem of electronic component unification at different levels (circuits, interfaces, hardware, and software) in the space industry is considered. The development of computer systems for space applications is discussed using the example of the scientific data acquisition system for the GAMMA-400 space project. The basic characteristics of the highly reliable and fault-tolerant chips developed by SRISA RAS for space-applicable computational systems are given. To reduce power consumption and enhance data reliability, the embedded system interconnect is made hierarchical: the upper level is Serial RapidIO 1x or 4x with a transfer rate of 1.25 Gbaud; the next level is SpaceWire with transfer rates up to 400 Mbaud; and the lower level comprises MIL-STD-1553B and RS232/RS485. Ethernet 10/100 serves as a technology interface and also provides connection with previously released modules. This interconnect structure allows different redundancy schemes to be created, and designers can develop heterogeneous systems that employ the peer-to-peer networking performance of Serial RapidIO, using multiprocessor clusters interconnected by SpaceWire.

  10. European Workshop on Industrial Computer Systems approach to design for safety

    NASA Technical Reports Server (NTRS)

    Zalewski, Janusz

    1992-01-01

    This paper presents guidelines on designing systems for safety, developed by the Technical Committee 7 on Reliability and Safety of the European Workshop on Industrial Computer Systems. The focus is on complementing the traditional development process by adding the following four steps: (1) overall safety analysis; (2) analysis of the functional specifications; (3) designing for safety; (4) validation of design. Quantitative assessment of safety is possible by means of a modular questionnaire covering various aspects of the major stages of system development.

  11. NAS Requirements Checklist for Job Queuing/Scheduling Software

    NASA Technical Reports Server (NTRS)

    Jones, James Patton

    1996-01-01

    The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.

  12. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled using a Markov birth-death process under the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of a mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability, and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted and practiced to enhance the performance of the urea synthesis system of the fertilizer plant.
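
    A minimal illustration of this modeling approach, for a single subsystem rather than the paper's multi-subsystem urea plant: with exponential failure rate lam and repair rate mu, the Chapman-Kolmogorov equations dP0/dt = -lam*P0 + mu*P1 and dP1/dt = lam*P0 - mu*P1 are integrated below with classical fourth-order Runge-Kutta. The rates are invented for the sketch.

      # Two-state (up/down) availability model integrated with RK4; rates are hypothetical.
      import numpy as np

      lam, mu = 0.01, 0.5          # failures/hour, repairs/hour (assumed values)

      def deriv(p):
          p0, p1 = p               # p0 = P(up), p1 = P(down)
          return np.array([-lam * p0 + mu * p1,
                            lam * p0 - mu * p1])

      def rk4_step(p, h):
          k1 = deriv(p)
          k2 = deriv(p + 0.5 * h * k1)
          k3 = deriv(p + 0.5 * h * k2)
          k4 = deriv(p + h * k3)
          return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      p = np.array([1.0, 0.0])     # system starts in the "up" state
      h = 0.1
      for _ in range(int(1000 / h)):
          p = rk4_step(p, h)

      print("long-run availability ~", round(p[0], 4),
            "(analytic:", round(mu / (lam + mu), 4), ")")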

  13. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems: Digital Equipment's OSF/1, Hewlett-Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology evaluated (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems; results included the exit status of the exception generator and the system state. Resource management techniques used by the individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  14. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates, in a hybrid fashion, elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.

  15. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process, and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming, and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems, and the proficient troubleshooting procedures that helped the CMS computing facilities and infrastructure to operate at high reliability levels.

  16. Ultra Low Energy Binary Decision Diagram Circuits Using Few Electron Transistors

    NASA Astrophysics Data System (ADS)

    Saripalli, Vinay; Narayanan, Vijay; Datta, Suman

    Novel medical applications involving embedded sensors require ultra-low energy dissipation with low-to-moderate performance (10 kHz-100 MHz), driving conventional MOSFETs into the sub-threshold operating regime. In this paper, we present an alternative ultra-low-power computing architecture using Binary Decision Diagram (BDD) based logic circuits implemented with Single Electron Transistors (SETs) operating in the Coulomb blockade regime at very low supply voltages. We evaluate the energy-performance tradeoff metrics of such BDD circuits using time-domain Monte Carlo simulations and compare them with energy-optimized CMOS logic circuits. Simulation results show that the proposed approach achieves better energy-delay characteristics than CMOS realizations.
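
    A tiny software model of the logic style involved may help: evaluating a binary decision diagram means following a single root-to-terminal path per input vector, which is the property BDD-based circuits map onto switched electron paths. This is only a logic-level sketch; the paper's SET device models and Monte Carlo energy analysis are not shown.

      # Logic-level BDD evaluation sketch (not a device-level simulation).
      class Node:
          """Decision node testing one input variable; 'lo'/'hi' are nodes or 0/1 terminals."""
          def __init__(self, var, lo, hi):
              self.var, self.lo, self.hi = var, lo, hi

      def evaluate(node, inputs):
          while isinstance(node, Node):             # follow one path to a terminal
              node = node.hi if inputs[node.var] else node.lo
          return node

      # BDD for the 3-input majority function MAJ(a, b, c), variable order a, b, c.
      c_node = Node('c', 0, 1)
      maj = Node('a',
                 Node('b', 0, c_node),              # a = 0: output is b AND c
                 Node('b', c_node, 1))              # a = 1: output is b OR c

      for a in (0, 1):
          for b in (0, 1):
              for c in (0, 1):
                  assert evaluate(maj, {'a': a, 'b': b, 'c': c}) == int(a + b + c >= 2)
      print("BDD reproduces 3-input majority")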

  17. Ultra high speed image processing techniques. [electronic packaging techniques

    NASA Technical Reports Server (NTRS)

    Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.

    1981-01-01

    Packaging techniques for ultra-high-speed image processing were developed. These techniques involve a signal feedthrough technique through LSI/VLSI sapphire substrates, which allows the stacking of LSI/VLSI circuit substrates in a 3-dimensional package with greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.

  18. Fundamental device design considerations in the development of disruptive nanoelectronics.

    PubMed

    Singh, R; Poole, J O; Poole, K F; Vaidya, S D

    2002-01-01

    In the last quarter of a century, silicon-based integrated circuits (ICs) have played a major role in the growth of the economy throughout the world. A number of new technologies, such as quantum computing, molecular computing, DNA molecules for computing, etc., are currently being explored to create a product to replace semiconductor transistor technology. We have examined all of the currently explored options and found that none of them is suitable as a replacement for silicon ICs. In this paper we provide fundamental device criteria that must be satisfied for the successful operation of a manufacturable, not-yet-invented device. The two fundamental limits are the removal of heat and reliability. The switching speed of any practical man-made computing device will be in the range of 10^-15 to 10^-3 s. Heisenberg's uncertainty principle and the computer architecture set the heat generation limit; the thermal conductivity of the materials used in the fabrication of a nanodimensional device sets the heat removal limit. In current electronic products, redundancy plays a significant part in improving the reliability of parts with macroscopic defects. In the future, microscopic and even nanoscopic defects will play a critical role in the reliability of disruptive nanoelectronics, and lattice vibrations will set the intrinsic reliability of future computing systems. The two critical limits discussed in this paper provide criteria for the selection of materials used in the fabrication of future devices. Our work shows that diamond contains the clue to providing computing devices that will surpass the performance of silicon-based nanoelectronics.
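
    A back-of-the-envelope look at the heat-generation limit mentioned above: the energy-time uncertainty relation E >= hbar / (2 * t_switch), evaluated over the quoted 10^-15 to 10^-3 s range, and an illustrative power density for an assumed device density. Only the switching-time range comes from the text; the device density and activity assumption are invented.

      # Energy-time uncertainty bound and an illustrative power-density estimate.
      import math

      HBAR = 1.054_571_817e-34        # J*s

      def min_switch_energy(t_switch):
          """Lower bound on energy per switching event from E*t >= hbar/2."""
          return HBAR / (2.0 * t_switch)

      for t in (1e-15, 1e-9, 1e-3):
          print(f"t_switch = {t:.0e} s  ->  E_min ~ {min_switch_energy(t):.2e} J")

      # Hypothetical: 1e10 devices/cm^2, all switching once per switching time.
      devices_per_cm2 = 1e10
      t = 1e-15
      print("power density ~",
            f"{devices_per_cm2 * min_switch_energy(t) / t:.2e}", "W/cm^2 (illustrative)")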

  19. State recovery and lockstep execution restart in a system with multiprocessor pairing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    System, method, and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution when one has been detected by the selective pairing facility.
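
    As a software analogy only (not the patented hardware facility), the sketch below runs a step twice from the same checkpoint, compares the results, and rolls back and retries on mismatch; the fault injection and all names are invented.

      # Software analogy of paired execution with rollback on mismatch.
      import random

      def step(state):
          """One deterministic unit of work on the checkpointed state."""
          return state + 1

      def flaky_step(state, fault_rate=0.2):
          result = step(state)
          if random.random() < fault_rate:          # injected transient fault
              result ^= 0x4                         # silent bit flip in the result
          return result

      def paired_execute(checkpoint, max_retries=5):
          for _ in range(max_retries):
              a = flaky_step(checkpoint)            # "core A"
              b = flaky_step(checkpoint)            # "core B"
              if a == b:                            # lockstep comparison passed
                  return a
              # mismatch detected: discard both results and restart from the checkpoint
          raise RuntimeError("could not obtain matching results")

      random.seed(3)
      state = 0
      for _ in range(10):
          state = paired_execute(state)
      print("final state:", state)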

  20. System life and reliability modeling for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Brikmanis, C. K.

    1986-01-01

    A computer program which simulates life and reliability of helicopter transmissions is presented. The helicopter transmissions may be composed of spiral bevel gear units and planetary gear units - alone, in series or in parallel. The spiral bevel gear units may have either single or dual input pinions, which are identical. The planetary gear units may be stepped or unstepped and the number of planet gears carried by the planet arm may be varied. The reliability analysis used in the program is based on the Weibull distribution lives of the transmission components. The computer calculates the system lives and dynamic capacities of the transmission components and the transmission. The system life is defined as the life of the component or transmission at an output torque at which the probability of survival is 90 percent. The dynamic capacity of a component or transmission is defined as the output torque which can be applied for one million output shaft cycles for a probability of survival of 90 percent. A complete summary of the life and dynamic capacity results is produced by the program.
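
    A small sketch of the Weibull machinery described above, with invented shape/scale parameters rather than the NASA program's data: the life at 90% probability of survival for each component, and a system 90%-survival life obtained by multiplying component survival probabilities under a strict series assumption.

      # Weibull component lives and a series-system 90%-survival life (toy parameters).
      import numpy as np

      def survival(t, beta, eta):
          """Two-parameter Weibull survival probability at life t."""
          return np.exp(-(t / eta) ** beta)

      def l10(beta, eta):
          """Life at which the probability of survival is 90%."""
          return eta * (-np.log(0.90)) ** (1.0 / beta)

      # Hypothetical components: (Weibull shape beta, characteristic life eta in hours)
      components = {"bevel pinion":   (1.5, 9000.0),
                    "bevel gear":     (1.5, 12000.0),
                    "planet bearing": (1.2, 7000.0)}

      for name, (beta, eta) in components.items():
          print(f"{name:15s} L10 = {l10(beta, eta):8.0f} h")

      def system_survival(t):
          """Series system: product of component survival probabilities."""
          return np.prod([survival(t, b, e) for b, e in components.values()])

      # Bisection on time to find where system survival crosses 90%.
      lo, hi = 0.0, 20000.0
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if system_survival(mid) > 0.90 else (lo, mid)
      print("system          L10 =", round(0.5 * (lo + hi)), "h")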
